 Tuesday, September 11, 2012

Last week, I demonstrated how to embed code directly into a SQL Server Reporting Service (SSRS) report.

In this article, I will explain how to reference code in an external assembly from an SSRS report. The basic steps are:

  1. Create External Code
  2. Create Unit Tests
  3. Deploy the assembly to the Report Server
  4. Add a reference to the assembly
  5. Call external functions in Expression Editor
  6. Deploy Report

Create External Code

The first step is to create and compile the external code. The project type will be a Class Library and you will add a public class with a public static method. This code can be in C#, Visual Basic, or F#.
A sample is shown below in Listing 1.

using System;

namespace ReportFunctions
{
    public class ReportLib
    {
        public static string FormatAs2Digits(decimal? input)
        {
            if (input == null)
                return "N/A";
            else
                return String.Format("{0:##,##0.00;(##,##0.00)}", input);
        }

    }
}
Listing 1

Compile this code in Release mode.

Create Unit Tests

It's a good idea to create unit tests around this code because it can be difficult to test it on the Report Server.
At a minimum, write tests that mimic how you expect to call the function within your reports.
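A minimal sketch of such tests, written with MSTest and assuming the test project references the ReportFunctions assembly from Listing 1:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using ReportFunctions;

[TestClass]
public class ReportLibTests
{
    [TestMethod]
    public void FormatAs2Digits_FormatsPositiveValue()
    {
        // Expect a thousands separator and two decimal places
        Assert.AreEqual("1,234.50", ReportLib.FormatAs2Digits(1234.5m));
    }

    [TestMethod]
    public void FormatAs2Digits_WrapsNegativeValueInParentheses()
    {
        // The format string renders negative values in parentheses
        Assert.AreEqual("(1,234.50)", ReportLib.FormatAs2Digits(-1234.5m));
    }

    [TestMethod]
    public void FormatAs2Digits_ReturnsNAForNull()
    {
        // A null input returns the "N/A" placeholder
        Assert.AreEqual("N/A", ReportLib.FormatAs2Digits(null));
    }
}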

Deploy Assembly to Report Server

In order to use the functions, you must deploy the compiled DLL to the report server. You can either create a Setup project to build an MSI package or simply copy the DLL to the following folder on the drive where SQL Server Reporting Services is installed:

\Program Files\Microsoft SQL Server\Instance_Name\Reporting Services\ReportServer\bin

where Instance_Name is the name of the instance of SQL Server on which SSRS is running.

Add a reference to the assembly

Open your Report project and open the report that will call the custom function. From the menu, select Report | Report Properties. Select the References tab (Fig. 1).


Fig. 1 – “References” tab of Report Properties

Browse to select the deployed assembly containing the code you want to call.

After adding the reference, you will need to compile the Report project before you can use the assembly functions. To compile the report, select Build | Build Solution from the menu.

Call external functions in Expression Editor

Open an expression editor and call a function in the external assembly. You will need to include the entire namespace and class name. In our example, this would be ReportFunctions.ReportLib.FormatAs2Digits, called with an expression such as =ReportFunctions.ReportLib.FormatAs2Digits(Fields!Price.Value) (the field name here is hypothetical).

An example is shown in Fig. 2.


Fig. 2 – Expression Editor

You can test that the expression works by clicking the Preview tab of the report.

Deploy Report

The final step is to deploy the report. Assuming you have permissions on the Report Server and the report server is set in the project properties, the easiest way to deploy is to right-click the report in the Solution Explorer and select Deploy.

Now you can test the report and the function on the Report Server.

Conclusion

In this article, we described how to call code in an external assembly from a SQL Server Reporting Services report.

.Net | SQL Server | SSRS
Tuesday, September 11, 2012 4:20:41 PM (Eastern Standard Time, UTC-05:00)
 Tuesday, July 24, 2012

Functional testing, system testing, and acceptance testing of software often involves manual actions by a user. To verify that an application works as designed, a user launches the application, navigates to a specific module, enters some data, and verifies the expected output.

Such testing tends to be expensive in terms of time, labor, and money. In addition, like all activities requiring human intervention, testing in this manner can often be error-prone and inconsistently executed.
Microsoft Visual Studio 2010 Premium and Ultimate contains the Code UI Testing tool that provides a way to automate these tests, so that regression testing can be performed more quickly, efficiently, and consistently.

A Coded UI Test gives developers and testers the ability to create tests that simulate user interactions with an application.
Coded UI Tests are stored in a Test Project. To create a new Test Project in Visual Studio,

  1. Select File | New | Project… The New Project dialog displays. 
  2. In the Installed Templates panel, expand the Visual C# or Visual Basic node and select the Test category.
  3. In the Project Type panel in the middle of the New Project dialog, select Test Project.
  4. Enter a name and location of the Test Project.
  5. Click the OK button. Visual Studio creates a new Test Project. You can view it in the Solution Explorer.

In our example, the project is named TestProject1. Link: Download TestProject1.

By default, a new Test Project contains a Unit Test class named UnitTest1.cs. We won’t need this class, so it is safe to delete it.

Add a Coded UI Test to the project (Project | Add Coded UI Test…) and provide a meaningful name for the map class.

The Generate Code for Coded UI Tests dialog displays. Select the Record actions, edit UI map or add assertions radio button. Click the OK button.

The Coded UI Test Builder toolbar displays in the bottom right of your screen.

Click the Record button to begin recording your actions. From this point, you can launch an application, navigate to a web site, click buttons, or enter data into forms – pretty much anything you do with your mouse and keyboard will be captured and turned into code by the recorder.

You can stop the recording by clicking either the Pause button or the Generate Code button.
When the recording is paused, you can add an assertion by holding down the left mouse button over the Add Assertion button and dragging the cursor to a control that contains a property against which you will create an assertion. A blue outline appears around any control as the cursor moves over it. Release the mouse while over the control to create an assertion about that control. An assertion declares an expected property value of the control; when the test runs, the actual property value is compared against this expected value.

Clicking the Generate Code button generates C# code that reproduces the mouse and keyboard actions that you performed while the recorder was running.

The Generate Code button creates or updates three files in your project. You can generate code and create assertions as often as you like. When you close the Coded UI Test Builder, Visual Studio generates three files that share a similar name but have different extensions: *.uitest, *.cs, and *.Designer.cs. In our TestProject1 demo, the files are named UIMap.uitest, UIMap.cs, and UIMap.Designer.cs. Because these files are related, UIMap.cs and UIMap.Designer.cs appear beneath UIMap.uitest within the project hierarchy.
UIMap.uitest is an XML file containing information about the recorded steps. Visual Studio 2010 does not ship with a graphical editor for this file; to get one, download the Visual Studio 2010 Feature Pack 2 (http://msdn.microsoft.com/en-US/vstudio/ff655021.aspx). This XML file is used to generate the C# code stored in the UIMap.Designer.cs file.
Using the Coded UI Test Editor, you can view each recorded step, change the properties of a step, and split a method into multiple methods.

UIMap.Designer.cs is overwritten every time a new recording is created, so you should not edit this file. Instead, move any method you wish to modify into the UIMap.cs file. That file is not overwritten, so you can safely store your customizations there. UIMap.cs and UIMap.Designer.cs are partial classes of the same class, so it doesn’t matter to the compiler in which file a method, property, or field is located.
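For orientation, below is a sketch of what the test class looks like after recording. The UIMap method names are hypothetical stand-ins for whatever methods the recorder generates from your actions.

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class CodedUITest1
{
    [TestMethod]
    public void CodedUITestMethod1()
    {
        // Replay the recorded user actions (hypothetical recorded method)
        this.UIMap.LaunchAppAndEnterData();

        // Verify the assertion captured with the Add Assertion button
        // (hypothetical assertion method)
        this.UIMap.AssertResultLabel();
    }

    // The generated UIMap property lazily instantiates the map class
    public UIMap UIMap
    {
        get
        {
            if (this.map == null)
            {
                this.map = new UIMap();
            }
            return this.map;
        }
    }
    private UIMap map;
}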

Tuesday, July 24, 2012 9:37:00 AM (Eastern Standard Time, UTC-05:00)
 Tuesday, June 19, 2012

If you are in or near Michigan or northwest Ohio this week, you have a rare opportunity to learn from the creator of one of the most popular Aspect-Oriented Programming (AOP) frameworks on the market. PostSharp inventor Gael Fraiteur will conduct a user group tour this week, primarily talking about AOP and using PostSharp for his examples. During the day, Gael will be stopping at area companies to educate them. AOP provides a way of adding functionality across a variety of classes and methods without cluttering those methods with a lot of extra code.

Originally from Belgium, Gael now resides in the Czech Republic, so it’s not often we get to hear him in-person.

Gael’s evening schedule is:

Date         Group                             Location        Link
Tue, Jun 19  Northwest Ohio .NET User Group    Toledo, OH      http://nwnug.com/
Wed, Jun 20  Great Lakes Area .NET User Group  Southfield, MI  http://migang.org
Thu, Jun 21  Greater Lansing .NET User Group   Okemos, MI      http://glugnet.org/

I hope you can make it to one of these nights.

Tuesday, June 19, 2012 9:29:00 AM (Eastern Standard Time, UTC-05:00)
 Friday, June 08, 2012

I published some of these links last month, but Microsoft has released links for even more free stuff. Check out the cool stuff by clicking the links below.

Friday, June 08, 2012 7:28:00 AM (Eastern Standard Time, UTC-05:00)
 Tuesday, June 05, 2012

The .NET Framework provides configuration files - app.config and web.config - to store application-wide configurable information.

But these are just text files, so they can be read by anyone with the proper permissions. What if I want to store sensitive information in this file, such as a password or a connection string?

Fortunately, the .NET Framework also provides a mechanism for encrypting parts of a config file. This functionality is available in the System.Configuration namespace in the System.Configuration assembly, so you will need to set a reference to this assembly (Project | Add Reference | .NET tab) and add the following line to the top of your class file
using System.Configuration;

The ConfigurationManager.OpenExeConfiguration static method accepts the name of an assembly and returns a Configuration object that can be used to manipulate the config file. It is important to remember that, when a project is built, the project's app.config file is renamed to {AssemblyName}.exe.config and copied to the bin\Debug or bin\Release folder (depending on the build configuration). It is the {AssemblyName}.exe that is passed into the OpenExeConfiguration method and it is the config file under the bin folder that will be affected by our code.

For example, the following code creates a Configuration object to read and manipulate the config file associated with the MyAwesomeApp.exe assembly

string appName = "MyAwesomeApp.exe";
Configuration config = ConfigurationManager.OpenExeConfiguration(appName);

We can call the Configuration object's GetSection method to get a reference to a particular section of the config file. For example, if we want to work with the connectionStrings section, we use the code

var section = (ConnectionStringsSection) config.GetSection("connectionStrings");

Now we can check to see if the section is already encrypted (IsProtected property), encrypt the section (ProtectSection method), or decrypt the section (UnprotectSection method). The following code encrypts the connectionStrings section

string appName = "MyAwesomeApp.exe";
Configuration config = ConfigurationManager.OpenExeConfiguration(appName);
ConnectionStringsSection section = config.GetSection("connectionStrings") as ConnectionStringsSection;
if (!section.SectionInformation.IsProtected)
{
    section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
}
config.Save();


The code below decrypts the connectionStrings section

string appName = "MyAwesomeApp.exe";
Configuration config = ConfigurationManager.OpenExeConfiguration(appName);
ConnectionStringsSection section = config.GetSection("connectionStrings") as ConnectionStringsSection;
if (section.SectionInformation.IsProtected)
{
    // The section is protected, so remove the encryption
    section.SectionInformation.UnprotectSection();
}
config.Save();

The final step is to write changes back to the file by calling the Configuration object's Save method.
config.Save();

Below is the unencrypted config file

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <connectionStrings>
    <add name="MyApp_Dev" connectionString="Data Source=Server01;Initial Catalog=AwesomeDB_Dev;Integrated Security=True"/>
    <add name="MyApp_QA" connectionString="Data Source=Server01;Initial Catalog=AwesomeDB_QA;Integrated Security=True"/>
    <add name="MyApp_Prod" connectionString="Data Source=Server01;Initial Catalog=AwesomeDB;Integrated Security=True"/>
  </connectionStrings>
  <appSettings>
    <add key="CompanyName" value="The Awesome Company"/>
    <add key="CompanyPhone" value="313-555-4321"/>
  </appSettings>
</configuration>

And here is the same config file with the connectionStrings section encrypted
<?xml version="1.0" encoding="utf-8" ?>
<configuration>

  <connectionStrings configProtectionProvider="DataProtectionConfigurationProvider">
    <EncryptedData>
      <CipherData>
        <CipherValue>AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAA/NM6tBoQfE2WAlH0NVuRWQQAAAACAAAAAAAQZgAAAAEAACAAAADdZOBg8ugEiLd8RlHk95C+fuXUTC706/hDeZT2XN8G0gAAAAAOgAAAAAIAACAAAABEW5bTk3uNJDoKMt26yvD+YY1v0fqe2et8KWeJOewx3UADAADGEgVw4K4nlwQjjKpu6BZYMYXeB1eovlbYrEbg/A+Kk6UhqBTdAqt6UmW/6B4M2pXWpP9VqDTDfr7GKEK06qDdXRnYfGYH1JAg2xPoI3aeA5DQP6HAIbSymXejw/B+s47L4rTT2R4PvfRfMYiMppxCrEh+eopKdvcg34JsD+o6Il+6a4TiTiYLzQ9BESoxepfY9pqaADFrLChPYzwjymAqfsAFE/n++APjb7aktViu5+AI+QM3RDEhDFzsP7Wy+UkGsPrIoyMUaLMFNibK7LZFiBL6+VHZEur+4xyI+Uu+UH194oBOX1g77nBzuTsivNgH7048JhbzeSk9mjOZrGACX433vC6neqGaJ42sC9sC16JX9CGEZrnipeoyEeR0RT1H3A38Xasn9lYkUyE3LBIJ1k0iZJoAAnCvXfmSzXoXsHjpvmmlst3SAo24TL7WsdaVliBR/6i5N6AswTOemjvFe0Rhb5zoeGX36aI4SD71DYaLxqss1MVm5gc6CEsringGeHbFRqu/kTylA6xgr8rfL/eTcjQs2zRIUTzc4/DUSsyNIUWQ0+z+BDoR/AGTSx7v2OKp2vpmHLf7kn3DxYITlpDV6voJrbmpMx9lN3Z7DYQD4vNLczGPHpKbBCEXDHNw1E7QDrQaz38Ka4pRKnCRa/GL7X8euA82bYaJmEmUEBqhcZg3mQHR31X5tUbZ3HkvMxEUcRmJITpj29V6RKEmugkWmxl3OCJzuZ3vcUSgKnQIAJkqr3YBuZR3YJjDCo7i5EElFAcpv89rM2RtSVNC7i6KLJLsISjOgzD5CktrDjgMZLtxN6BdyljvWNj0L29APvSxd5ovxgCOKXE2UYpH3l4HgX78P82wx1vBNhO5UmqP7xdz1jTi9cGP6HHNiv8qJZjlTOO2/afnDDYt/pKJqWWTRUhJiZZipC+dQ6bBYOhMZbclROzptu0fIYwqdeDxPcXalN3kjIXW4kwblSd4TxmCO01JD9eL+MFT/PSaipj7CqaAHTnpL0n40jV4WK4EaxUTuXi0/NXBUjMFw4ZTdlEnLv1jjCKZ+r2Q1YbrrI286PATrwLDbYsVVGEefp8vBsOD7xfHmIPeYolbEOq2QAAAAHoGwcm8HT1RS0pXvgOjRR37Gy1BLfG/5xsYdZqOB9nT6xthVULhqwPlAsFwDXLfNZQLnpZiocwHDf07DeQiNK8=</CipherValue>
      </CipherData>
    </EncryptedData>
  </connectionStrings>
  <appSettings>
    <add key="CompanyName" value="The Awesome Company"/>
    <add key="CompanyPhone" value="313-555-4321"/>
  </appSettings>
</configuration>

Here is a complete code snippet for getting a config and toggling the encryption of the connectionStrings section

string appName = "MyAwesomeApp.exe";
Configuration config = ConfigurationManager.OpenExeConfiguration(appName);
ConnectionStringsSection section = config.GetSection("connectionStrings") as ConnectionStringsSection;
if (section.SectionInformation.IsProtected)
{
    section.SectionInformation.UnprotectSection();
}
else
{
    section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
}
config.Save();

One of the nice things about using these libraries to encrypt config file sections is that we don’t need to change the code that reads values from the section. For example, the following line returns the connection string whether or not the section is encrypted

string connectionString = section.ConnectionStrings["MyApp_Dev"].ConnectionString;
.Net | C#
Tuesday, June 05, 2012 10:01:00 AM (Eastern Standard Time, UTC-05:00)
 Thursday, May 31, 2012

Here is Kathleen Dollard’s presentation on .NET Framework Core Features at the April 2012 Great Lakes Area .NET User Group (GANG).

Thursday, May 31, 2012 10:51:00 PM (Eastern Standard Time, UTC-05:00)
 Friday, May 25, 2012

An App.config or Web.config file is a great place to store configurable information – information that generally doesn’t change, but that we want to be able to change easily (i.e., without rebuilding and redeploying the application). Examples include connection strings (stored in the config file’s <connectionStrings> section) and application-wide name-value pairs (stored in the config file’s <appSettings> section).

We can add more flexibility by moving a section to an external file and linking to that file from the config file.

By splitting the file, we can manage and deploy only those settings separate from the rest of the configuration.

To do so, we create a new text file and copy that section into that file; then use the configSource attribute of the section tag in the original config file to point to the new file.

For example, the following app.config contains all the application’s connection strings and application settings

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <connectionStrings>
    <add name="MyApp_Dev" connectionString="Data Source=SQL071;Initial Catalog=Awesome_Dev;Integrated Security=True"/>
    <add name="MyApp_QA" connectionString="Data Source=SQL071;Initial Catalog=Awesome_Dev;Integrated Security=True"/>
    <add name="MyApp_Prod" connectionString="Data Source=SQL071;Initial Catalog=Awesome_Dev;Integrated Security=True"/>
  </connectionStrings>
  <appSettings>
    <add key="CompanyName" value="The Awesome Company"/>
    <add key="PhoneNumber" value="(513) 555-4444"/>
  </appSettings>
</configuration>

We can accomplish the same functionality as the above app.config by creating two files – connections.config and appSettings.config – and adding the following code to each file, respectively

connections.config:

<connectionStrings>
  <add name="MyApp_Dev" connectionString="Data Source=SQL071;Initial Catalog=Awesome_Dev;Integrated Security=True"/>
  <add name="MyApp_QA" connectionString="Data Source=SQL071;Initial Catalog=Awesome_Dev;Integrated Security=True"/>
  <add name="MyApp_Prod" connectionString="Data Source=SQL071;Initial Catalog=Awesome_Dev;Integrated Security=True"/>
</connectionStrings>

appSettings.config:

<appSettings>
  <add key="CompanyName" value="The Awesome Company"/>
  <add key="PhoneNumber" value="(513) 555-4444"/>
</appSettings>

Then, point to these files in the app.config, as shown below:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <connectionStrings configSource="connections.config" />
  <appSettings configSource="appSettings.config" />
</configuration>
One caveat to doing this: The configSource files (connections.config and appSettings.config in our example) must be in the same folder as the config file. We can accomplish this by selecting each configSource file in Solution Explorer and setting its Copy to Output directory property to either “Copy always” or “Copy if newer”.
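Note that the code reading these values does not change; ConfigurationManager resolves the configSource files transparently. A minimal sketch (requires a reference to the System.Configuration assembly):

using System.Configuration;

// These reads work identically whether the sections live in app.config
// or in the external connections.config and appSettings.config files
string connString = ConfigurationManager.ConnectionStrings["MyApp_Dev"].ConnectionString;
string companyName = ConfigurationManager.AppSettings["CompanyName"];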
Friday, May 25, 2012 9:19:00 AM (Eastern Standard Time, UTC-05:00)
 Thursday, May 17, 2012

This past October, the Great Lakes Area .NET User Group (GANG) celebrated 10 years with an all-day event. Here is Godfrey Nolan’s presentation on Executable Requirements or BDD in .NET.

.Net | Agile | ALM | Video
Thursday, May 17, 2012 10:54:00 AM (Eastern Standard Time, UTC-05:00)
 Saturday, May 12, 2012

Here is Bill Wagner's presentation at GANG10, the October 1 event celebrating 10 years of the Great Lakes Area .NET User Group. Bill talks about asynchronous programming, including the new features coming in C# 5.

.Net | C# | Video
Saturday, May 12, 2012 9:39:00 AM (Eastern Standard Time, UTC-05:00)
 Sunday, April 29, 2012

Here is a video of Seth Juarez's Machine Learning presentation at the Great Lakes Area .NET User Group in January 2012.

Sunday, April 29, 2012 12:58:00 PM (Eastern Standard Time, UTC-05:00)
 Thursday, April 12, 2012

SQL Injection is one of the most frequently-exploited vulnerabilities in the software world. It refers to user-entered data making its way into commands sent to back-end systems. It is common because so many developers are unaware of the risk and how to mitigate it.

Most of the applications I work with read from and write to a relational database, such as Microsoft SQL Server.  I frequently run across ADO.NET code like the following:

string lastName = "'Adams'";
string sql = "Select * from dbo.Customer where LastName = '" + lastName + "'";
string connString = ConfigurationManager.ConnectionStrings["LocalConn"].ConnectionString;
using (var conn = new SqlConnection(connString))
{
    conn.Open();
    var cmd = conn.CreateCommand();
    cmd.CommandText = sql;
    SqlDataReader reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        Console.WriteLine("Bad Name: {0} {1}", reader["FirstName"], reader["LastName"]);
    }
}

Similar string concatenation is often used to execute a stored procedure, like the following:

CREATE PROCEDURE [dbo].[GetCustomersByFirstName]
    @FirstName NVARCHAR(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    SELECT 
            Id, 
            FirstName, 
            LastName
        FROM dbo.Customer
        WHERE FirstName = @FirstName
        ORDER BY Id
END

GO

This approach has several disadvantages:

  1. The code is not optimal: SQL Server does not have a chance to reuse a cached query plan unless it happens to receive exactly the same query text again.
  2. The string concatenation opens the system to SQL Injection attacks.

A SQL Injection Attack is an attempt by an unscrupulous user to pass malicious commands to a database. In the above example, imagine that the variable lastName was provided by a user inputting text into a text box on a web page. An evil user might type something like

"Smith';DROP TABLE Customer;--"

If that code runs with sufficient permissions, it would wreak havoc on your database. The following query would be passed to SQL Server.
Select * from dbo.Customer where LastName = 'Smith';DROP Table Customer;--'

Clearly, dropping the customer table is not what your code is intended to do.

Many of you will read the above example and decide that you are safe because

  1. Your web code runs under a context with insufficient privileges to drop a table; and
  2. You are validating all user inputs to ensure a user cannot enter anything bad.

There are problems with this reasoning.

  1. A clever hacker can sometimes trick a user into running code under elevated privileges. Often there are multiple steps to an attack.
  2. Even if you have caught every possible injection possibility in your user interface, you cannot guarantee that every call to this API will be made only from your UI for all eternity. You may open up the API to the public or you may subcontract writing a mobile application that calls this API or you may hire a new programmer who doesn't know better.

The point is that you need to check security at every level of your application. And part of checking security is to not trust your inputs.

A far better approach than concatenating strings to form a SQL statement is to create parameter instances; set the value of each parameter; and add these parameters to a Parameters collection.

The code below shows how to do this.

string lastName = "Adams";
string sql = "Select * from dbo.Customer where LastName = @LastName";
string connString = ConfigurationManager.ConnectionStrings["LocalConn"].ConnectionString;
using (var conn = new SqlConnection(connString))
{
    conn.Open();
    var cmd = conn.CreateCommand();
    cmd.CommandText = sql;
    SqlParameter lnParam = cmd.CreateParameter();
    lnParam.ParameterName = "@LastName";
    lnParam.Value = lastName;
    cmd.Parameters.Add(lnParam);
    SqlDataReader reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        Console.WriteLine("Good Name: {0} {1}", reader["FirstName"], reader["LastName"]);
    }
    Console.WriteLine();

Pass malicious text in a parameter value and it will not be executed as part of the query, because SQL Server treats the parameter as data for a specific use rather than as executable SQL.

The same pattern works if I want to pass in a dynamic string of SQL. Passing Parameter instances is more secure than concatenating SQL and passing that string to SQL Server.

Below is a console application that uses the vulnerable string concatenation method to call SQL Server via ADO.NET

using System;
using System.Configuration;
using System.Data.SqlClient;

namespace PassingSql_WrongWay
{
    class Program
    {
        static void Main(string[] args)
        {
            CallSqlQuery();
            CallStoredProc();
            Console.ReadLine();
        }

        private static void CallSqlQuery()
        {
            string lastName = "'Adams'";
            //string lastName = "Adams';DROP TABLE dbo.ExtraTable;--";
            string sql = "Select * from dbo.Customer where LastName = '" + lastName + "'";
            string connString = ConfigurationManager.ConnectionStrings["LocalConn"].ConnectionString;
            using (var conn = new SqlConnection(connString))
            {
                conn.Open();
                var cmd = conn.CreateCommand();
                cmd.CommandText = sql;
                SqlDataReader reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    Console.WriteLine("Bad Name: {0} {1}", reader["FirstName"], reader["LastName"]);
                }
            }
            Console.WriteLine();
        }

        private static void CallStoredProc()
        {
            string firstName = "James";
            string sql = "EXEC GetCustomersByFirstName '" + firstName + "'";
            string connString = ConfigurationManager.ConnectionStrings["LocalConn"].ConnectionString;
            using (var conn = new SqlConnection(connString))
            {
                conn.Open();
                var cmd = conn.CreateCommand();
                cmd.CommandText = sql;
                SqlDataReader reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    Console.WriteLine("Bad Name: {0} {1}", reader["FirstName"], reader["LastName"]);
                }
                Console.WriteLine();
            }
        }
    }
}

Below is a similar console app, using the more secure parameters pattern

using System;
using System.Configuration;
using System.Data.SqlClient;

namespace PassingSql_RightWay
{
    class Program
    {
        static void Main(string[] args)
        {
            CallSqlQuery();
            CallStoredProc();
            Console.ReadLine();
        }

        private static void CallSqlQuery()
        {
            string lastName = "Adams";
            //string lastName = "Adams;DROP TABLE dbo.ExtraTable;--";
            string sql = "Select * from dbo.Customer where LastName = @LastName";
            string connString = ConfigurationManager.ConnectionStrings["LocalConn"].ConnectionString;
            using (var conn = new SqlConnection(connString))
            {
                conn.Open();
                var cmd = conn.CreateCommand();
                cmd.CommandText = sql;
                SqlParameter lnParam = cmd.CreateParameter();
                lnParam.ParameterName = "@LastName";
                lnParam.Value = lastName;
                cmd.Parameters.Add(lnParam);
                SqlDataReader reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    Console.WriteLine("Good Name: {0} {1}", reader["FirstName"], reader["LastName"]);
                }
                Console.WriteLine();
            }
        }

        private static void CallStoredProc()
        {
            string firstName = "James";
            string storedProcName = "GetCustomersByFirstName";
            string connString = ConfigurationManager.ConnectionStrings["LocalConn"].ConnectionString;
            using (var conn = new SqlConnection(connString))
            {
                conn.Open();
                var cmd = conn.CreateCommand();
                cmd.CommandText = storedProcName;
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                SqlParameter lnParam = cmd.CreateParameter();
                lnParam.ParameterName = "@FirstName";
                lnParam.Value = firstName;
                cmd.Parameters.Add(lnParam);
                SqlDataReader reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    Console.WriteLine("Good Name: {0} {1}", reader["FirstName"], reader["LastName"]);
                }
                Console.WriteLine();
            }
        }
    }
}

If you wish to use the above code, create a new database named TestData and run the following SQL DDL to create the database objects.

USE [TestData]
GO

/****** Object:  Table [dbo].[ExtraTable] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ExtraTable](
    [foo] [nchar](10) NULL,
    [bar] [nchar](10) NULL
) ON [PRIMARY]
GO

/****** Object:  Table [dbo].[Customer] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Customer](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [nvarchar](50) NULL,
    [LastName] [nvarchar](50) NOT NULL
) ON [PRIMARY]
GO

INSERT INTO dbo.Customer (FirstName, LastName) VALUES ('George', 'Washington') 
GO 
INSERT INTO dbo.Customer (FirstName, LastName) VALUES ('John', 'Adams') 
GO 
INSERT INTO dbo.Customer (FirstName, LastName) VALUES ('Thomas', 'Jefferson') 
GO 
INSERT INTO dbo.Customer (FirstName, LastName) VALUES ('James', 'Madison') 
GO 
INSERT INTO dbo.Customer (FirstName, LastName) VALUES ('James', 'Monroe') 
GO 
INSERT INTO dbo.Customer (FirstName, LastName) VALUES ('John Quincy', 'Adams') 
GO 

/****** Object:  StoredProcedure [dbo].[GetCustomersByFirstName] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[GetCustomersByFirstName]
    @FirstName NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT 
            Id, 
            FirstName, 
            LastName
        FROM dbo.Customer
        WHERE FirstName = @FirstName
        ORDER BY Id
END
GO

With a little bit of thought and a few lines of code, you can significantly reduce the risk of SQL injection in your ADO.NET code.

.Net | C# | SQL Server
Thursday, April 12, 2012 6:13:00 PM (Eastern Standard Time, UTC-05:00)
 Sunday, March 04, 2012

Sandboxes and Proxies

Microsoft SharePoint Server offers sandboxed solutions, which prevent users and developers from accessing resources outside the site collection to which the solution is deployed. This can be an advantage to SharePoint administrators because they can provide a development environment to users without having to monitor applications for potential security risks: risky activities are simply not allowed in a sandbox solution.

But sometimes a sandbox application has a legitimate need to access resources outside the site collection to which it is deployed. For example, an application may need to access a file on disk or call an external web service.

SharePoint does provide a way to accomplish this. To access resources outside a site collection from within a sandbox solution, you must implement and call a Full Trust Proxy.

A Full Trust Proxy is deployed to the web farm and has more rights than a sandbox solution. It also exposes a simple interface that can be called from sandbox code.

Creating the Full Trust Proxy

To create a Full Trust Proxy, create a new Empty SharePoint Project.

The SharePoint Customization Wizard launches. At the "Trust level" prompt, select "Deploy as a farm solution". Click [Finish] to create the project.

Add a public class to your project to act as a proxy. It doesn't matter what you name this class, but I like the name to end in "Operation", indicating that it is called to perform some secure operation.

At the top of the Operation class file add the following "using" statement:

 using Microsoft.SharePoint.UserCode;

Change the Operation code so that it inherits from SPProxyOperation; then, override the Execute method, as shown below.

public class DivisionOperation : SPProxyOperation
{
    public override object Execute(SPProxyOperationArgs args)
    {
                                ...
    }
}

This is all that is necessary to create a Full Trust Proxy class. Your sandbox client will call the Execute method, which (when the project is deployed to the SharePoint farm) will have rights to call outside the sandbox code.

Notice that the Execute method accepts a single argument of type SPProxyOperationArgs and returns an object. This interface was kept generic so that it could be used in almost any situation to pass in and return out almost any type of arguments. The trick is to define the argument types appropriate to your application and cast them when necessary. If we want to pass in multiple values or return multiple values, we should create classes to hold those values.

For the input arguments, create a new public class in your project that inherits from Microsoft.SharePoint.UserCode.SPProxyOperationArgs. Again, the name doesn't matter, but I like to use the same root name as the Operation class and end the name with "Args". Decorate this class with the [Serializable] attribute, so it can be passed across process boundaries without loss of data. Then add public properties that you want your client to pass into the Execute method of the Operation class. It isn't required, but I like to add a constructor that allows me to initialize all the properties when instantiating this object. Below is an example of such a class.

[Serializable]
public class DivisionArgs : SPProxyOperationArgs
{
    public DivisionArgs(int divisor, int dividend)
    {
        Divisor = divisor;
        Dividend = dividend;
    }
    public int Divisor { get; set; }
    public int Dividend { get; set; }
}

Next, you may wish to create a class to hold multiple return values. This class is not necessary if you only want to return a primitive type, such as a number or string. I like to keep the same naming convention, using the same base name as the other classes and a suffix of "ReturnValues". This class should also be marked Serializable. Below is an example of such a class.

[Serializable]
public class DivisionReturnValues
{
    public int Quotient { get; set; }
    public int Remainder { get; set; }
}


In the above example, the client can pass to the Execute method an object of type DivisionArgs and cast Execute's return value to a DivisionReturnValues object. This is possible because Execute accepts an SPProxyOperationArgs input and returns an object: DivisionArgs inherits from SPProxyOperationArgs, and DivisionReturnValues, like any class, can be returned as an object.

To utilize the values passed in, the first thing the Execute method should do is cast the input arguments to the well-known type we are using. In this case, that type is DivisionArgs. Then, the Execute will perform whatever work it needs to perform and return the well-known type that the client is expecting. Below is a sample proxy doing this.

public class DivisionOperation : SPProxyOperation
{
    public override object Execute(SPProxyOperationArgs args)
    {
        // Cast the input args to the specific type
        var divArgs = args as DivisionArgs;

        // Perform some action
        DivisionReturnValues returnValue = Divide(divArgs);

        // Return an object
        return returnValue;
    }

    /// <summary>
    /// Divide one number by another to determine the quotient and remainder
    /// </summary>
    /// <param name="divArgs">DivisionArgs containing Divisor and Dividend</param>
    /// <returns>DivisionReturnValues object containing quotient and remainder</returns>
    protected DivisionReturnValues Divide(DivisionArgs divArgs)
    {
        int divisor = divArgs.Divisor;
        int dividend = divArgs.Dividend;
        int remainder;
        int q = Math.DivRem(divisor, dividend, out remainder);
        var returnValue = new DivisionReturnValues() { Quotient = q, Remainder = remainder };
        return returnValue;
    }
}

To keep things simple, our example proxy is just doing some simple division. In the real world, this task could be performed just as easily within a sandbox solution. Just imagine that the Proxy is doing something that the Sandbox is incapable of, such as calling an external web service or accessing the file system or accessing some data outside the sandbox site.
Finally, you will need to add code to Activate and Deactivate a feature when the Full Trust Proxy is deployed.
In the Solution Explorer, right-click the Features folder of the Full Trust Proxy project and select Add Feature. Right-click the newly-created Feature node and select Add Event Receiver. Visual Studio creates a class that inherits from SPFeatureReceiver. Notice the commented-out event handler code in this class.
Add event handlers for FeatureActivated and FeatureDeactivating events, as in the example below

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    SPUserCodeService userCodeService = SPUserCodeService.Local;
    if (userCodeService != null)
    {
        // Define a variable to describe the proxy
        SPProxyOperationType proxyOperation
            = new SPProxyOperationType(
                    this.GetType().Assembly.FullName,
                    typeof(DivisionOperation).FullName);

        // Add the proxy to the UserCodeService
        userCodeService.ProxyOperationTypes.Add(proxyOperation);
        // Save changes
        userCodeService.Update();
    }
}

public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
    // Retrieve a reference to the UserCodeService
    SPUserCodeService userCodeService = SPUserCodeService.Local;
    if (userCodeService != null)
    {
        // Define a variable to describe the proxy
        SPProxyOperationType proxyOperation = new SPProxyOperationType(
            this.GetType().Assembly.FullName,
            typeof(DivisionOperation).FullName);
        // Remove the proxy from the UserCodeService
        userCodeService.ProxyOperationTypes.Remove(proxyOperation);
        // Save changes
        userCodeService.Update();
    }
}

The proxy must be deployed to the SharePoint server and registered in the Global Assembly Cache. The following PowerShell commands will accomplish this. Of course, you will need to replace the path with the path of the bin directory on your computer.

To deploy the solution for the first time:

Add-SPSolution -LiteralPath C:\Giard\MyFullTrustProxy\MyFullTrustProxy\bin\Debug\MyFullTrustProxy.wsp
Install-SPSolution -Identity MyFullTrustProxy.wsp -GACDeployment -Force

To update a previously-deployed solution:

Update-SPSolution -Identity MyFullTrustProxy.wsp -GACDeployment -Force -LiteralPath C:\Giard\MyFullTrustProxy\MyFullTrustProxy\bin\Debug\MyFullTrustProxy.wsp

You can see the solution in the Global Assembly Cache (GAC) by navigating to c:\windows\Assembly on the server in Windows Explorer. Make a note of the Public Key Token; you will need this value when you call the proxy.

The Client: A Sandbox Web Part

To create the client, create an Empty SharePoint project and add a Web Part. Do NOT select Visual Web Part, as Visual Web Parts are not supported within a Sandbox project.

Within the web part project, set a reference to the Full Trust Proxy class, so you can call the Proxy class's Execute method and access the 'Arguments' and 'ReturnValues' classes.
Add a "using" statement to the top of your web part to import the namespace of the classes in the full trust proxy.

Call the Proxy with code similar to the following:

object proxyResults =
    SPUtility.ExecuteRegisteredProxyOperation(
        "AssemblyName, Version=1.0.0.0, Culture=neutral, PublicKeyToken=tokenGUID",
        "OperationNamespaceAndClassName",
        ProxyArgumentsInstance);

You will need to replace the Namespace, the Operations Class Name, and the Arguments class with the names of the classes you created and the PublicKeyToken with the value you saw in the Global Assembly Cache. Below is an example:

var myProxy = new MyFullTrustProxy.DivisionArgs(dividend, divisor);
object proxyResults =
    SPUtility.ExecuteRegisteredProxyOperation(
        "MyFullTrustProxy, Version=1.0.0.0, Culture=neutral, PublicKeyToken=62f295c504bc90d9",
        "MyFullTrustProxy.DivisionOperation",
        myProxy);

Since the proxy Operation returns an object, you will want to cast it to the expected return type.
The code below gets two numbers from textboxes on a web part and passes them to a Full Trust Proxy operation that divides them and returns an object containing the quotient and remainder.

var results = proxyResults as DivisionReturnValues;

Now, you can get properties out of the return results.

int quotient = results.Quotient;
int remainder = results.Remainder;

Below is a full listing of the code in the sample application to call the proxy and retrieve the strongly-typed results

statusLabel.Text = "Calculating...";
try
{
    int divisor = Convert.ToInt32(divisorTextbox.Text);
    int dividend = Convert.ToInt32(dividendTextbox.Text);
    var myProxy = new MyFullTrustProxy.DivisionArgs(dividend, divisor);
    //quotientLabel.Text = (Convert.ToInt32 (divisorTextbox.Text) / Convert.ToInt32 (dividendTextbox.Text)).ToString();
    object proxyResults =
        SPUtility.ExecuteRegisteredProxyOperation(
            "MyFullTrustProxy, Version=1.0.0.0, Culture=neutral, PublicKeyToken=62f295c504bc90d9",
            "MyFullTrustProxy.DivisionOperation",
            myProxy);

    var results = proxyResults as DivisionReturnValues;
    int quotient = results.Quotient;
    int remainder = results.Remainder;
    quotientLabel.Text = quotient.ToString();
    remainderLabel.Text = remainder.ToString();
    statusLabel.Text = "Done";
}
catch (Exception ex)
{
    statusLabel.Text = "An error occurred: " + ex.Message;
}


Sample Application

The sample application contains a Full Trust Proxy project (MyFullTrustProxy) that should be deployed to your SharePoint farm and activated, and a Sandbox solution (MySandboxWebparts) containing a web part (MathWebPart.cs) that calls the Full Trust Proxy. You can download the project here.

Conclusion

In this article, we looked at how to create and deploy a Full Trust Proxy assembly and how to call that assembly from a SharePoint Sandbox solution.

 

Sunday, March 04, 2012 2:22:58 AM (Eastern Standard Time, UTC-05:00)
 Wednesday, May 12, 2010

Episode 87

In this interview, Day of .Net organizers John Hopkins and Jason Follas describe what went into planning this event and what the results were.

Tuesday, May 11, 2010 11:04:36 PM (Eastern Standard Time, UTC-05:00)
 Wednesday, February 24, 2010

Episode 74

Debbie Must describes the unique challenges of deploying her software and how she attacked these challenges.

Wednesday, February 24, 2010 11:51:27 AM (Eastern Standard Time, UTC-05:00)
 Tuesday, February 09, 2010

Unit testing the critical methods of your application is important.

Generally, I focus on testing public methods because this is the interface that others use to interact with my library. Testing only public methods also safeguards me from modifying unit tests every time I refactor or optimize the encapsulated code of my libraries.

Unfortunately, sometimes critical methods are marked Private. Because I always create my unit tests in a separate project, this presents a problem: private methods are only accessible to other methods in the same class; you cannot call a private method from an external assembly.

You have several options when testing Private methods from a separate project.

  • Change the method's accessor to Public
  • Create a public 'accessor' to the method
  • Use reflection to access the method
  • Test a public method that calls this method.
  • Change the method's accessor to Internal and make Internal methods visible to your test project.

Each of these approaches has its shortcomings.

Change the method's accessor to Public

This is probably too extreme as it breaks any abstraction you were trying to create. Too many public methods can clutter an API, making it overly complex.

Create a public 'accessor' to the method

This involves creating a public class and method decorated with the [Shadowing] attribute. It definitely adds a level of complexity to your class. When you ask MSTest to create a new Unit Test of a private method, you will be prompted to create an accessor.

Test a public method that calls this method

This is a popular choice. The idea is that public methods call private methods, so testing your public methods will also exercise your private methods. To get good code coverage, you will need to know which private methods are called (an approach known as "white box testing"). Some people don't like to call this a unit test because multiple methods are called.

Use reflection to access the method

This is the most complicated of the methods listed here; but if you don't have the source code and you feel you must test a private method, it is your only option.
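For illustration, here is a sketch of invoking a private method via reflection; the Order class and its private CalculateTotal method are hypothetical:

using System.Reflection;

var order = new Order();
// Locate the private instance method by name
MethodInfo method = typeof(Order).GetMethod(
    "CalculateTotal",
    BindingFlags.NonPublic | BindingFlags.Instance);
// Invoke it, passing any arguments as an object array
var total = (decimal)method.Invoke(order, new object[] { 3 });

Note that the method name is passed as a string, so the compiler cannot catch a rename; such a test fails only at run time.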

Change the method's accessor to Internal and make Internal methods visible to your test project.

This method is a good compromise. By default, Internal methods are available only to other methods in the same assembly. However, you can give an external project explicit permission to access Internal methods by adding the following line to the AssemblyInfo.cs file of the assembly containing the method you want to test.

[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("TestProject")]

where TestProject is the name of the project containing your unit tests.
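Here is a sketch of the approach, using a hypothetical internal PriceCalculator class in the library under test and an MSTest test in TestProject:

// In the library under test (which contains the InternalsVisibleTo attribute):
internal static class PriceCalculator
{
    internal static decimal ApplyDiscount(decimal price, decimal rate)
    {
        return price - (price * rate);
    }
}

// In TestProject, which can now see the library's Internal members:
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PriceCalculatorTests
{
    [TestMethod]
    public void ApplyDiscount_ReducesPrice()
    {
        Assert.AreEqual(90m, PriceCalculator.ApplyDiscount(100m, 0.1m));
    }
}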

When I have access to the source code, marking Internal methods visible is my preferred method of testing private methods. When I don't have access to source code, I tend only to test public methods.

Tuesday, February 09, 2010 10:54:29 AM (Eastern Standard Time, UTC-05:00)
 Thursday, December 10, 2009

When writing .Net code (or code in any language for that matter) that updates a database, you need to be cognizant of the fact that it takes a finite amount of time to connect to a database and process any commands sent to the database.

ADO.Net permits you to set a TimeOut value on a Connection object and on a Command object.

The Command TimeOut property allows you to configure how long a command waits for a query to complete execution before aborting. By default, a Command object will time out after 30 seconds.

It’s important to strike a good balance when setting timeout values.

Sometimes we expect a database action to take a long time and we want to give it time to complete before we pull the rug out, so to speak.

On the other hand, if a problem prevents a command from executing properly, it's useful to know this sooner so our application can handle it.

Changing a command timeout is simple. The Command object exposes a read/write CommandTimeout property. Set it to the number of seconds you wish the command to wait for execution to complete before aborting.

After the Command TimeOut period, if the command has not completed, an exception is thrown. However, the database server does not know this, so the command will continue to execute on the server - your application just won't know the results.

The Connection TimeOut is the amount of time the Connection will spend attempting to connect to a database before giving up and throwing an exception. The default Connection Timeout value is 15 seconds. On a slow network, it may take longer to connect, so you may wish to increase this value. However, if the application is unable to connect to the database - if the server is unavailable, for example - it's best to find this out sooner rather than later.

Changing the Connection Timeout is less obvious than changing the Command Timeout. The Connection class exposes a ConnectionTimeout property; but this property is read-only, so you cannot use it to change the timeout. To change the timeout, you must modify the connection string. Add or update the following in your connection string:
    Connection Timeout=XXX
where XXX is the number of seconds to wait while attempting to establish a connection before aborting the attempt and throwing an exception.
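Here is a sketch showing both settings together; the server, database, and table names are hypothetical:

using System.Data.SqlClient;

// The Connection Timeout is set in the connection string (45 seconds here)
string connString = "Data Source=Server01;Initial Catalog=MyDb;" +
    "Integrated Security=True;Connection Timeout=45";
using (var conn = new SqlConnection(connString))
{
    conn.Open();
    var cmd = conn.CreateCommand();
    // The Command Timeout is set on the Command object (60 seconds here)
    cmd.CommandTimeout = 60;
    cmd.CommandText = "SELECT COUNT(*) FROM dbo.BigTable";
    object rowCount = cmd.ExecuteScalar();
}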

In your applications, it is important to strike the right balance when setting timeout properties.

Thursday, December 10, 2009 10:25:03 AM (Eastern Standard Time, UTC-05:00)
 Monday, November 02, 2009

Episode 60

Stephen Toub, lead Program Manager on the Microsoft Parallel Computing Platform team, sat down with us to discuss the reasons why parallel computing is important, the challenges in writing code to take advantage of multiple cores, and what Microsoft is doing to make it easier for developers to write this code.


Links:

Parallel Computing Developer Center

Parallel Programming with .NET

Monday, November 02, 2009 6:52:30 AM (Eastern Standard Time, UTC-05:00)
 Saturday, October 31, 2009

Back To Basics

I remember how excited I was in the early days of .Net when I discovered how easy it was to write a Windows Service. I probably wrote a half dozen services that first year. But I hadn't written one in years and (oddly) hadn't even heard anyone talking about writing a Windows service in as long.

Perhaps the reason one hears so little about Windows Services is because the way we write them has changed so little since .Net 1.0.

A Windows Service is not dependent on a particular user being logged in. In fact, a Windows Service can run if no one is currently logged onto the machine on which it is running. This makes a Windows Service inherently more secure than a Windows Forms application or a Console application because you can run it on a server, set credentials and log out of the console.

To create a Windows Service, open Visual Studio and select File | New Project. In the Project Types tree of the New Project dialog, expand either Visual C# or Visual Basic node and select the Windows node. In the Templates area of the dialog, select Windows Service. Set the Name, Location and Solution as you would any other project and click OK to create the project.

By default, a Windows service contains one class named "Service1". The name of this class isn't really important because it isn't called externally, so I always leave this default name. If you double-click this class in the Solution Explorer, it opens in a Design view. Select View | Code to switch to a code view of the class. Notice that the class inherits from ServiceBase and that it contains overrides of two methods: OnStart and OnStop.

As you might guess, OnStart contains code that runs when a service starts and OnStop contains code that runs when a service stops. I put setup code into OnStart, such as initializing variables that my service will need. I generally put very little code in the OnStop method, but this is where cleanup code goes.

Services are designed for long-running processes and are meant to stay in memory for a long time – sometimes months or years. Most of the services I've written use a timer object to periodically wake up, perform some check, and respond if that check finds something. For example, you might check the contents of a folder every 10 minutes and, if an XML file is found in that folder, move it to a new location and parse it appropriately.

For example, I recently wanted a program that would check an error log every few minutes and automatically attempt to correct any errors found there. I had already written a class to read the error log, retry each error found, and remove from the log any errors that were retried successfully. So my service only needed to call this class every 5 minutes. I used a timer to do this. A partial code listing is shown below.

// The timer is declared as a field of the service class
// (System.Timers is imported at the top of the file)
private System.Timers.Timer timer1 = new System.Timers.Timer();

protected override void OnStart(string[] args)
{
    // Every 30 seconds, the timer will do some work
    timer1.Elapsed += new ElapsedEventHandler(timer1_Elapsed);
    timer1.Interval = 30000;
    timer1.Enabled = true;
    timer1.Start();
}

protected override void OnStop()
{
    timer1.Enabled = false;
}

private void timer1_Elapsed(object sender, ElapsedEventArgs e)
{
    // Wake up and perform some action.
    // [Code omitted]
}

I prefer to keep the code in the service to a minimum and abstract complex logic to a separate assembly. This makes testing and debugging easier. So the omitted code in the above example would call out to another assembly to do the hard work.

Installing a Service
In order to install a service, you will need to add an Installer class. Select Project | Add New Item; then select the Installer Class template. This class also opens in the designer by default. Select View | Code to see the code. The Installer class inherits from Installer, but you don't need to override any methods.

You can set some attributes of the service to make it easier to find. The Installer class's constructor is the place to do this. Instantiate a new ServiceInstaller and ServiceProcessInstaller, set properties of these objects, and add these objects to the Installer Class to affect the Windows Service when it is installed. Common properties that I like to set are

Class                    Property     Description
ServiceInstaller         ServiceName  The name of the service. This must match the ServiceName specified in the service class (Service1).
ServiceInstaller         DisplayName  A name displayed in the Services Manager applet.
ServiceInstaller         Description  A description displayed in the Services Manager applet.
ServiceProcessInstaller  Account      A built-in account under which the service will run.
ServiceProcessInstaller  Username     The name of the user account under which the service will run.
ServiceProcessInstaller  Password     The password of the user account under which the service will run.

For the ServiceProcessInstaller, you will set either the Account property or the UserName and Password properties. Typically, I set the Account property to System.ServiceProcess.ServiceAccount.LocalSystem, so that it can be installed. This account probably won't have sufficient privileges to accomplish what my code is trying to do, so someone will need to open the Services Manager and change this to a valid account. I could hard-code the Name and Password of an account that I know has sufficient privileges, but this ties my application too tightly to a single domain or server or organization. I would rather keep it flexible enough that it can run anywhere. And besides, the account under which a service runs is really a deployment issue, so others should be making these decisions and they should be forced to think about this at deployment time.

Below is sample code for the installer class

[RunInstaller(true)]
public partial class Installer1 : Installer
{
    public Installer1()
    {
        InitializeComponent();
        ServiceInstaller si = new ServiceInstaller();
        ServiceProcessInstaller spi = new ServiceProcessInstaller();

        si.ServiceName = "DGWinSvc"; // this must match the ServiceName specified in Service1.
        si.DisplayName = "DGWinSvc"; // this will be displayed in the Services Manager.
        si.Description = "A test service that takes some action every 30 seconds";
        this.Installers.Add(si);

        spi.Account = System.ServiceProcess.ServiceAccount.LocalSystem; // run under the system account.
        spi.Password = null;
        spi.Username = null;
        this.Installers.Add(spi);
    }
}

After your code is tested and compiled, you can deploy it to a server. Copy to a location on the server the service EXE and any associated DLLs, Config files or other objects required for it to run. The server must have the .Net framework installed. If it has the framework, it should have a program called InstallUtil.exe. You can find this program in the Windows folder under Microsoft.NET\Framework in the subfolder corresponding to the .Net CLR version under which you compiled the service. On my server, I found InstallUtil.exe in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727. Open a command prompt, change to the location of InstallUtil.exe and run the following command
INSTALLUTIL <full path of Windows Service Executable>

You can later uninstall the service with the following command

INSTALLUTIL /u <full path of Windows Service Executable>

Now open the Services Manager applet (under Windows Administrative Tools) and refresh the list of services. You should see your service listed with the DisplayName you assigned in the Installer class. Right-click this service and select Properties to display the Service Properties dialog. Click the "Log On" tab, select "A particular user" and enter the name and password of the user under which this service should run. You may need to create this user. It should have permission to access all resources that the service needs to access. Click the OK button to save these changes and close the dialog.

Of course, you can also add a setup project to your solution if you want to automate the deployment process, but this article does not cover that.

To start the service, right-click the service in the Services Manager and select Start.  You can use the same applet to stop the service. Right-click its name and select Stop. Alternatively, you can stop and start the service from the command line with the following commands

NET START <ServiceName>
NET STOP <ServiceName>

where <ServiceName> is the ServiceInstaller ServiceName property specified in our Installer class. For our example, we would type NET START DGWinSvc. This is useful if you want to script the starting and stopping of your service.
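If you prefer to control a service from code instead of the command line, the .Net framework provides the ServiceController class in System.ServiceProcess.dll. Below is a minimal sketch, assuming the service was installed with the ServiceName "DGWinSvc" as in our Installer class above:

using System;
using System.ServiceProcess;

class ServiceControlDemo
{
    static void Main()
    {
        // "DGWinSvc" matches the ServiceName assigned in the Installer class above.
        using (ServiceController sc = new ServiceController("DGWinSvc"))
        {
            if (sc.Status != ServiceControllerStatus.Running)
            {
                sc.Start();
                // Block until the service reports Running, or give up after 30 seconds.
                sc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
            }
            Console.WriteLine("Service status: {0}", sc.Status);
        }
    }
}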

Windows Services are something that .Net got right very early and hasn't needed to change. Creating useful services is easy with the .Net framework.

You can download a simple Windows service at DGWinSvc.zip (28.05 KB).

 

Saturday, October 31, 2009 11:40:13 PM (Eastern Standard Time, UTC-05:00)
 Wednesday, October 28, 2009

Many computers today ship with multiple processors and with multi-core processors.

In order for programs to take advantage of multiple cores, application developers need to write code that runs in parallel - that is, code that runs simultaneously on two or more cores. The upcoming .Net 4.0 provides tools to make it easier for developers to write such code.

The Parallel Extensions library eases the pain of building multi-threaded applications. Enhancements include a set of APIs to abstract away the complexity of parallel processing; a set of thread-safe collections appropriate for use with parallel processing; and enhancements to the System.Threading namespace.
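As a taste of the new APIs, here is a minimal sketch (based on the .Net 4.0 beta, so names may shift before release) that uses the Parallel class to spread the iterations of a loop across available cores:

using System;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        // Iterations may run concurrently on different cores; completion order is not guaranteed.
        Parallel.For(0, 10, i =>
        {
            Console.WriteLine("Processing item {0} on thread {1}",
                i, System.Threading.Thread.CurrentThread.ManagedThreadId);
        });
    }
}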

Stephen Toub of the Microsoft's Parallel Computing team is touring the Midwest this week speaking about this new technology. Friday October 30, Stephen will be at the Microsoft office in Southfield, MI at a special meeting of the Great Lakes Area .Net User Group.

You can get more information and register for this event at http://migang.org/NewsItem/09-10-16/special_user_group_meeting_oct_30_parallel_computing_with_stephen_toub.aspx

Wednesday, October 28, 2009 11:33:43 PM (Eastern Standard Time, UTC-05:00)
 Sunday, October 18, 2009

Episode 59

In this episode, Chris Woodruff discusses how to use RIA Services to separate concerns in a Silverlight application.

Sunday, October 18, 2009 9:15:45 PM (Eastern Standard Time, UTC-05:00)
 Thursday, October 08, 2009

HopeMongers is attempting to connect people together via charitable donations. The web site HopeMongers.org allows individual contributors to donate small amounts of money (they use the term "Microgiving" to describe this) to individual projects that help the poor of the world.

By doing so, they eliminate much of the bureaucracy and cost that burdens many other charitable institutions. The projects that HopeMongers supports tend to be small in size and focused on an individual community. Examples include digging a well to provide clean drinking water for a village in Haiti; construction of a building to house an orphanage in Uganda; and a computer education center in South Africa.

"All the money that's donated to HopeMongers goes to the project" said lead architect Phil Japikse.

On the web site, each project lists the amount needed to fully fund it and the amount raised so far.

Sam Henry of Microsoft is the driving force behind this site and he has traveled around the world seeking, vetting and overseeing projects to show on the site.

But many others are involved in the development of the web site.

DiscountAsp.net donated the web hosting; SAAS hosts TFS and the build servers for free; and most of the development time was donated by dozens of talented developers. Those who didn't volunteer worked on the project at a discounted bill rate.

The development team was spread across the US and worked part-time, which presented a number of challenges. For instance, most of the collaboration took place between 10PM and 1AM Eastern time, via LiveMeeting. For those interested in the technology, the site is built using ASP.Net web forms with jQuery, C#, and NHibernate.

The site is now live and accepting donations. Visit http://www.HopeMongers.org  to learn about the projects and to give a little. You can even donate to the HopeMongers project itself from the site.

I gave $10 to help provide clean drinking water to a village in Uganda and I feel better for having done so.

Thursday, October 08, 2009 6:17:30 AM (Eastern Standard Time, UTC-05:00)
 Monday, October 05, 2009

Episode 57

In this interview, Dr. David Truxall discusses the art of debugging and dives into WinDbg and other tools to debug production issues.

Monday, October 05, 2009 7:01:31 AM (Eastern Standard Time, UTC-05:00)
 Saturday, September 26, 2009

We have been hosting Grok Talks at Sogeti since my arrival. Recently, we decided to make them available via LiveMeeting and to record the presentations. Here is a Grok Talk from September 23, 2009.

In this presentation, Sogeti Principal Consultant Dr. David Truxall discusses the challenges of debugging and how to use WinDbg to debug production issues.

.Net | Grok Talk | Sogeti | Video
Saturday, September 26, 2009 10:09:26 AM (Eastern Standard Time, UTC-05:00)
 Monday, September 14, 2009

Episode 50

In this interview, Nathan Blevins describes how to program Microsoft Robotics Studio to control robots via programs written in Visual Studio.

Monday, September 14, 2009 7:29:21 AM (Eastern Standard Time, UTC-05:00)
 Tuesday, September 08, 2009

Episode 48

In this interview, Phil Japikse discusses his involvement with Hopemongers.org, a charity site focused on "micro-giving", allowing donors to give a small amount of money, directly to a charitable project.

Tuesday, September 08, 2009 12:11:52 AM (Eastern Standard Time, UTC-05:00)
 Friday, September 04, 2009

Back To Basics

Extension methods are a new feature of C# 3.0 and they are easier to use than they first appear.

An extension method is a method that is external to an existing class but appears as if it were a method on that class.

The rules for creating an extension method are simple.

  1. Create a static method inside a static class.
  2. Make the first parameter of the static method the type of the class you wish to extend.
  3. Precede the type of this first parameter with the "this" keyword.
  4. Call the method as if it were a method of the class, omitting the first parameter.

An example should clarify this. Assume we have a class Customer with properties FirstName and LastName as shown below

    public class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

We can create a new static class MyExtensions with a static method GetFullName that returns the formatted first and last name of the customer. We do so with the following code

    public static class MyExtensions
    {
        public static string GetFullName(this Customer cust)
        {
            string custName = cust.FirstName + " " + cust.LastName;
            return custName.Trim();
        }
    }

Notice the parameter with the "this" keyword. That parameter format tells the compiler that this is an extension method and that it should extend the Customer class. As long as MyExtensions is in the same namespace or in a namespace available to our code (via the "using" statement), we can call this new extension method with the following code

Customer cust 
    = new Customer 
         { FirstName = "David", 
           LastName = "Giard" 
         };
string fName = cust.GetFullName();
Console.WriteLine(fName);

The code above outputs:

   David Giard

As you can see in the above code, it looks as if the GetFullName method is part of the Customer class.

We can add parameters to our extension methods as we would to any other method. The first parameter (with the “this” keyword) is always used to specify the class we are extending. All other parameters act just like normal parameters. The following extension method accepts a parameter “salutation”.

public static string GetGreeting(this Customer cust, string salutation)
{
    string custName = cust.FirstName + " " + cust.LastName;
    custName = custName.Trim();
    return salutation + " " + custName + ":";
}

Although the extension method has two parameters, we only need to pass the second parameter when calling it, as shown

Customer cust = new Customer { FirstName = "David", LastName = "Giard" };
string greeting = cust.GetGreeting("Dear");
Console.WriteLine(greeting);

The code above outputs:

   Dear David Giard:

In our examples, we were adding extension methods to a class that we just created. Of course, in this case, it would have been simpler to just modify the original class.  But extension methods are more useful if you are working with someone else’s class and modifying the source code is not an option. Extension methods often offer a simpler solution than inheriting from an existing class.

The real power of extension methods comes from the fact that you can even add methods to sealed classes. It is difficult to add functionality to a sealed class because we cannot inherit from it. Change the Customer class to sealed and re-run the code to prove that it still works.

public sealed class Customer
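The same trick works on sealed classes in the .Net framework itself. As a quick illustration (a hypothetical helper, not part of the sample download), here is an extension method on the sealed System.String class:

public static class StringExtensions
{
    // Because of the "this" parameter, Shout appears as an instance method on every string.
    public static string Shout(this string s)
    {
        return s.ToUpper() + "!";
    }
}

// Usage:
// Console.WriteLine("hello".Shout());   // outputs: HELLO!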

Here is all the code in the above sample

using System;

namespace TestExtensionMethods
{
    class Program
    {
        static void Main(string[] args)
        {
            Customer cust = new Customer { FirstName = "David", LastName = "Giard" };

            string fn = cust.GetFullName();
            Console.WriteLine(fn);

            string greeting = cust.GetGreeting("Dear");
            Console.WriteLine(greeting);

            Console.ReadLine();

        }
    }


    public sealed class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }


    public static class MyExtensions
    {
        public static string GetFullName(this Customer cust)
        {
            string n = cust.FirstName + " " + cust.LastName;
            return n.Trim();
        }

        public static string GetGreeting(this Customer cust, string salutation)
        {
            string custName = cust.FirstName + " " + cust.LastName;
            custName = custName.Trim();
            return salutation + " " + custName + ":"; 
        }
    }

}

You can download the sample code at TestExtensionMethods.zip (24.26 KB)

 

Friday, September 04, 2009 8:52:43 PM (Eastern Standard Time, UTC-05:00)
 Thursday, September 03, 2009

Recently, I was asked to automate the process of checking a set of known URLs and determining if each URL corresponded to a “live” site. For our purposes, a site is live if I can PING it and get a reply back.

I can open a command prompt and use the PING command and read the response to determine if a site is live. A live site would return a series of messages starting with “Reply from”, while a non-existent site would report an error.

Unfortunately it is difficult to automate this task from the command prompt. Fortunately, the .Net framework provides the tools to allow me to ping a URL with just a few lines of code. The functionality I need is in the System.Net.NetworkInformation namespace.

I have created a public class PingUtils and added the statements

using System.Net.NetworkInformation;
using System.Text;

at the top of this class. (The second using statement is needed for the Encoding class used below.)

Next, I added the following method to attempt to ping a URL and return true, if successful.

public bool UrlIsLive(string url, int timeOut)
{
    bool pingSuccess = false;
    Ping ping = new Ping();
    string pingData = "TEST";
    byte[] pingDataBytes = Encoding.ASCII.GetBytes(pingData);
    try
    {
        PingReply reply = ping.Send(url, timeOut, pingDataBytes);
        if (reply.Status == IPStatus.Success)
        {
            pingSuccess = true;
        }
    }
    catch(PingException)
    {
        pingSuccess = false;    
    }
    return pingSuccess;
}

That’s it. If an error occurs when I try to ping, it is most likely a PingException, which is equivalent to the "Ping request could not find host" error reported at the command prompt.

This function returns true for a URL that exists and is live; and false for one that does not exist.

The following unit tests should demonstrate this

/// <summary>
///A positive test for IsLive
///</summary>
[TestMethod()]
public void IsLive_PingGoodUrl_ShouldReturnTrue()
{
    PingUtils pu = new PingUtils();
    string url = @"DavidGiard.com";
    int timeOut = 1000;
    bool siteIsLive = pu.UrlIsLive(url, timeOut);
    Assert.IsTrue(siteIsLive, "PingUtils.IsLive did not return true as expected");
}

/// <summary>
///A negative test for IsLive
///</summary>
[TestMethod()]
public void IsLive_PingBadUrl_ShouldReturnFalse()
{
    PingUtils pu = new PingUtils();
    string url = @"notDavidGiard.com";
    int timeOut = 1000;
    bool siteIsLive = pu.UrlIsLive(url, timeOut);
    Assert.IsFalse (siteIsLive, "PingUtils.IsLive did not return false as expected");
}

It’s worth pointing out a couple of limitations of this function.

  • Some sites reject all PING requests as a way to protect themselves against Denial of Service attacks. For example, if you PING Microsoft.com, it will not reply, even though the site does exist.
  • As with any program that uses networking, the firewall rules where the program runs may affect the success of the program.
  • The PING command succeeds for any valid URL, even if that URL returns an error page. So, foo.DavidGiard.com will reply to a PING request because my hosting provider redirects it to an error page.

Even given those limitations, this can be a very useful function for testing if all the Links stored in your database are still relevant.
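For example, a small console driver like the one below (hypothetical; in practice the URL list would come from your database of links) can sweep an entire list:

using System;
using System.Collections.Generic;

class LinkChecker
{
    static void Main()
    {
        // Hypothetical list; in practice, load these from your Links table.
        List<string> urls = new List<string> { "DavidGiard.com", "notDavidGiard.com" };
        PingUtils pu = new PingUtils();
        foreach (string url in urls)
        {
            bool isLive = pu.UrlIsLive(url, 1000);
            Console.WriteLine("{0}: {1}", url, isLive ? "live" : "no reply");
        }
    }
}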

You can download the code here.

Thursday, September 03, 2009 10:23:51 AM (Eastern Standard Time, UTC-05:00)
 Saturday, May 30, 2009

Microsoft Managed Extensibility Framework (MEF) is a framework for building extensible applications constructed of loosely-coupled, composable parts.  By constructing an application of parts, any part can be replaced at runtime, without recompiling or redeploying the entire application. 

One use would be to create an extensible application with a plug-in architecture, allowing users to extend it or to replace parts of it, without recompiling.  As such, you would not need to release the source code along with your application. 

Microsoft already has several technologies to accomplish similar things.  Visual Studio 2008 and Microsoft Office 2007 each has a plug-in framework that allows users to extend the application.  MEF promises a single extensibility framework that can be used across all Microsoft applications.  This frees developers from the need to learn a different framework to extend each application.  In fact, the editor in the upcoming Visual Studio 2010 (now in beta) is built on top of MEF, so that developers can use MEF to add plug-ins to the IDE.

Of course, there are simpler technologies built into the .Net framework that allow you to extend applications at runtime. 

In the current version of .Net, we can code to interfaces, instead of concrete classes.  Doing so gives us the ability to defer to runtime which class to instantiate.  Our code is flexible enough to accept any class, as long as that class implements the expected interface.  However, we must decide at compile time all possible classes that might be instantiated at runtime.  This is because, in most cases, we cannot instantiate a class without setting a reference to the assembly in which that class resides.  And setting references is something done prior to compiling.  Using MEF, we can instantiate classes even if there is no explicit reference set.  MEF takes care of that for us.
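To give a flavor of how MEF wires parts together, here is a rough sketch using MEF's attribute model.  The API names have changed between CTPs, so treat this as illustrative of the general shape rather than as the definitive syntax:

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IGreeter
{
    string Greet();
}

// The part: exported via an attribute and discovered at runtime,
// so the host needs no compile-time reference to this assembly.
[Export(typeof(IGreeter))]
public class FriendlyGreeter : IGreeter
{
    public string Greet() { return "Hello from a plug-in!"; }
}

public class Host
{
    // MEF fills this property with a matching export during composition.
    [Import]
    public IGreeter Greeter { get; set; }

    public void Compose()
    {
        // Scan an "Extensions" folder for assemblies containing exports.
        var catalog = new DirectoryCatalog("Extensions");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}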

Managed Extensibility Framework promises to solve the problem of building loosely-coupled, extensible applications without forcing developers to learn a new skill set for each application.  It does so without the disadvantages of forcing a recompile and loading classes into memory unnecessarily when an application is extended at runtime.

Note: As of this writing, MEF is in Community Technology Preview 5 and is planned to be released as part of .Net 4.0

.Net | MEF
Saturday, May 30, 2009 12:35:37 PM (Eastern Standard Time, UTC-05:00)
 Tuesday, May 05, 2009

Microsoft SkyDrive is an online file storage and sharing service.  You may store up to 25GB in your SkyDrive folders.  Using SkyDrive, you can copy files to a location in "the cloud" and share them with others.  (“The cloud” refers to some unknown yet accessible location not on your local computer.)  You can share each folder you create, assigning permissions to a single user, a group of users, or all users, allowing them to Read, Write, or Delete files in that folder.

In order to use SkyDrive, first sign up for the Live Mesh program.  You can do so at https://www.mesh.com/welcome/default.aspx.  To associate your Windows Live ID with the Mesh account, you will be required to enter your Windows Live e-mail and password.  If you don't currently have a Windows Live ID, there is a link on this page where you can create one.

Once signed up, you can access a SkyDrive account from several locations. 

Your SkyDrive page looks like the one below. 

By default, there are 2 folders: Documents and Public

Only you have access to the Documents folder, making it ideal for backing up files or making them available when you use a different computer.  Because a user must supply a username and password to view this folder, files stored here are protected from prying eyes.

Files in the Public folder can be viewed by anyone.  Copy files here that you want to share with the world.  Not only does this free you from the bother of e-mailing files to numerous recipients, it is a good way to get around the size limitations imposed by most e-mail systems.  Everyone is able to view (but not add to or update) all the files in this folder.

The permissions on the Public folder cannot be changed. 

If you require more granular sharing permissions, you should create a new folder.  To do so, click the ‘Create Folder’ link. 

On the Create Folder page, enter a name for the folder and select with whom you want to share the files in this folder.

The “Share with” dropdown allows you to specify users or groups of users with permission to view, delete or modify the files in this folder.  Only you can modify or delete the folder itself.  You cannot grant that permission to anyone else. The groups and users you specify must exist as contacts in your Windows Live account. Once you select users or groups with whom to share, you can specify one of the following two permission sets
• Can view files
• Can add, edit details, and delete files

After setting sharing permissions on a folder, you may go back later to alter those permissions.

Once the folder is created, you have the opportunity to add files to it by either dragging files from Windows Explorer or by clicking the Select files from your computer link.

To share the files in a folder, give others the files’ URLs.  If you want to embed a link to a file in a web page, the Embed option generates HTML to provide an icon, link, and description.  For example, the icon below is a link to PowerPoint slides covering SkyDrive and other Live services.

In this article, we showed how to share files using Microsoft SkyDrive.

 

Tuesday, May 05, 2009 10:43:44 PM (Eastern Standard Time, UTC-05:00)
 Tuesday, April 21, 2009

The 2009 Central Ohio Day of .Net is now history. 

Josh and Jennifer

I'm happy with the feedback I received on my Velocity talk.  The room was overflowing and several people approached me afterward to tell me they liked it.

By far, the best part of this conference was the opportunity to share ideas and interact one-on-one with bright people in the developer community.

One of the best jobs I ever had was working with the great people at GA Sullivan in Cincinnati.  That company no longer exists but many former employees were in Wilmington for this conference.  It was great catching up with these folks after all these years.

GA Sullivan alumni

The slides for my talk are below:


You can view photos of the event at
http://www.flickr.com/photos/29942169@N08/sets/72157617123586782/show/

Tuesday, April 21, 2009 6:46:29 AM (Eastern Standard Time, UTC-05:00)
 Friday, April 17, 2009

Tomorrow (Saturday April 18), I will be speaking at the Central Ohio Day of .Net in Wilmington, OH.

My topic is Using Microsoft Distributed Cache to speed your application.  This is similar to a talk I gave last summer at three user groups in Ohio and Michigan.  However, the topic is more relevant now as the release of Microsoft Velocity nears.  I have updated and expanded my presentation and written all new demos for this talk. 

A consistent caching strategy becomes critical as enterprise applications grow in size.  With Velocity, Microsoft finally has a product in the enterprise caching space.

You can get more information and register by clicking the badge below.  I hope to see you there. 

Friday, April 17, 2009 6:41:03 AM (Eastern Standard Time, UTC-05:00)
 Thursday, March 26, 2009

Episode 16

Microsoft Technology Specialist Randy Pagels describes the benefits of Microsoft Visual Studio Team System.  You can learn more about VSTS from Randy at http://www.teamsystemcafe.net/

4 mins, 18 secs

Thursday, March 26, 2009 5:32:10 AM (Eastern Standard Time, UTC-05:00)
 Monday, March 23, 2009

This screencast describes the basic concepts of caching and the upcoming Microsoft Distributed Cache, which is code named "Velocity"

Monday, March 23, 2009 6:13:55 AM (Eastern Standard Time, UTC-05:00)
 Tuesday, March 03, 2009

Episode 11

After writing a distributed application, software architect Phil Japikse needed a way to deploy updates to users across the state.  In this conversation, Phil describes the deployment strategy he implemented using tools provided by the .Net framework. 

Tuesday, March 03, 2009 6:55:01 AM (Eastern Standard Time, UTC-05:00)
 Wednesday, September 24, 2008

Saturday October 18 is the next Ann Arbor Day of .Net.

I'll be delivering a presentation on Microsoft Managed Extensibility Framework.  It should be quite different from the talk I gave last week on this subject because the API recently changed (which means I have some work ahead of me).

This makes the fifth Day of .Net I've attended and the second one at which I've presented.

The other speakers make up an impressive list so I'm excited to be part of this event. 

This event is free but typically fills up so you will need to register in advance if you plan to attend.

Click the image below to get more information and to register.

Day of .Net October 18, 2008 - Be there!

Wednesday, September 24, 2008 8:40:28 PM (Eastern Standard Time, UTC-05:00)
 Monday, September 22, 2008

As promised, here are the slides for the presentations I delivered last week in Toledo, Southfield and East Lansing

Microsoft Distributed Cache (aka "Velocity")

Microsoft Managed Extensibility Framework

 

Monday, September 22, 2008 8:26:18 AM (Eastern Standard Time, UTC-05:00)
 Tuesday, September 16, 2008

I will be speaking at three different user groups this week.  If you are in the area, please come out and listen and say 'Hi' afterward.

I will be delivering two presentations each night:

Extending your Application with the Managed Extensibility Framework

Microsoft Managed Extensibility (MEF) framework allows developers to add “hooks” into their application to make it extensible at runtime.  These hooks allow you or a third party to extend your application dynamically in the future.  In this session, we will review the MEF tool set and build an extensible application and then extend that application using MEF.

Using Microsoft Distributed Cache to speed your application

Retrieving data from a disc or a database can be a time-consuming operation.  Data that is accessed frequently can be stored in an in-memory cache, which can speed up its retrieval considerably.  Microsoft Distributed Cache (aka “Velocity”) provides a framework for storing and managing cached data.  In this session, we will discuss how to use this framework in your application and demonstrate some code that implements this framework.

Tuesday, September 16, 2008 11:39:13 AM (Eastern Standard Time, UTC-05:00)
 Wednesday, September 10, 2008

I will be delivering two presentations next Tuesday September 16 at the next Northwest Ohio .Net User Group beginning at 6PM.  The topics are:

Extending your Application with the Managed Extensibility Framework

Microsoft Managed Extensibility (MEF) framework allows developers to add “hooks” into their application to make it extensible at runtime.  These hooks allow you or a third party to extend your application dynamically in the future.  In this session, we will review the MEF tool set and build an extensible application and then extend that application using MEF.

Using Microsoft Distributed Cache to speed your application

Retrieving data from a disc or a database can be a time-consuming operation.  Data that is accessed frequently can be stored in an in-memory cache, which can speed up its retrieval considerably.  Microsoft Distributed Cache (aka “Velocity”) provides a framework for storing and managing cached data.  In this session, we will discuss how to use this framework in your application and demonstrate some code that implements this framework.

The meeting will be at the HCR Manorcare building at 333 North Summit St. in Toledo.  Click here to view a map.

I'm looking forward to my first visit to this user group in at least five years.

You can read more at http://www.nwnug.com/PermaLink,guid,1877615d-a53b-4b05-b6f6-5d650208af6f.aspx

Wednesday, September 10, 2008 9:47:20 AM (Eastern Standard Time, UTC-05:00)
 Wednesday, August 27, 2008

Microsoft recently released the ASP.Net Model View Controller framework (MVC).  It is currently available as Preview 3 and can be downloaded at http://www.microsoft.com/downloads/details.aspx?FamilyId=92F2A8F0-9243-4697-8F9A-FCF6BC9F66AB&displaylang=en

A new MVC project contains a couple sample views and controllers so you can get an idea of the proper syntax to use. 

This article builds on the application created in my last ASP.Net MVC tutorial.  If you have not already done so, please follow the brief steps in the previous MVC tutorial before beginning this one.

In the last tutorial, we added a model, view and controller to display a list of customers that one can navigate to using a URL formatted as controller/action.  In this article, we will add a new view and controller to an existing MVC project and display details of a single customer using a URL formatted as controller/action/id.  The id is passed automatically to the action method and allows us to filter to a single customer.

1.       Open Visual Studio 2008 and open the TestMVC solution created in MVC Tutorial 2.

2.       Open the Solution Explorer (View | Solution Explorer) and find Controllers\CustomerController.cs.  Double-click it to open it in the code editor.

a.       In the CustomerController class, we will create a new action to get the details of a single customer.   

    i.   Add the following private GetCustomer method to the CustomerController class.  In the last tutorial, we wrote methods to retrieve customer 1 and customer 2.  For simplicity, we will get Customer 1 if ID 1 is passed in to our method and Customer 2 if any other ID is passed.

        private Customer GetCustomer(int custID)
        {
            if (custID == 1)
            {
                return GetCustomer1();
            }
            else
            {
                return GetCustomer2();
            }
        }

    ii.   Add an Action method to the CustomerController class to get the details of a customer.  Paste the following code into CustomerController.cs.

        public ActionResult Details(int id)
        {
            Customer cust = GetCustomer(id);
            return View("Details", cust);
        }

    iii.   In the Details method, we get the details of a single customer and return a view.  Unlike the generated code, we explicitly specify which view to return (“Details”) and we pass in some extra data (cust) that the view will consume.

3.       Add a view to the project.

a.       In the Solution Explorer, right-click the Views\Customer folder and select Add | New Item.  The Add New Item dialog displays.

    Figure 1

    i.   Under Categories, select Visual C#\Web\MVC.

    ii.   Under Templates, select MVC View Content Page.

    iii.   In the Name textbox, enter “Details”.

    iv.   Click the Add button.  The Select a Master Page dialog displays.

   Figure 2

1.       Navigate to the Views\Shared folder and select Site.Master.

2.       Click the OK button to add this view content page to the project.

b.      Add visual elements to the View

    i.   If it is not already open, open the Details view by double-clicking Details.aspx in the Solution Explorer.  Click the Source tab at the bottom of the editor.

    ii.   Replace the code in Details.aspx with the following

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" AutoEventWireup="true" CodeBehind="Details.aspx.cs" Inherits="TestMVC.Views.Customers.Details" %>
<%@ Import Namespace="TestMVC.Views.Customers"%>
<%@ Import Namespace="TestMVC.Models" %>

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
    <div>
        CustID:
        <%=((Customer)ViewData.Model).CustID %><br />
        Name:
        <%=((Customer)ViewData.Model).FirstName %>
        <%=((Customer)ViewData.Model).LastName %>
        <br />
        Address:
        <%=((Customer)ViewData.Model).StreetAddress%><br />
        City:
        <%=((Customer)ViewData.Model).City %><br />
        State:
        <%=((Customer)ViewData.Model).State %><br />
        ZIP:
        <%=((Customer)ViewData.Model).PostalCode%><br />
    </div>
</asp:Content>


The above code displays details of a single Customer model object. 

4.       Modify the List view

a.       Open Views\Customers\List.aspx by double-clicking it in Solution Explorer.

b.      Find the <td>&nbsp;</td> cell tag following the cust.PostalCode cell.  Replace it with the following code, which generates a hyperlink displaying the text “Details”.  The URL of the hyperlink will contain the current controller (“Customer”), the “Details” action, and the ID of the current customer.

<td>
    <%=Html.ActionLink("Details", "Details", new { ID=cust.CustID})%>
</td>

5.       Test the application

a.       Save and compile the solution (Build | Build Solution).   Correct any errors you find.

b.  Run the solution (Debug | Start Debugging).  Depending on the port number used by Cassini, it should display in your browser with a URL such as
http://localhost:4152/Home

c.       Navigate to the List page by changing the URL to
http://localhost:4152/Customers/List
(Replace the port number if necessary)

d.      You should see a list of 2 customers in your browser.  Each customer should have a hyperlink labeled “Details”. 

    Figure 3

e.      Click the hyperlink next to customer 1.  The URL should change to
http://localhost:4152/Customers/Details/1
and the details of the first customer should display on screen.

    Figure 4

f.        Set a breakpoint in the Details method of CustomerController.cs and refresh the page to step through the code as it executes.

In this article, we added a view and controller to our application and used these to retrieve and display customer details by the Customer ID. 

.Net | ASP.NET | MVC
Wednesday, August 27, 2008 7:26:29 AM (Eastern Standard Time, UTC-05:00)
 Friday, August 22, 2008

Microsoft recently released the ASP.Net Model View Controller framework (MVC).  It is currently available as Preview 3 and can be downloaded at http://www.microsoft.com/downloads/details.aspx?FamilyId=92F2A8F0-9243-4697-8F9A-FCF6BC9F66AB&displaylang=en

A new MVC project contains a couple sample views and controllers so you can get an idea of the proper syntax to use. 

In this article, we will add a new model, view and controller to an existing MVC project. 

1.       Open Visual Studio 2008 and create a new MVC project.  For information on how to create a new MVC project, see http://www.davidgiard.com/2008/08/18/TheASPNetMVCSampleAppDemystified.aspx

2.       Open the Solution Explorer (View | Solution Explorer) and select the Models folder.

3.       Add a Model to the project

a.       Right-click the Models folder and select Add | Class.  The Add New Item dialog displays.
    Figure 1

    i.   At the Name textbox, enter “Customer”.

    ii.   Click the OK button to create a Customer class.

b.      The Customer class opens in the class editor.  This class will contain a few public properties that help describe a customer object.  Add the following code to the Customer class.

    public class Customer
    {
        public int CustID { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string StreetAddress { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string PostalCode { get; set; }
    }

4.       Add a Controller to the project

a.       In the Solution Explorer, right-click the Controllers folder and select Add | New Item.  The Add New Item dialog displays.
    Figure 2

    i.   Under Categories, select Visual C#\Web\MVC.

    ii.   Under Templates, select MVC Controller Class.

    iii.   In the Name textbox, enter “CustomerController”.

    iv.   Click the Add button to create the CustomerController class and open it in the class editor.

b.      In the CustomerController class, we will create some actions.  Each action will instantiate one or more Model objects and display them in a view object. 

    i.   Add the following statement at the top of the CustomerController class. 

using TestMVC.Models;

    ii.   Add the following private methods to the CustomerController class.  For now, we will create customers out of thin air (as if it were that easy).  In a real application, we would probably call a web service or query a database to get customers.

        #region private methods

        private Customer GetCustomer1()
        {
            var cust1 = new Customer
            {
                CustID = 1,
                FirstName = "David",
                LastName = "Giard",
                StreetAddress = "123 Main",
                City = "Boringville",
                State = "MI",
                PostalCode = "48108"
            };
            return cust1;
        }

        private Customer GetCustomer2()
        {
            var cust2 = new Customer
            {
                CustID = 2,
                FirstName = "John",
                LastName = "Smith",
                StreetAddress = "321 Elm",
                City = "Nowhere",
                State = "OH",
                PostalCode = "41001"
            };
            return cust2;
        }

        private List<Customer> GetAllCustomers()
        {
            Customer cust1 = GetCustomer1();
            Customer cust2 = GetCustomer2();
            List<Customer> allCusts = new List<Customer> {cust1, cust2};
            return allCusts;
        }

        #endregion

    iii.   Add an Action method to the CustomerController class.  We’ll start with a List action.  Paste the following code into CustomerController.cs.

        public ActionResult List()
        {
            var allCustomers = GetAllCustomers();
            return View("List", allCustomers);
        }

    iv.   In the List method, we get a list of customers (all 2 of them) and return a view.  Unlike the generated code, we explicitly specify which view to return (“List”) and we pass in some extra data (allCustomers) that the view will consume.

5.       Add a view to the project.

a.       In the Solution Explorer, right-click the Views folder and select Add | New Folder.  A new folder appears in the Solution Explorer.  Rename this folder to “Customer”.

b.      In the Solution Explorer, right-click the Customer folder and select Add | New Item.  The Add New Item dialog displays.

    i.   Under Categories, select Visual C#\Web\MVC.

    ii.   Under Templates, select MVC View Content Page.

    iii.   In the Name textbox, enter “List”.

    iv.   Click the Add button.  The Select a Master Page dialog displays. 
Figure 3
   Figure 4

1.       Navigate to the Views\Shared folder and select Site.Master.

2.       Click the OK button to add this view content page to the project.

c.       Add visual elements to the View

    i.   If it is not already open, open the List view by double-clicking List.aspx in the Solution Explorer.  Click the Source tab at the bottom of the editor.

    ii.   Replace the code in List.aspx with the following

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" AutoEventWireup="true" CodeBehind="List.aspx.cs" Inherits="TestMVC.Views.Customers.List" %>
<%@ Import Namespace="TestMVC.Views.Customers"%>
<%@ Import Namespace="TestMVC.Models"%>

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
    <h2><%= Html.Encode(ViewData["Message"]) %></h2>
    <table>
        <tr>
            <td>ID</td>
            <td>First</td>
            <td>Last</td>
            <td>Addr</td>
            <td>City</td>
            <td>State</td>
            <td>ZIP</td>
            <td>&nbsp;</td>
        </tr>
        <% foreach (Customer cust in (List<Customer>)ViewData.Model)
           { %>
            <tr>
                <td><%=cust.CustID %></td>
                <td><%=cust.FirstName%></td>
                <td><%=cust.LastName%></td>
                <td><%=cust.StreetAddress%></td>
                <td><%=cust.City%></td>
                <td><%=cust.State%></td>
                <td><%=cust.PostalCode%></td>
                <td>&nbsp;</td>
            </tr>
        <% } %>
    </table>
</asp:Content>


The above code displays a list of Customer model objects.  In a real-world example, we may choose to have the Customer model derive from a base class and only refer to the base class in the view.  This would increase the separation between our view and our model.
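For instance (a hypothetical refactoring, not included in the download), the Customer model could derive from a base type and the view could bind only to that base type:

    // A base type the view could bind to, keeping the view ignorant of the concrete model.
    public abstract class Contact
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    public class Customer : Contact
    {
        public int CustID { get; set; }
        public string StreetAddress { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string PostalCode { get; set; }
    }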

6.       Test the application

a.       Save and compile the solution (Build | Build Solution).   Correct any errors you find.

b.  Run the solution (Debug | Start Debugging).  Depending on the port number used by Cassini, it should display in your browser with a URL such as
http://localhost:4152/Home

c.       Navigate to the List page by changing the URL to
http://localhost:4152/Customers/List
(Replace the port number if necessary)

d.      You should see a list of 2 customers in your browser
    Figure 4

e.      Set a breakpoint in the List method of CustomerController.cs and refresh the page to step through the code as it executes.

In this article, we created a model, view and controller from scratch and displayed them.  You can download the code for this project here.

In the next article, we will use the ID of the URL to specify a single customer.

.Net | ASP.NET | MVC
Friday, August 22, 2008 8:02:34 AM (Eastern Standard Time, UTC-05:00)
 Wednesday, August 20, 2008

I have been plagued recently by a recurring problem with Visual Studio 2005.

When I attempt to exit Visual Studio, I receive the error "Visual Studio cannot shut down because a modal dialog is open" and I am not able to exit.  The rub is that there is no modal dialog: I can close every visible window in VS, close all other apps, and resize, move, and minimize VS, and still I find no dialogs, modal or otherwise.  Some bit inside VS is set incorrectly, convincing it that a dialog has not yet been closed.

The only solution was to open Task Manager and kill Visual Studio.

I have automatic updates turned on so, if this is a bug, I expected it would have been fixed by now. 

I found a number of posts and threads about this issue, so it is not uncommon, but nearly none of them listed a solution.

After some digging, I discovered a hotfix for this problem that is not included in the normal Windows updates. 

You can read the details of the problem from Microsoft here http://support.microsoft.com/kb/936971 and you can download the hotfix here
https://connect.microsoft.com/VisualStudio/Downloads/DownloadDetails.aspx?DownloadID=7259

One important point: Visual Studio must be closed before installing the hotfix, so you may end up killing the process via Task Manager one last time.

Wednesday, August 20, 2008 6:51:23 PM (Eastern Standard Time, UTC-05:00)
 Monday, August 18, 2008

Microsoft recently released the ASP.Net Model View Controller framework (ASP.Net MVC).  It is currently available as Community Technology Preview 3 and can be downloaded at http://www.microsoft.com/downloads/details.aspx?FamilyId=92F2A8F0-9243-4697-8F9A-FCF6BC9F66AB&displaylang=en

This article describes how to create an ASP.Net MVC application and the code that is auto-generated for you.

Creating a new ASP.Net MVC project

1.       Open Visual Studio 2008.  Create a new project: Select File | New Project.  The New Project dialog displays.

    Figure 1

a.       Under Project Type, select Visual Basic\Web or Visual C#\Web, depending on your language preference.

b.      Under Templates, select ASP.Net MVC Web Application.  This application was added when you installed the ASP.Net MVC preview.

c.       Provide an appropriate location and name for the project and solution.

d.      Click the OK button to create the project.

2.       One of the advantages of an ASP.Net MVC project is that the separation of most of the code from the user interface makes it easier to write unit tests.  Visual Studio encourages you to create unit tests for your new project by prompting you with the Create Unit Test Project dialog every time you create an MVC project.

    Figure 2

a.       If you wish, you can decline to create a Unit Test project or you can change the default project name.  Typically I do not change any defaults on this dialog.

b.      Click the OK button to create the Unit Test project.

The Folder structure of an MVC project

When you create a new MVC project, Visual Studio generates a couple of views and controllers.  If you understand how these work, you can use them to guide how you will create more views and controllers.

The solution contains two projects: an MVC project and a unit test project.

View the projects in Solution Explorer.  Select View | Solution Explorer.  The MVC project contains several folders.


    Figure 3

1.       The Content folder contains a stylesheet Site.css for this site.

2.       The Controllers folder is where you will store all your controller classes.  By default, this folder contains a single controller class - HomeController.cs.

3.       The Models folder is where you will store any model classes for your application.

4.       The Views folder contains a subfolder for each view in your application.  By default, there are two subfolders: Home and Shared. 

a.       The Shared subfolder contains a master page for the site because it is shared by multiple web pages.  Any other UI elements shared by the site belong in this folder.

b.      The Home folder contains two pages: About.aspx and Index.aspx. 

5.       As with most web applications, the root folder of this project contains a Global.asax file and a Web.Config file, which contain setup and configuration information for the application as a whole.

The Files and Folders of an MVC project

Open Global.asax and view the code.  Notice that the Application_Start method (which fires once, at the startup of the web application) contains a call to the RegisterRoutes method. The RegisterRoutes method tells the MVC framework how to interpret a URL. 

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default",                                              // Route name
        "{controller}/{action}/{id}",                           // URL with parameters
        new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
    );
}

The routes.MapRoute method accomplishes this.  In this case, a “Default” route collection is created that interprets a URL with syntax like “{controller}/{action}/{id}”. 

  • The first part of the URL specifies the controller to use.  MVC looks in the Controllers folder for a class that inherits from System.Web.Mvc.Controller with a name that matches the controller specified in the URL.

  • The second part of the URL specifies the action to take.  The action is the public method within this controller that will be called. 

  • The third part of the URL specifies an id to pass to the action method.  This can be used to further customize the action.  For example, we could use the id as a filter to dynamically look up a single row in a database.

The routes.MapRoute method also allows us to specify defaults if no controller, action, or id is specified in the URL.  If any of these are omitted from the URL, MVC will use the defaults specified in the third parameter of routes.MapRoute.  In this case the object new { controller = "Home", action = "Index", id = "" } tells MVC the following:

  • If no Controller is specified in the URL, assume the Home controller (i.e., look for a class named “HomeController” in the Controllers folder).

  • If no Action is specified in the URL, assume the Index action (i.e., look for a public method “Index” in the HomeController class).

  • If no ID is specified in the URL, assume a blank ID (i.e., any code looking for an ID will retrieve an empty string).
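To make these defaults concrete, here is how a few URLs resolve under this route, using the controller and actions generated in this project:

// URL            -> Controller class   Action method   id
// /              -> HomeController     Index           ""
// /Home          -> HomeController     Index           ""
// /Home/About    -> HomeController     About           ""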

Open the view files, Index.aspx and About.aspx, and notice that there is no code-behind logic in either.  This is because ASP.Net MVC applications do not execute the page life cycle.  All the code for this application is in the controllers.  These view pages contain only visual elements.

Open the controller class: HomeController.cs.  As we mentioned before, this class derives from the System.Web.Mvc.Controller class and it contains two methods: Index and About.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace TestMVC.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            ViewData["Title"] = "Home Page";
            ViewData["Message"] = "Welcome to ASP.NET MVC!";

            return View();
        }

        public ActionResult About()
        {
            ViewData["Title"] = "About Page";

            return View();
        }
    }
}

“HomeController” is the name of the class to implement the Home controller.  This is typical of how MVC works – developers follow naming conventions in order to tell the framework where to find the code to run.  In the case of controllers, we implement a controller by sub-classing the System.Web.Mvc.Controller class, naming this subclass “xxxController” (where xxx is the Controller name) and placing that subclass in the Controllers folder of our MVC project.  If we wanted to call a controller named “David”, we would create a System.Web.Mvc.Controller subclass named “DavidController” and place it in the Controllers folder.  This process is known as “convention over configuration”, meaning that the framework knows where to find code based on the names we use.
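Following that convention, the hypothetical “David” controller would look something like this, saved in the Controllers folder:

using System.Web.Mvc;

namespace TestMVC.Controllers
{
    // Handles URLs beginning with /David; the "Controller" suffix is required by convention.
    public class DavidController : Controller
    {
        public ActionResult Index()
        {
            return View();   // renders the default view for this action
        }
    }
}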

Let’s look closely at the Index method.  Recall that the method in the controller is the Action that is specified in the URL.  So the Index method will be called if the Index action is specified.

ViewData is a dictionary collection that is a property of every Controller object.  We can add or update items in this collection by syntax such as
ViewData["Title"] = "Home Page";

By placing items in this collection, we make them available to the view when it is called.

The view (remember this is the UI that the user will see) is returned from this method.  The following line returns the default view.
return View();

We know it is the default view because the statement did not specify the name of the view.  The default view has the same name as the Action called.  In this case, we are returning the Index view.  Once again, MVC uses conventions to determine where to find the view.  All views associated with a given controller are stored in a subfolder named for that controller beneath the Views folder.  In this case, we are using the Home controller, so we look for views in the Views\Home folder of the project.  The view itself is a file with the same name as the view and with an extension of “.aspx” or “.ascx”.  In this case, we are looking for the default view (Index) of the Home controller.  MVC renders the page Views\Home\Index.aspx for this view.  Again, the developer uses naming conventions to tell the framework where to find items.

Open Index.aspx.  Notice it displays the message stored in the ViewData dictionary by the controller.
<%= Html.Encode(ViewData["Message"]) %>

However, it contains no other code, because all logic is handled by the controller.

Conclusion

Creating a new ASP.Net MVC project is as easy as creating any other Visual Studio project.  Learning the paradigm that the MVC framework uses can be a challenge; but the samples created automatically with a new project can ease that learning curve.

In the next article, we will add a new controller and view to a project.

.Net | ASP.NET | MVC
Monday, August 18, 2008 10:37:58 AM (Eastern Standard Time, UTC-05:00)
 Sunday, August 17, 2008

The Model-View-Controller (MVC) design pattern has existed for years.  (http://heim.ifi.uio.no/~trygver/themes/mvc/mvc-index.html)  ASP.Net developers have been implementing it for years either through their own custom code or via third-party frameworks such as Monorail (http://www.castleproject.org/monorail/index.html).

Recently, Microsoft released the ASP.Net MVC framework to give web developers the option of using this design pattern without the need for a lot of “plumbing” code or the use of a third-party framework.

The Model-View-Controller design pattern splits an application into three distinct parts called (you guessed it) “Models”, “Views” and “Controllers”. 

A Model represents the stateful data in an application.  These are often objects, such as “Customer” and “Employee”, that represent abstract business entities.  For persistent data, the Model may save and retrieve data to and from a database.  Public properties of these objects (for example, “LastName” or “HireDate”) represent their state at any given time.  The model objects have no visual representation and no knowledge of how they will be displayed on screen. 

A View is the application’s user interface (UI).  In a web application, this is the web page that the user sees and clicks and interacts with directly.  The View can display data but it has no knowledge of where the data it displays comes from.

A Controller is the brains of your application.  It links the Model to the View.  It handles communication between the other two parts of the application.  It is smart enough to detect when data needs to be retrieved (from the model) and refreshed (in the view).  It sends updated data from the view back to the model so that the model can persist it. 

Below is Trygve M. H. Reenskaug’s diagram of the relationship between these three parts.

    Figure 1

This separation of the various concerns of the application encourages developers to create loosely-coupled components.

Much of the communication to the controller occurs by raising events in the view, which keeps the controller loosely coupled from the other parts.  However, the real advantage of the MVC pattern is that, because they only communicate through the controller, the model and view are very loosely coupled.  This provides the following advantages to an MVC application.

  • Loosely-coupled applications have fewer dependencies, making it easier to switch the user interface or backend at a later time.
  • Applications can be tested more easily, because so little code is in the view.  We can test our code by writing unit tests against the controller, as sketched below.
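For example, a unit test against a controller might look something like this.  The sketch is hypothetical (it assumes a HomeController whose Index action places a welcome message into ViewData, and the preview API may differ), but it shows the idea: the controller is just a class we can instantiate and call, with no web server required.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using TestMVC.Controllers;   // hypothetical project namespace

[TestClass]
public class HomeControllerTests
{
    [TestMethod]
    public void Index_ShouldSetWelcomeMessage()
    {
        // No web server or page life cycle needed.
        HomeController controller = new HomeController();
        controller.Index();
        Assert.AreEqual("Welcome to ASP.NET MVC!", controller.ViewData["Message"]);
    }
}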

In the next article, we will look at how to create a Microsoft ASP.Net MVC application.

.Net | ASP.NET | MVC
Sunday, August 17, 2008 8:03:21 AM (Eastern Standard Time, UTC-05:00)
 Saturday, August 16, 2008
 Friday, August 15, 2008

Microsoft Visual Studio Team System 2008 Database Edition (aka “Data Dude”) provides tools for managing and deploying SQL Server databases. 

In this article, we will discuss how to migrate data from one database to another.   Data Dude provides the Data Compare tool for this purpose.

In order to use the Data Compare tool, the following conditions must be true

1.       Data exists in a source table.  You want to migrate that data to a table of the same name in a different database.

2.       Both tables must have the same structure.

3.       Both tables must have a primary key to uniquely identify rows.

Follow the steps below to migrate data with the Data Compare tool.

1.       Launch Visual Studio 2008

2.       Select Data | Data Compare | New Data Compare.  The New Data Comparison dialog displays

  Figure 1

a.       When migrating data from a table in one database to another, the database you intend to update is known as the “Target Database”.  The other database is known as the “Source Database”.  In the New Data Comparison dialog, select the Source Database and Target Database connections.  If you have not yet created connections to these databases in Visual Studio, you can click the New Connection button to create them.

b.      The New Data Comparison dialog contains checkboxes that allow you to specify which rows you want to see and compare.  I don’t usually change these (they are all checked by default) but it may speed up the process to clear the Identical Records checkbox.

c.       Click the Next button to advance to the second screen of the wizard.

    Figure 2
On this screen, you can choose which tables to compare.  Usually I am only interested in one or two tables, so I clear the rest of the checkboxes.  If I have a million rows in my customer table and I’m not interested in migrating any of those rows, I can save a lot of processing time by un-checking the customer table.

d.      Click the Finish button to display the Data Compare window.

3.       The Data Compare window consists of two panes: the Object List on top and the Record Details at the bottom.

    Figure 3

a.       The object list displays each table or view as a single row with columns summarizing the number of new, removed, changed and identical rows.  Rows are matched on their primary key.

b.      Click a row in the object list to display details in the record details pane.  Here you can click a tab to view the rows that are new, missing, or changed.  Checking the checkbox next to a record flags it for the Data Compare tool, indicating that you want to update the target database to match the same row in the source database.  This may result in an INSERT, UPDATE, or DELETE statement, depending on the tab on which the record is listed. 

c.       For a record to be flagged for update, both the table and the record must be checked.

4.       After checking all the rows you wish to update, click the Write Updates toolbar button to commit your changes to the target database. 

5.       Alternatively, you can click the Export To Editor toolbar button to generate a SQL script that you can run in the SQL Server query editor.  This method requires an extra step but has the following advantages:

a.       You can modify the script before running it.

b.      You can send the script to someone else to run.

c.       You can view the script to learn what Data Dude is doing.  It’s interesting to note that constraints on each table are dropped before copying data, then created after the data is copied.  This speeds up the process.  Also, note the use of transactions to prevent incomplete data copies.  Below is a sample script updating data in one table.
/*
This script was created by Visual Studio on 8/15/2008 at 8:57 AM.
Run this script on dgiard.Test_QA.dbo to make it the same as dgiard.Test_Dev.dbo.
This script performs its actions in the following order:
1. Disable foreign-key constraints.
2. Perform DELETE commands.
3. Perform UPDATE commands.
4. Perform INSERT commands.
5. Re-enable foreign-key constraints.
Please back up your target database before running this script.
*/

SET NUMERIC_ROUNDABORT OFF
GO
SET XACT_ABORT, ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT, QUOTED_IDENTIFIER, ANSI_NULLS ON
GO
/*Pointer used for text / image updates. This might not be needed, but is declared here just in case*/
DECLARE @pv binary(16)
BEGIN TRANSACTION
ALTER TABLE [dbo].[OrderDetails] DROP CONSTRAINT [FK_OrderDetails_Orders]
ALTER TABLE [dbo].[OrderDetails] DROP CONSTRAINT [FK_OrderDetails_Products]
ALTER TABLE [dbo].[Orders] DROP CONSTRAINT [FK_Orders_Customers]
DELETE FROM [dbo].[ProductTypes] WHERE [ProductTypeID]=N'5646953f-7b89-4862-bcf3-bf53450d28bb'
INSERT INTO [dbo].[ProductTypes] ([ProductTypeID], [ProductTypeName]) VALUES (N'7beb0d99-d034-41b9-bbf7-f9cdcdbedc30', N'Furniture')
INSERT INTO [dbo].[ProductTypes] ([ProductTypeID], [ProductTypeName]) VALUES (N'abc19a14-5968-4c5f-9f0f-4debc034cb90', N'Hardware')
INSERT INTO [dbo].[ProductTypes] ([ProductTypeID], [ProductTypeName]) VALUES (N'b9e446ed-eeb1-4334-b191-c70a55ef1a05', N'Books')
ALTER TABLE [dbo].[OrderDetails] ADD CONSTRAINT [FK_OrderDetails_Orders] FOREIGN KEY ([OrderID]) REFERENCES [dbo].[Orders] ([OrderID])
ALTER TABLE [dbo].[OrderDetails] ADD CONSTRAINT [FK_OrderDetails_Products] FOREIGN KEY ([ProductID]) REFERENCES [dbo].[Products] ([ProductID])
ALTER TABLE [dbo].[Orders] ADD CONSTRAINT [FK_Orders_Customers] FOREIGN KEY ([CustomerID]) REFERENCES [dbo].[Customers] ([CustID])
COMMIT TRANSACTION

The Data Compare tool is a simple tool for accomplishing a useful task.  Since I discovered it, it has saved me a lot of time setting up new data environments.

 

.Net | SQL Server | VSTS
Friday, August 15, 2008 8:02:53 AM (Eastern Standard Time, UTC-05:00)
 Thursday, August 14, 2008

Writing Unit Tests is an essential step in developing robust, maintainable code.  Unit Tests increase quality and mitigate the risk of future code changes.  However, relatively few developers take the time to write unit tests for their stored procedures.  The primary reason for this is that few tools exist to test stored procedures.

Microsoft Visual Studio Team System 2008 Database Edition (aka “Data Dude”) provides tools to help developers write unit tests against SQL Server stored procedures.  The tool integrates with MSTest, which is a testing framework many developers are already using for their other unit tests.

In order to write unit tests for your stored procedures, those stored procedures must be in a database project.  For information on how to create a database project from a SQL Server database see: http://www.davidgiard.com/2008/08/11/DataDudeTutorial1CreatingADatabaseProject.aspx

This document describes how to create a database unit test.

1.       Launch Visual Studio and open your Database Project.

2.       Open the Schema View.  Select View | Schema View.

3.       Right-click a stored procedure and select Create Unit Test from the context menu.  The Create Unit Tests dialog displays.

    Figure 1

4.       Check the checkboxes next to all the stored procedures for which you wish to create unit tests.  Select the .Net language (Visual Basic .Net or C#) in which you want the code to be generated.  You won’t be modifying this code, so the choice isn’t critical, but I tend to keep all my code in the same language, so you may as well choose your favorite language here.  Enter a meaningful name for the Unit Test project and class.  I like to name my Unit Test projects the same as my database project, followed by “Tests” or “UnitTests”.  If this is a new Database Unit Test project, the Database Unit Test Configuration dialog displays.

    Figure 2

5.       The Database Unit Test Configuration dialog allows you to specify what you want to occur when you run these unit tests.  The dialog is organized into the following sections.

a.       Database connections

    i.      Execute unit tests using the following data connection
This is the database against which tests will run.  Typically I set this to my Development or QA database.

    ii.      Use a secondary data connection to validate unit tests
You may specify a different database against which to validate the syntax of your unit tests and confirm that the objects you refer to exist.  This might be useful if you are writing tests while disconnected from your testing database, but I never set this option.

b.      Deployment

    i.      Automatically deploy the database project before unit tests are run
To save manual steps, you may wish to check this box and deploy the database project to the database each time you run your unit tests.  This slows down the testing step, so I do not select this option.  I prefer to deploy my changes once and then run my unit tests, sometimes several times.

c.       Database state

    i.      Generate Test data before Unit tests are run
It is often useful to populate your database with some test data prior to your test run.  Use this button to do this.

6.       After creating your unit tests, you need to modify each one and specify what you are testing.  Open the Solution Explorer (View | Solution Explorer).

7.       Double-click the unit test class to open it in the unit test designer. 

    Figure 3

8.       The Unit Test Designer contains some controls and two panes as described below. 

a.       A class can contain multiple tests.  The first control is a dropdown that allows you to select which test you are designing.

b.      To the right of the Test Name dropdown is another dropdown that allows you to specify what part of the test you are writing.  You can choose between the test itself, the “Pre-test” (which runs before the test is executed) and the “Post-test” (which runs after the test has completed – successfully or unsuccessfully).

    Figure 4

c.       Further to the right are three buttons that allow you to add a new test or to delete or rename the currently active test.

d.      Below the controls is the test editor.  This is where you will write your test, in T-SQL; Data Dude provides some stub code to get you started.  Write SQL statements that call your stored procedure and return one or more results.

e.      Below the test editor is the Test Conditions pane.  It is here that you enter your assertions. 

    Figure 5

    i.      You can test for a given result set having 0 rows, 1 or more rows, or an exact number of rows.

    ii.      You can also test if a specific column and row in a given result set evaluates to a given value.

    iii.      Click the “+” button to add new assertions.  Highlight an existing assertion row to edit it, or click the “x” button to remove it.

    iv.      Use the Properties window to modify properties of the assertion.  Many assertions are based on a given resultset.  When I first started writing unit tests, I found it difficult to determine which resultset was which.  Basically, any line in your SQL script that begins with the word “SELECT” creates a resultset.  Each resultset is numbered, beginning with 1, in the order it is created in your script.  I sometimes find it useful to copy the SQL code, paste it into a SQL Server Management Studio query window, and run it.  Each resultset then appears in a separate grid in the Results pane, making it easier to see a sample of each result set and the order in which they are created.

f.        You run your database unit tests the same way you run any MSTest unit test.  One way to run the tests is to open the Test List Editor (Test | Windows | Test List Window), check the tests you want to run, and click the Run Checked Tests toolbar button.  Tests in which all assertions prove true pass; all others fail.  For comparison, a sketch of an equivalent check written in plain C# appears below.
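To illustrate the kind of check a row-count test condition performs, here is a sketch of an equivalent test written in plain C# with MSTest and ADO.Net.  This is only an illustration, not the code Data Dude generates; the stored procedure name, connection string, and expected count are all hypothetical.

using System.Data;
using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StoredProcedureTests
{
    // Hypothetical connection string; point it at your Development or QA database.
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=Test_Dev;Integrated Security=True";

    [TestMethod]
    public void GetProductTypes_ReturnsAtLeastOneRow()
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        using (SqlCommand command = new SqlCommand("dbo.GetProductTypes", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();

            int rowCount = 0;
            using (SqlDataReader reader = command.ExecuteReader())
            {
                // Count the rows in the first resultset; call reader.NextResult()
                // to advance to resultset 2, 3, and so on.
                while (reader.Read())
                    rowCount++;
            }

            // The equivalent of a "row count" test condition in the designer.
            Assert.IsTrue(rowCount >= 1, "Expected at least one row.");
        }
    }
}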

Using Data Dude, you can extend your unit tests to cover your database objects and, therefore, improve the overall quality and maintainability of your code.

.Net | SQL Server | VSTS
Thursday, August 14, 2008 8:56:47 AM (Eastern Standard Time, UTC-05:00)
 Wednesday, August 13, 2008

Microsoft Visual Studio Team System 2008 Database Edition (aka “Data Dude”) provides tools for managing and deploying SQL Server databases. 

In our last tutorial, we described how a database developer would use the Schema Compare tool to update a database project with changes to a SQL Server database.

This article describes how to use the Schema Compare tool to push those changes out to a different SQL Server database.  There are two scenarios where you would do this. 

In Scenario 1, a developer has a local copy of the development database and wishes to get the latest updates to the database. 

In Scenario 2, a database administrator (DBA) or build master is charged with migrating database changes from one environment to the next.  Just as .Net and web code gets regularly migrated from a development environment to a QA or Production environment, database object code must also be migrated, and that migration generally must be kept in sync with all code that depends on those database objects.

We start this process by launching Visual Studio and opening the database project.  If a source code repository such as TFS is used, we need to get the latest code from the repository.

The database to which we wish to write the changes is known to Data Dude as the “target database”.  We need to make sure that a connection exists in Visual Studio to the target database.  This is a one-time step and you can use the Server Explorer (View | Server Explorer) to create a connection.

The following steps describe how to propagate changes to the database.

1.       Launch Visual Studio and open the database project.  Get the latest source code from your source code repository.

2.       From the Visual Studio menu, select Data | Schema Compare | New Schema Compare.  The New Schema Compare dialog displays.

    Figure 1

3.       Under Source Schema, select the Project radio button and select your database project from the dropdown list.

4.       Under Target Schema, select the Database radio button and select the connection to your database from the dropdown list.

5.       Click the OK button to display the Schema Compare window.

    Figure 2

6.       The Schema Compare window lists every object that exists in either the database or the database project.  The objects are grouped in folders by object type (Tables, views, stored procedures, etc.)  You can expand or collapse a folder to view or hide objects of that type.  The important column is “Update Action” which describes what will happen if you write the updates to the target.

a.       Objects that exist in the source (the project) but not in the target (the database) were likely added to the project after the last synchronization.  By default, the Update Action will be “Create”, meaning the object will be created in the target database.

b.      Objects that exist in both the source and the target will have an Update Action of “Update” if they have been modified since the last synchronization or “Skip” if they have not.

c.       Objects that exist in the destination (the database) but not in the source (the project) were likely dropped after the last synchronization. By default, the Update Action will be “Drop” meaning the object will be removed from the database.

7.       On a database with many objects, it is useful to view only the objects that have changed since the last synchronization.  To do this, click the Filter toolbar button and select Non Skip Objects. 

    Figure 3

8.       If you wish, you can modify the Update Action on objects by selecting the dropdown in the “Update Action” column.  Some actions are grayed out because Data Dude will not allow you to perform any action that would violate referential integrity rules. 

9.       After you have set the “Update Action” of every object appropriately, you have a couple options.

a.       You can migrate your changes immediately to the target database by clicking the “Write Updates” toolbar button.  Click Yes at the confirmation to write the updates to the target database.

b.      Alternatively, you can export your changes to a SQL script by clicking the Export To Editor toolbar button.  This will create a single text file containing a SQL script that you can run from a query window of SQL Server Management Studio.  This is useful if you need to make changes to the script prior to executing it.  I have used this technique when my database contains views or stored procedures that refer to remote servers and I want to modify the name of the server before migrating the object.

Alternatively, you can deploy changes from a database project to a database by “Deploying” the project (select Build | Deploy Solution).  This deploys your changes using the settings found on the Build tab of the project properties page.  This method requires fewer steps, but it is less flexible than the method described above.  In particular, it does not allow you to select which objects are deployed or export and modify the script of database changes.

In the next article, we will discuss how to use Data Dude to write Unit Tests against SQL Server stored procedures.

.Net | SQL Server | VSTS
Wednesday, August 13, 2008 6:46:54 AM (Eastern Standard Time, UTC-05:00)
 Tuesday, August 12, 2008

Microsoft Visual Studio Team System 2008 Database Edition (aka “Data Dude”) provides tools for managing and deploying SQL Server databases. 

In our last tutorial, we described how to create a new database project based on an existing SQL Server database.  As the source database changes, you will want to update the database project to reflect those changes.  This article describes how to use the Schema Compare tool to import database schema changes into your database project.  The steps in this article are typically performed by a database administrator (DBA) or database developer who is charged with creating tables, views, functions and stored procedures.

The Schema Compare tool can be used to display and manage differences between two databases, between two database projects, or between a database and a database project.  Most of the time, I use it to compare a database with a database project. 

After making a change to a database schema (for example, adding a new table or adding a new column to a table), use the Schema Compare tool as described below to update an existing database project with these changes.

1.       Launch Visual Studio and open your Database Project. (For info on how to create a database project from a SQL Server database see: http://www.davidgiard.com/2008/08/11/DataDudeTutorial1CreatingADatabaseProject.aspx )

2.       From the Visual Studio menu, select Data | Schema Compare | New Schema Compare.  The New Schema Compare dialog displays.

    Figure 1

3.       Under Source Schema, select the Database radio button and select the connection to your database from the dropdown list.

4.       Under Target Schema, select the Project radio button and select your database project from the dropdown list.

5.       Click the OK button to display the Schema Compare window.

    Figure 2

6.       The Schema Compare window lists every object that exists in either the database or the database project.  The objects are grouped in folders by object type (Tables, views, stored procedures, etc.)  You can expand or collapse a folder to view or hide objects of that type.  The important column is “Update Action” which describes what will happen if you write the updates to the target.

a.       Objects that exist in the source (the database) but not in the target (the project) were likely added to the database after the last synchronization.  By default, the Update Action will be “Create” meaning the object will be created in the database project.

b.      Objects that exist in both the source and the target will have an Update Action of “Update” if they have been modified in the database since the last synchronization or “Skip” if they have not.

c.       Objects that exist in the destination (the project) but not in the source (the database) were likely dropped from the database after the last synchronization. By default, the Update Action will be “Drop” meaning the object will be removed from the database project.

7.       If you are updating your database project frequently, most of the objects will be unchanged and marked “Skip”.  On a database with many objects, it is useful to view only the objects that have changed since the last synchronization.  To do this, click the Filter toolbar button and select Non Skip Objects.

    Figure 3

8.       At this point, you can view differences and you may wish to modify the Update Action of some objects.

a.       If you click on an object row in the Schema Compare window, the SQL definition code of both the source and destination version appears in the Object Definition window.  Any differences between the two versions will be highlighted (changed lines in darker blue; new lines in darker green).

b.      If you like, you can modify the Update Action of any object by selecting the dropdown in the “Update Action” column.  Some actions are grayed out because Data Dude will not allow you to perform any action that would violate referential integrity rules.  If several developers are sharing the same development database, you may wish to skip those objects on which you are not working.  You may also decide that some objects are ready to share with the rest of the team while others are not fully tested and should be skipped.  It is possible to change the Update Action of every object of a given type by right-clicking the type folder and selecting the desired action to apply to all objects of that type.

9.       After you have set the “Update Action” of every object appropriately, you can migrate your changes to the database project by clicking the Write Updates toolbar button.  Click Yes at the confirmation to write the updates to the database project.

    Figure 4

10.   If you are using a source control repository, such as TFS, you will want to check in your changes.

In the next article, we will discuss how to use the Schema Compare tool to write changes to a new database environment.

.Net | SQL Server | VSTS
Tuesday, August 12, 2008 7:40:04 AM (Eastern Standard Time, UTC-05:00)
 Monday, August 11, 2008

Microsoft Visual Studio Team System 2008 Database Edition (aka “Data Dude”) provides tools for managing and deploying SQL Server databases.  In order to use Data Dude to manage an existing SQL Server database, the first step is to create a database project. 

There are a couple of key points you will need to know before using Data Dude.

1.       The current version of Data Dude only works on SQL Server 2000 and SQL Server 2005.  Visual Studio 2008 Service Pack 1 should provide support for SQL Server 2008.  I will describe an example using SQL Server 2005.

2.       The validation engine in Data Dude requires that you install either SQL Server or SQL Express on the same machine on which Data Dude is installed. 

3.       You must grant “Create database” rights in this database engine to the currently logged-in user.

Now, let’s discuss how to create a database project to manage an existing SQL Server 2005 database.

1.       Open Visual Studio. 

2.       Select File | New Project…  The New Project dialog displays.

3.       Under Project Type, select Database Projects\Microsoft SQL Server

4.       Under Templates, select SQL Server 2005 Wizard.

5.       Enter a meaningful name and location for this project. 
Typically, my databases have names like “AdventureWorks_Dev” and “AdventureWorks_QA” which describe both the data and the environment to which the data belongs.  Because a single database project is used for all environments, I name my database project to describe the data and follow it with “DB” to make it obvious it is a database project.  In the above example, I would name my database project “AdventureWorksDb”.  In this exercise, I’ll create a project named “TestDB”.
    Figure 1 – New Project dialog

6.       The New Database Project Wizard displays with the Welcome screen active.

7.       At the Welcome screen, click the Next button to advance to the Project Properties screen.
    Figure 2 – Project Properties screen

8.       I almost never change the options on the Project Properties screen.   If my database contains any stored procedures or functions written in C# or VB.Net, I will check the Enable SQLCLR checkbox. 

9.       Click the Next button to advance to the Set Database Options screen.
    Figure 3 – Set Database Options screen

10.   The options on the Set Database Options screen correspond to the settings you will find in SQL Server Management Studio when you right-click a database and select Properties.  The defaults in the database project wizard are also the defaults in SQL Server.  Since I seldom override these defaults in SQL Server, there is usually no reason to change them on this screen.

11.   Click the Next button to advance to the Import Database Schema screen.
    Figure 4 – Import Database Schema screen

12.   On the Import Database Schema screen, check the Import Existing Schema checkbox.  This enables the Source database connection dropdown.  If you have already created a connection to your database, select it from the dropdown.  If you have not yet created a connection, click the New Connection button to create one now.  The process for creating a database connection in Visual Studio hasn’t changed for several versions of the product, so I won’t repeat it here.  However, it is worth noting that, although Data Dude requires that you have a local installation of SQL Server, the database you connect to here can be located on any server to which you have access.  I usually connect to the Development database because this is the first database I create for an application.

13.   Click the Next button to advance to the Configure Build and Deploy screen.
    Figure 5 – Configure Build and Deploy screen

14.   The Configure Build and Deploy screen contains settings that will take effect when you “deploy” your database project.  Deploying a database project writes changes to the schema of a target database (specified in the Target database name field of this screen) and is accomplished by selecting the menu options Build | Deploy with the database project open and selected.  Deploying is most useful when each developer has his own copy of the development database and needs a quick way to synchronize his schema with a master copy.  

15.   Click the Finish button to create the database project initialized with schema objects found in the source database and with the settings you chose in the wizard screens.

16. After Visual Studio finishes creating the database project, view the objects in the Solution Explorer (select View | Solution Explorer).  You should see a "Schema Objects" folder in the project containing a subfolder for each type of database object.  Open the "Tables" subfolder and you will see files containing scripts for each table in your database.  Double-click one of these script files to see the SQL code generated for you.
    Figure 6 – Database project in Solution Explorer

17. If you use a source control repository, such as TFS, you will want to check this project into the repository to make it easier to share with others.

As you can see, when you use the wizard to create your project, most of the work is done for you.  You are able to change the default settings, but in most cases this is not necessary.  Often, the only change I make on the wizard screens is when I select a database connection.

In the next article, we will discuss how to use the Schema Compare tool to bring schema changes into or out of your database project.

.Net | SQL Server | VSTS
Monday, August 11, 2008 9:12:22 AM (Eastern Standard Time, UTC-05:00)
 Sunday, August 10, 2008

Visual Studio Team System 2008 Database Edition is a mouthful to say, so a lot of people affectionately call it “Data Dude”.

Data Dude provides a set of tools integrated into Visual Studio that assist developers in managing and deploying SQL Server database objects.

There are four tools in this product that I have found particularly useful: the Database Project; the Schema Compare tool; the Data Compare Tool; and Database Unit Tests.

A Database Project is a Visual Studio project just as a class library or ASP.Net web project is.  However, instead of holding .Net source code, a Database Project holds the source code for database objects, such as tables, views and stored procedures.  This code is typically written in SQL Data Definition Language (DDL).  Storing this code in a Database Project makes it easier to check it into a source code repository such as Team Foundation Server (TFS); and simplifies the process of migrating database objects to other environments.

The Schema Compare tool is most useful when comparing a database with a Visual Studio Database Project.  Developers can use this tool after adding, modifying or deleting objects from a database in order to propagate those changes to a Database Project.  Later, a Database Administrator (DBA) can compare the Database Project to a different database to see what objects have been added, dropped or modified since the last compare.  The DBA can then deploy those changes to the other database.  This is useful for migrating data objects from one environment to another, for example when moving code changes from a Development database to a QA or Production database.

Data Compare is another tool for migrating from one database environment to the next.  It facilitates the migration of records in a given table from one database to another.  The table must have the same structure in both the source and destination databases.  I use this tool when I want to seed values into lookup tables, such as a list of states or a list of valid customer types stored in database tables.

Unit tests have increased in popularity in the last few years as developers have come to realize their importance in maintaining robust, error-free code.  But unit testing stored procedures is still relatively rare, even though code in stored procedures is no less important than code in .Net assemblies.  Data Dude provides the ability to write unit tests for stored procedures using the same testing framework (MSTest) you use for unit tests of .Net code.  The tests work the same as your other unit tests: you write code and assert what you expect to be true.  Each test passes only if all its assertions are true at runtime.  The only difference is that your test code is written in T-SQL instead of C# or Visual Basic.Net. 

There are some limitations.  In order to use Data Dude, you must have either SQL Server 2005 or SQL Express installed locally on your development machine, and you (the logged-in user) must have "Create Database" rights on that local installation.  To my knowledge, Data Dude only works with SQL Server 2000 and 2005 databases.  Plans to integrate with SQL Server 2008 have been announced, but I don't know Microsoft's plans for other database engines.  I also occasionally find myself wishing Data Dude could accomplish its tasks more easily or in a more automated fashion.  I wish, for example, I could specify that I always want to ignore database users in a database and always want to migrate everything else when using the Schema Compare tool.  But overall, the tools in this product have increased my productivity significantly.  Nearly every application I write has a database element to it, and anything that helps me with database development, management, and deployment improves the quality of my applications.

.Net | SQL Server | VSTS
Sunday, August 10, 2008 7:34:41 AM (Eastern Standard Time, UTC-05:00)
 Saturday, August 09, 2008

When applications service a large number of simultaneous users, the developer needs to take this into account and find ways to ease the application’s bottlenecks. 

One way to help speed up a stressed application is to load into memory resources that will be requested by multiple users.  Reading from memory is much faster than reading from a hard drive or a database, so this can significantly speed up an application. 

However, each computer contains a finite amount of memory, so there is a limit to how much data you can store there.

Microsoft Distributed Cache (code named "Velocity") attempts to address this problem.  It allows your code to store data in an in-memory cache and it allows that cache to be stored on multiple servers, thus increasing the amount of memory available for storage. 
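Because the CTP's API is still changing, I won't try to reproduce Velocity's object model here.  Instead, below is a sketch of the cache-aside pattern that a distributed cache automates, using an ordinary in-memory dictionary as a stand-in.  The names are hypothetical; Velocity's value is that the equivalent store spans multiple servers rather than a single process.

using System;
using System.Collections.Generic;

// A stand-in for a cache client.  With a distributed cache like Velocity,
// the store behind this class would be spread across multiple servers.
public class SimpleCache
{
    private readonly Dictionary<string, object> _store = new Dictionary<string, object>();
    private readonly object _lock = new object();

    // Return the cached value if present; otherwise load it from the slower
    // source (disk or database), cache it, and return it.
    public T GetOrLoad<T>(string key, Func<T> load)
    {
        lock (_lock)
        {
            object value;
            if (!_store.TryGetValue(key, out value))
            {
                value = load();       // the expensive read happens only once
                _store[key] = value;  // later readers are served from memory
            }
            return (T)value;
        }
    }
}

A caller might write cache.GetOrLoad("States", LoadStatesFromDatabase) so that only the first request pays the cost of the database read.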

Velocity even ships with a provider that allows you to store a web site's session state, making it possible to increase the amount of memory available to your session data.

Microsoft has not yet published a release date for Velocity, but it is available as a Community Technology Preview (CTP).  You can download these bits and read more about it at http://code.msdn.microsoft.com/velocity.

The current CTP is not production-ready (I had trouble keeping the service running on my Vista machine), but the technology shows enough promise that it is worth checking out.  When the glitches are fixed, this will make .Net an even more appealing choice for developing enterprise applications.

Saturday, August 09, 2008 8:38:56 AM (Eastern Standard Time, UTC-05:00)
 Thursday, August 07, 2008

Microsoft recently released the Managed Extensibility Framework (MEF) which allows developers to add hooks into their applications so that the application can be extended at runtime.

Using MEF is a two-step process.  The first step is performed by the application developer, who adds attributes or code at defined points in the application.  At these points, the application searches for extensible objects and adds or calls them at runtime. 
The second step is performed by third-party developers, who use the MEF application programming interface (API) to mark classes in an “extension” assembly as extensible so that they will be discoverable by the above-mentioned applications.

The two steps are loosely-coupled, meaning neither the application nor the extension assembly needs to know anything about the other.  We don't even need to set a reference from one project to another in order to call across these boundaries.
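To make the two steps concrete, below is a sketch using the attribute-based model.  The names follow the general shape of MEF's attribute API, which may differ from the current CTP, and the contract interface, module class, and directory path are all hypothetical.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;          // Export/Import attributes, ComposeParts
using System.ComponentModel.Composition.Hosting;  // catalogs and container

// The contract the application publishes for extenders.
public interface IModule
{
    string Name { get; }
}

// Step 2: a third-party developer marks a class in an extension assembly
// as discoverable.  No reference to the application is required.
[Export(typeof(IModule))]
public class PayrollModule : IModule
{
    public string Name { get { return "Payroll"; } }
}

// Step 1: the application declares an extension point and composes at runtime.
public class Shell
{
    [ImportMany]
    public IEnumerable<IModule> Modules { get; set; }

    public void LoadModules()
    {
        // Search a well-known directory for assemblies containing exports.
        var catalog = new DirectoryCatalog(@"C:\MyApp\Modules");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);  // fills Modules with every discovered export
    }
}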

I can think of two scenarios where this technology would be useful.

In scenario 1, an independent software vendor develops and sells a package with many pluggable modules.  Customers may choose to buy and install one module or all modules in the package.  For example, an Accounting package may offer General Ledger, Accounts Payable, Accounts Receivable, Payroll, and Reporting modules, but not all users will want to pay for every module.  By using MEF, the software could search a well-known directory for any module assemblies (flagged as extensions by MEF) and add to the menu only those that are installed.  With MEF in place, more modules could be added at a later time with no recompiling.

In scenario 2, developers create and deploy an application with a given set of functionality and create points at which other developers are allowed to extend the application using MEF.  By publishing these extendable points, they can allow developers to add functionality to the application without modifying or overriding the original source code.  This is a much safer way of extending functionality.  Extensions could be anything from new business rules or workflow to additional UI elements on forms.

All the extensions happen at runtime, and MEF gives developers the ability to add metadata to better describe their extension classes.  By querying this metadata, we can conditionally load only those extensions that meet expected criteria.  The best part of this feature is that we can query a class's metadata without actually loading that class into memory.  This can be a huge resource savings over similar methods, such as Reflection.

MEF is currently released as a Community Technology Preview (CTP), so the API is likely to change before its final release.  You can download the CTP and read more about it at http://code.msdn.microsoft.com/mef.   By learning it now, you can be prepared to add extensibility to your application when MEF is fully released.

Thursday, August 07, 2008 2:37:42 PM (Eastern Standard Time, UTC-05:00)
 Thursday, March 20, 2008

The 'Heroes Happen Here' launch was well worth my time.  Not only did we get a chance to see demos of Microsoft's newest products, I also got a chance to meet up with many of the most energetic developers in the midwest.  About 50 of us headed to Greektown after the event for an after-event event.  You can view my photos of the day here.

At the end of the day, everyone walked off with (among other things) a copy of Visual Studio 2008.  I was excited because I had spent most of the day following the Developers Track, which outlined the features of the new Visual Studio.  I'll be installing it this weekend. 

I especially like the fact that I can install VS 2008 and work on applications using older .Net frameworks.  Framework versions 2.0, 3.0, and 3.5 are all available.  This mitigates the risk of upgrading, and it means I don't need to install multiple versions of Visual Studio.  I can write code that targets .Net Framework 2.0 but still take advantage of the improvements in the IDE, such as client-side debugging, stylesheet troubleshooting, and built-in Ajax support.

My customer has decided to upgrade to Visual Studio 2008 but defer the decision to upgrade the framework on which we have built our applications.

Here is a list of new features in Visual Studio 2008:  http://msdotnetsupport.blogspot.com/2007/11/22-new-features-of-visual-studio-2008.html
      

Thursday, March 20, 2008 2:15:04 PM (Eastern Standard Time, UTC-05:00)
 Monday, March 17, 2008

Tomorrow is the big Heroes Happen Here event in Detroit.

Microsoft is launching new versions of Visual Studio, Windows Server and SQL Server.  I'm looking forward to seeing the new tools and how to utilize these tools in my projects. 

I believe the Detroit sessions are full but it may be possible to attend the event without attending the sessions.  It will be held from 8AM - 5PM at the Marriott in the Renaissance Center downtown.  You can get more information at http://www.microsoft.com/heroeshappenhere/default.mspx

Supposedly, Quick Solutions is sending up a busload of consultants from Columbus to the event.

 

Monday, March 17, 2008 10:21:29 AM (Eastern Standard Time, UTC-05:00)