Marquee de Sells: Chris's insight outlet

You've reached the internet home of Chris Sells, who has a long history as a contributing member of the Windows developer community. He enjoys long walks on the beach and various computer technologies.

Roslyn Syntax Visualizer Tools

As I do more with Roslyn, I find I want more information about what I’m parsing and how it’s represented in the Roslyn object model. I could, of course, have built myself a little OM dumper for Roslyn, but instead I dug through the samples and found two cool ones built right in, both provided in the Documents\Microsoft Codename Roslyn CTP - October 2011\Shared folder.

Both Roslyn visualizer samples show a set of objects from the syntax part of the Roslyn API and the associated properties for the currently selected node as well as the associated text. The difference is only where the text comes from, a syntax tree or a text file.

Syntax Debugger Visualizer

The SyntaxDebuggerVisualizer sample (fully described in the associated Readme.html) allows you to create a new Visual Studio visualizer such that you can hover over a node from the syntax tree, click on the magnifying glass in the data tip and get this:


The text comes from the syntax tree parsed in the program. As each node in the syntax tree in the visualizer is selected, the associated properties are shown below and the range of text is shown on the right.

Syntax Visualizer Extension

The SyntaxVisualizerExtension sample (also described in its own associated Readme.html) shows the syntax tree from the current C# or VB file that’s open in Visual Studio. You get to the visualizer by loading the SyntaxVisualizerExtension sample project, starting the app (under the debugger [F5] or not [Shift+F5] as you choose), which starts another copy of Visual Studio. In this instance of VS, open a C# or VB source file and choose View | Other Windows | Roslyn Syntax Visualizer, which shows the following:


This version of the visualizer works just like the other one except that it gets the text from the current source code file. As you open different files, the visualizer window updates itself. As you change the selected node in the visualizer’s syntax tree, the associated code in the file is selected.

Both of these tools are very helpful for understanding what’s been parsed by Roslyn. I personally like the debugger visualizer, as it’s always available without starting up a new instance of VS, but honestly I’m happy to have either, let alone both!

Update: Plus, there’s a cool tree view, too. Check it out!


REPL for the Roslyn CTP 10/2011

I don’t know what it is, but I’ve long been fascinated with using the C# syntax as a command line execution environment. It could be that PowerShell doesn’t do it for me (I’ve seriously tried half a dozen times or more). It could be that while LINQPad comes really close, I still don’t have enough control over the parsing to really make it work for my day-to-day command line activities. Or it may be that my friend Tim Ewald has always challenged csells to sell C shells by the sea shore.

Roslyn REPL

Whatever it is, I decided to spend my holiday time futzing with the Roslyn 2011 CTP, which is a set of technologies from Microsoft that gives you an API over your C# and VB.NET code.

Why do I care? Well, there are all kinds of cool code analysis and refactoring tools I could build with it and I know some folks are doing just that. In fact, at the BUILD conference, Anders showed off a “Paste as VB” command built with Roslyn that would translate C# to VB slick as you please.

For me, however, the first thing I wanted was a C# REPL (Read-Evaluate-Print-Loop) environment. Of course, Roslyn ships out of the box with a REPL tool that you can get to with View | Other Windows | C# Interactive Window inside Visual Studio 2010. In that window, you can evaluate code like the following:

> 1+1
2
> void SayHi() { Console.WriteLine("hi"); }
> SayHi();
hi

Just like modern dynamic languages, as you type your C# and press Enter, it’s executed immediately, even allowing you to drop things like semi-colons or even calls to WriteLine to get output (notice the first “1+1” expression). This is a wonderful environment in which to experiment with C# interactively, but just like LINQPad, it was a closed environment; the source was not provided!

The Roslyn team does provide a great number of wonderful samples (check the “Microsoft Codename Roslyn CTP - October 2011” folder in your Documents folder after installation). One in particular, called BadPainting, provides a text box for inputting C# that’s executed to add elements to a painting.

But that wasn’t enough for me; I wanted at least a Console-based command line REPL like the cool Python, JavaScript and Ruby kids have. And so, with the help of the Roslyn team (it pays to have friends in low places), I built one:

RoslynRepl Sample Download

Building it (after installing Visual Studio 2010, Visual Studio 2010 SP1, the Visual Studio 2010 SDK and the Roslyn CTP) and running it lets you do the same things that the VS REPL gives you:


In implementing my little RoslynRepl tool, I tried to stay as faithful to the VS REPL as possible, including the help implementation:


If you’re familiar with the VS REPL commands, you’ll notice that I’ve trimmed the Console version a little as appropriate, most notably the #prompt command, which only has “inline” mode (there is no “margin” in a Console window). Other than that, I’ve built the Console version of REPL for Roslyn such that it works just exactly like the one documented in the Roslyn Walkthrough: Executing Code in the Interactive Window.

Building a REPL for any language is, as you might imagine, a 4-step process:

  1. Read input from the user
  2. Evaluate the input
  3. Print the results
  4. Loop around to do it again until told otherwise
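In skeletal form, those four steps come down to something like this (a minimal sketch; the Evaluate placeholder stands in for the Roslyn-backed compilation and execution described below):

```csharp
using System;

class MiniRepl {
  static void Main() {
    while (true) {                                  // 4. Loop
      Console.Write("> ");
      string input = Console.ReadLine();            // 1. Read
      if (input == null || input == "#exit") { break; }
      Console.WriteLine(Evaluate(input));           // 2. Evaluate + 3. Print
    }
  }

  // Placeholder: the real version hands the input to the Roslyn script engine
  static string Evaluate(string input) { return input; }
}
```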


Step 1 is a simple Console.ReadLine. Further, the wonder and beauty of a Windows Console application is that you get complete Up/Down Arrow history, line editing and even obscure commands like F7, which brings up a list of commands in the history:


The reading part of our REPL is easy and has nothing to do with Roslyn. It’s evaluation where things get interesting.


Before we can start evaluating commands, we have to initialize the scripting engine and set up a session so that as we build up context over time, e.g. defining variables and functions, that context is available to future lines of script:

using Roslyn.Compilers;
using Roslyn.Compilers.CSharp;
using Roslyn.Compilers.Common;
using Roslyn.Scripting;
using Roslyn.Scripting.CSharp;
// Initialize the engine
string[] defaultReferences = new string[] { "System", ... };
string[] defaultNamespaces = new string[] { "System", ... };
CommonScriptEngine engine = new ScriptEngine(defaultReferences, defaultNamespaces);
Session session = Session.Create();

// HACK: work around a known issue where namespaces aren't visible inside functions
foreach (string nm in defaultNamespaces) {
  engine.Execute("using " + nm + ";", session);
}

Here we’re creating a ScriptEngine object from the Roslyn.Scripting.CSharp namespace, although I’m assigning it to the base CommonScriptEngine class which can hold a script engine of any language. As part of construction, I pass in the same set of assembly references and namespaces that a default Console application has out of the box and that the VS REPL uses as well. There’s also a small hack to fix a known issue where namespaces aren’t visible during function definitions, but I expect that will be unnecessary in future drops of Roslyn.

Once I’ve got the engine to do the parsing and executing, I create a Session object to keep context. Now we’re all set to read a line of input and evaluate it:

ParseOptions interactiveOptions =
  new ParseOptions(kind: SourceCodeKind.Interactive,
                   languageVersion: LanguageVersion.CSharp6);
...
while (true) {
  Console.Write("> ");
  var input = new StringBuilder();

  while (true) {
    string line = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(line)) { continue; }

    // Handle #commands
    ...

    // Handle C# (including #define and other directives)
    input.AppendLine(line);

    // Check for complete submission
    if (Syntax.IsCompleteSubmission(input.ToString(), options: interactiveOptions)) {
      break;
    }

    Console.Write(". ");
  }

  Execute(input.ToString());
}

The only thing we’re doing that’s at all fancy here is collecting input until we have a complete command, which allows commands to span multiple lines:


The IsCompleteSubmission function is the thing that checks whether the script engine will have enough to figure out what the user meant or whether you need to collect more. We do this with a ParseOptions object optimized for “interactive” mode, as opposed to “script” mode (reading scripts from files) or “regular” mode (reading fully formed source code from files). The “interactive” mode lets us do things like “1+1” or “x” where “x” is some known identifier without requiring a call to Console.WriteLine or even a trailing semi-colon, which seems like the right thing to do in a REPL program.

Once we have a complete command, single or multi-line, we can execute it:

public void Execute(string s) {
  try {
    Submission<object> submission = engine.CompileSubmission<object>(s, session);
    object result = submission.Execute();
    bool hasValue;
    ITypeSymbol resultType = submission.Compilation.GetSubmissionResultType(out hasValue);

    // Print the results
    ...
  }
  catch (CompilationErrorException e) {
    Error(e.Diagnostics.Select(d => d.ToString()).ToArray());
  }
  catch (Exception e) {
    Error(e.ToString());
  }
}

Execution is a matter of creating a “submission,” which is a unit of work done by the engine against the session. There are helper methods that make this easier, but we care about the output details so that we can implement our REPL session.


Printing the output depends on the type of result we get back:

ObjectFormatter formatter =
  new ObjectFormatter(maxLineLength: Console.BufferWidth, memberIndentation: " ");
...
Submission<object> submission = engine.CompileSubmission<object>(s, session);
object result = submission.Execute();
bool hasValue;
ITypeSymbol resultType = submission.Compilation.GetSubmissionResultType(out hasValue);

// Print the results
if (hasValue) {
  if (resultType != null && resultType.SpecialType == SpecialType.System_Void) {
    Console.WriteLine(formatter.VoidDisplayString);
  }
  else {
    Console.WriteLine(formatter.FormatObject(result));
  }
}

As part of the result output, we’re leaning on an instance of an “object formatter” which can trim things for us to the appropriate length and, if necessary, indent multi-line object output.

In the case that there’s an error, we grab the exception information and turn it red:

void Error(params string[] errors) {
  var oldColor = Console.ForegroundColor;
  Console.ForegroundColor = ConsoleColor.Red;
  foreach (var error in errors) { WriteLine(error); }
  Console.ForegroundColor = oldColor;
}

public void Write(params object[] objects) {
  foreach (var o in objects) { Console.Write(o.ToString()); }
}

void WriteLine(params object[] objects) {
  Write(objects);
  Console.WriteLine();
}


And then we do it all over again until the program is stopped with the #exit command (Ctrl+Z, Enter works, too).

Where Are We?

Executing lines of C# code, the hardest part of building a C# REPL, has become incredibly easy with Roslyn. The engine does the parsing, the session keeps the context and the submission gives you extra information about the results. To learn more about scripting in Roslyn, I recommend the following resources:

Now I’m off to add Intellisense support. Wish me luck!


Telerik: Best tech support response ever

I was playing around with the Telerik WPF controls the other day and I ran into an “issue.” It wasn’t a bug, just a pet peeve of mine, so knowing that two friends of mine, Stephen Forte and Doug Seven, both work at Telerik, I thought I’d report it. I created an account on their support web site and dropped in the following message:

From: Chris
Date: 11/17/2011 10:59:38 AM

when I'm choosing setup options in the Telerik installer, I can select options by clicking on the little, tiny box but I cannot select options by clicking on the wide, giant checkbox label. I'd really love to be able to do the latter as well as the former. thanks!

Within 24 hours, I got the best developer tech support response I’ve gotten in 30 years in this industry (damn, I’m old):

From: Telerik Admin
Date: 11/18/2011 8:09:42 AM

Hi Chris,
A very valid point indeed, thanks for sharing your opinion!
The thing is a bit tricky in terms of UX actually and I'd love your input here. Let me add some details:
The idea is that the click on the label is used for highlighting the item (thus change the displayed images) and the click on the checkbox is used for checking it.
Now, there are three approaches (different than the current one) of which I like none (maybe prefer the third actually):

1. The label click both highlights and checks/unchecks the checkbox. I don't like this one for two reasons: 1) if Telerik've set an item to be checked by default, we don't want the customer to uncheck it mistakenly and 2) if the item is unchecked by default, the customer might just want the original stuff and he'd need a second click to uncheck it back.

2. Use double-click on the label to check/uncheck the checkbox. Don't like it for it's not intuitive and noone would use it. As an example, the Windows Platform Installer has such a feature and we discovered it a year after its initial release - when we started checking it deeper.

3. Only check/uncheck a checkbox on label click if the item has already been highlighted. The drawback of this approach is that you would need two clicks to have the item state changed the first time you're on it. But still, this one seems kinda reasonable.

How do you find these?
Erjan Gavalji
the Telerik team

Explore the entire Telerik portfolio by downloading the Ultimate Collection trial package. Get it now >>

I have since learned that the Telerik developers support their own software and the benefits are obvious:

Erjan from Telerik – you’re my new hero. Thanks for answering my question so thoroughly. I have confidence that you’ll take my initial feedback and my reply to this email (show selection when you click on the checkbox label like the VS2010 installer does) and make the product even better.

It’s no wonder Telerik is an award winning software vendor. You have to love a company that’s willing to be open with their developers.

Update: as of a few days later, a new installer was posted that included my fix suggestion. Wow!


Mary Sells, 1921-2011, Rest In Peace

Mary Hohnke Sells died on a Monday afternoon on the last day of February, 2011 in her home in Fargo, ND. She had just turned 90 years old on the 18th of February. She passed away peacefully in her sleep during an afternoon nap, having been tucked in by her daughter-in-law earlier that day. She’s survived by her son J. Michael Sells, her daughter-in-law Charlene Schreiber, her grandson Chris Sells and granddaughter-in-law Melissa Plummer and her two teenaged great-grandsons, John Michael Sells and Thomsen Frederick Sells. She was the last of four siblings; John, Shirley and Jim have all gone ahead of her to prepare the way.

Mary died a much beloved mother, grandmother and great grandmother as well as a dear friend to most everyone she met. She was generous of spirit, baking and cooking for her friends and family almost right up until the day she died, making sure her loved ones stayed plump in her love. She was talented in the kitchen, keeping her family recipes close to her heart for only those most special in her life. She was also a mischievous soul, taking advantage of her quick mind and her family’s sympathy for her ailments in later years to cheat outrageously at games of all kinds.

Mary was born in 1921, making her a child of the Great Depression. She graduated from Fargo’s Central High School in 1940, after which she pursued a course of study in Radiology Technology. She attained her national certification in 1946 and held it for 60 years. She married her late husband John Dickenson Sells in 1946, being secretly thrilled but outwardly scandalized when he insisted on public displays of dancing and other such tom-foolery. John worked for the Northern Pacific Railway and Mary worked in multiple clinical locations as she moved with her husband to sites ranging from North Dakota to Washington state. Her first child, Mike, is 61 and a successful draftsman at a local civil engineering firm. Her second child, Gretchen, died when she was only 15 in 1969, taking some of the light from Mary’s eyes. Her husband John was soon to follow, dying in 1971 of complications following gall bladder surgery.

None of this stopped Mary from living her life, however, having gone to Seattle in 1978 to be closer to her sister Shirley and to follow her career, then moving back to Fargo in 1983 to be with her son and grandson. Before her move to Seattle, Mary helped take care of her grandson during the summers when he would visit. She was a second mother to him, doting on him and spoiling him thoroughly his whole life with food and attention. Till the day she died, Chris was her “baby boy,” in spite of his age of 41 and his height of 6’5”.

Mary was a member of the Mecca Chapter of the Order of the Eastern Star and received a 50-year membership acknowledgement for her many years of service. She was a life-long member of St. Mark’s Lutheran Church, an active member in the Women of the Evangelical Lutheran Church in America and a church quilter for most of those years. Mary was also an active member of PLS and enjoyed those friendships immensely.

Later in life, when her sister Shirley was diagnosed with cancer, the two of them, both in their 70s, made sure that Shirley’s “bucket list” was fulfilled, which included wine tours, roller coasters in Las Vegas and even, for Mary, an incident on a tipped raft on the Rogue River in Oregon. She lived a full, rich life on her own terms, never shy about what she wanted for herself and others, and always ready with advice, wanted or not.

Mary died in her own home while she still had the faculties to interact with the ones she loved, as she wanted. She will be deeply missed and felt daily in the hearts and minds of those she left behind.


The Basics of EF Validation: IDataErrorInfo

When you’re adding or updating data in your database, you really want to make sure that the data being sent to the database is good and true. Often, that’s something that can be checked in the database itself. The first thing you’ll want to do is make sure that the database has validation constraints set on the columns, like nullability or max data sizes. If you’re going EF model-first, you can set these properties on the properties of your entities. If you’re not, you can set these properties in the database or get even fancier and write triggers that check the validity of the data. Finally, you can disable insert, update and delete altogether in favor of stored procedures, changing your EF mapping to generate calls to those instead. The nice thing about checks in the database is that no matter how the data gets there, whether it’s via your EF-based app or not, the checks happen.

However, if you’d like to also put checks into your EF code, perhaps because you’d like to avoid a round-trip to the database for bad data, you can do so in your EF-enabled language of choice, e.g. C#.

Imagine a very simple EDM that describes web advertisements:


All properties on our entity type get a generated On<PropertyName>Changing and On<PropertyName>Changed method pair. If you want to check one property in isolation, it’s easy to provide an implementation of your partial method of choice, e.g.

namespace EdmTest {
  partial class Ad {
    partial void OnLinkChanging(string value) {
      if (!value.StartsWith("http://", StringComparison.InvariantCultureIgnoreCase)) {
        throw new ArgumentOutOfRangeException("Link must start with 'http://'");
      }
    }
  }
}

Here we’ve made sure that whenever we set the Link property, it must be of a certain format. If we were to violate that restriction, things go boom:


As MVC translates the form fields into values on the Ad object that is passed to the controller’s Create method, setting the Link property with a bad value triggers the exception:

// POST: /Ad/Create
public ActionResult Create(Ad ad) {...}

MVC catches the exception before the Create method is even called and the view shows the error message. Notice that the error message is not what we provided, however.

Further, sometimes there are problems with an object’s state that span more than one property. Because such a constraint can’t be checked on any single property change, we need to check it at the object level, not just at the property level.

For both of these issues, we have IDataErrorInfo.


The IDataErrorInfo interface was introduced back in the mists of time with .NET 1.x for use specifically with data binding in Windows Forms. I wouldn’t recommend that anyone invest in anything but maintenance on their WinForms apps, but ASP.NET, the Windows Presentation Foundation (WPF) and Silverlight all support IDataErrorInfo. Data binding is involved enough and GUI-framework-specific enough that you’ll need to read up on it in your favorite GUI-framework-specific book, but the IDataErrorInfo interface is exactly what we need even without data binding:

namespace System.ComponentModel {
  public interface IDataErrorInfo {
    string Error { get; }
    string this[string columnName] { get; }
  }
}

Notice that IDataErrorInfo exposes error descriptions at both the object and the property/column level. It’s easy to implement this standard interface on our example Ad class:

partial class Ad : IDataErrorInfo {
  public string Error {
    get {
      // Check the ad for errors
      if (string.IsNullOrEmpty(Title) && string.IsNullOrEmpty(ImagePath)) {
        return "Must set Title or ImagePath";
      }
      return null;
    }
  }

  public string this[string columnName] {
    get {
      // Check any specific property for errors
      switch (columnName) {
        case "Link":
          if (Link != null &&
              !Link.StartsWith("http://", StringComparison.InvariantCultureIgnoreCase)) {
            return "Link must start with 'http://'";
          }
          break;
      }
      return null;
    }
  }
}

Because each of the generated entity classes is partial, you can provide your own implementation to be merged with the generated implementation, in our case the IDataErrorInfo interface implementation. Now, when MVC hydrates an object that implements IDataErrorInfo, it’ll check to see if there are problems. To check, our controller provides the ModelState property, which itself provides the IsValid flag:

// POST: /Ad/Create
[HttpPost]
public ActionResult Create(Ad ad) {
  try {
    if (!ModelState.IsValid) { return View(); }
    ...
  }
  catch {
    return View();
  }
}

// POST: /Ad/Edit/
[HttpPost]
public ActionResult Edit(Ad ad) {
  try {
    if (!ModelState.IsValid) { return View(); }
    ...
  }
  catch {
    return View();
  }
}

In addition to the IsValid flag, the ModelState provides a list of property name/error message pairs that are displayed by the code the view generator spits out when it’s creating forms, e.g.

<% using (Html.BeginForm()) {%>
  <%: Html.ValidationSummary(true) %>
  <%: Html.TextBoxFor(model => model.Link) %>
  <%: Html.ValidationMessageFor(model => model.Link) %>
<% } %>

It’s the ValidationSummary helper that shows object-level errors and the ValidationMessageFor helper that shows property-level errors:



There are other means of validation that you will want to investigate, like the validation attributes supported by MVC and Silverlight, but IDataErrorInfo is the one with the broadest reach. It’s also the one that’s simplest for you to check yourself if you’re not getting the automatic support you want from your GUI framework. For example, because the SaveChanges method on the context base class is virtual and because we’ve got the Object State Manager, we can check ourselves for object errors using IDataErrorInfo:

using System;
using System.ComponentModel;
using System.Data;
using System.Data.Objects;
using System.Linq;

namespace AdMan.Models {
  partial class sbdbEntities {
    public override int SaveChanges(SaveOptions options) {
      // Make sure we're detecting all changes. base.SaveChanges
      // does this, but that may be too late.
      ObjectStateManager.DetectChanges();

      // Get all the new and updated objects
      var objectsToValidate =
        ObjectStateManager.
        GetObjectStateEntries(EntityState.Added | EntityState.Modified).
        Select(e => e.Entity).OfType<IDataErrorInfo>();

      // Check each object for errors
      foreach (var obj in objectsToValidate) {
        // Check each property
        foreach (var property in obj.GetType().GetProperties()) {
          var columnError = obj[property.Name];
          if (columnError != null) { throw new Exception(columnError); }
        }

        // Check each object
        var objectError = obj.Error;
        if (objectError != null) { throw new Exception(objectError); }
      }

      // All clear
      return base.SaveChanges(options);
    }
  }
}

Here we’re overriding the SaveChanges method we’ve been calling all this time to take advantage of the IDataErrorInfo interface. The first call to DetectChanges is to make sure we’ve gotten all the changes into the Object State Manager (only necessary if you’re using EF POCO classes). The call to the GetObjectStateEntries method on the ObjectStateManager class produces all of the added and modified objects, their state, what the old and new values are, etc. We pull off each one of the entities that implement IDataErrorInfo and call the methods to check for property and object-level errors. If we find one, we throw an exception, otherwise we let the call to SaveChanges through.

This code isn’t needed if you’re already using a GUI framework that supports IDataErrorInfo, but it’s still handy to know you can roll your own code into SaveChanges if you need to.

Where Are We?

IDataErrorInfo has been at the core of data validation support in GUI libraries since .NET 1.x, and while there are simpler ways to do it for individual libraries, IDataErrorInfo works just fine with MVC and EF, two of the most popular libraries we’ve got just now.


EF Concurrency Mode Fixed + MVC

Imagine a very simple EDM that describes web advertisements:


Now imagine that I’d like to build a web application to manage instances of the Ad type. If multiple people are editing ads at once, especially the same set of ads, I’m likely to run into concurrency errors. By default, EF lets the last change win.

For example, if Chris and Bill are both editing Ad.Id == 1 and Chris pushes his changes to the database first, EF will not notice that the ad has been updated underneath Bill when he saves his changes, and Chris’s changes will be lost. What we’d really like is for EF to check, when Bill attempts to save his changes, whether the data has changed since we cached it, so that Bill gets an error and is able to merge his changes in with Chris’s.

This style of multi-user concurrency management is called “optimistic concurrency” because it assumes few people will be changing the same data at the same time. It’s the most efficient means of concurrency management when that condition is true. Another type of concurrency management is named “pessimistic concurrency,” and is generally implemented using locks on the database, which tends to slow things down.

By default, EF provides no concurrency support; if two people push changes to the same row in the database, whoever’s change goes in last wins. This results in data loss, which in the world of data is a big, fat, no-no.

The way that EF lets you decide how updates to a row are checked is via the Concurrency Mode property on every one of the entity’s properties in the designer. By default, the Concurrency Mode is set to “None”, which results in SQL like the following when an update is needed:

update [dbo].[Ads]
set [Title] = @0, [ImagePath] = @1, [Link] = @2, [ExpirationDate] = @3
where ([Id] = @4)

Only the Id column is used to select the row to update, so any changes made to the underlying columns for that row since we read them are not detected and are therefore lost. The way to tell EF which columns to check is by setting the Concurrency Mode property from None (the default) to Fixed on an entity’s property. For example, if you set Concurrency Mode to Fixed for each of the read-write properties of our sample Ad entity, the update would look like the following:

update [dbo].[Ads]
set [Title] = @0, [ImagePath] = @1, [Link] = @2, [ExpirationDate] = @3
where ((((([Id] = @4) and ([Title] = @5)) and [ImagePath] is null)
and ([Link] = @6)) and ([ExpirationDate] = @7))

This is handy, but it also requires that we keep an entity in memory in both its original state and its updated state for as long as the user is editing it. For desktop applications, that’s not an issue, but for stateless web pages, like MVC-based web pages, it is.

It’s for this reason that the EF team itself recommends using a special read-only column just describing the “version” of the row. Ideally, whenever any of the data in a row changes, the version is updated so that when an update happens, we can check that special column, e.g.

update [dbo].[Ads]
set [Title] = @0, [ImagePath] = @1, [Link] = @2, [ExpirationDate] = @3
where (([Id] = @4) and ([TimeStamp] = @5))

Here, the TimeStamp column is our “version” column. We can add such a column in our SQL Server database using the “timestamp” type, as shown in SQL Server Management Studio here:


The semantics of the timestamp type are just what we want: every time a row is updated, the timestamp column is updated. To see this new column in the Entity Data Model, you’ll have to right-click on the designer surface and choose Update Model from Database, which results in the TimeStamp being added to our model:


The TimeStamp field will come through as type Binary, since EF4 doesn’t have direct support for it, and with a StoreGeneratedPattern of Computed (which is exactly right). To enable EF to use the new column to perform optimistic concurrency, we need only change the Concurrency Mode to Fixed.

Now, here’s a simple Edit method on our MVC controller:

// GET: /Ad/Edit/5
public ActionResult Edit(int id) {
  return View(db.Ads.Single(ad => ad.Id == id));
}

This kicks off the view, but with one key missing ingredient: the view doesn’t have the TimeStamp field in it, because it’s mapped in EF as binary data, so the MVC form generator won’t provide a field for it. To make sure we pass the version of the data along with the data itself, we have to add a field to our HTML form and, because we don’t want the user to see it, let alone edit it, we need to make it hidden:

<% using (Html.BeginForm()) {%>
<%: Html.HiddenFor(model => model.TimeStamp) %>
<% } %>

The Html.HiddenFor is an MVC helper that produces HTML that looks like so:

<input id="TimeStamp" name="TimeStamp" type="hidden" value="AAAAAAAAB9E=" />

Now, when we press the Save button, the SQL we saw earlier is invoked to use the ad’s unique ID as well as the version (our timestamp column). If there’s a concurrency problem, i.e. somebody else has updated the underlying row in the database since we cached our values on the HTML form, we get an exception:


The message is saying that no rows were updated, which happens when the timestamp of the underlying row no longer matches. To provide a more helpful message, you’ll want to catch the specific error yourself:

// POST: /Ad/Edit/
[HttpPost]
public ActionResult Edit(Ad ad) {
  try {
    if (!ModelState.IsValid) { return View(); }

    // Attach the ad to the context and let the context know it's updated
    db.Ads.Attach(ad);
    db.ObjectStateManager.ChangeObjectState(ad, EntityState.Modified);
    db.SaveChanges();

    return RedirectToAction("Index");
  }
  catch (OptimisticConcurrencyException) {
    ModelState.AddModelError("", "Oops! Looks like somebody beat you to it!");
    return View(ad);
  }
}
Here we’re catching the OptimisticConcurrencyException and setting our own message before sending the user back to their data for them to grab what they want and try again.

Where Are We?

EF works great with MVC, but in the case of optimistic concurrency, you’ve got to work around the stateless model of the web a little to get it working just the way you like.


Using SQL Server Profiler with EF

I’m a big fan of the SQL Server profiler for figuring out what the Entity Framework (EF) is really doing on the wire. If you’re unfamiliar with how to use the profiler, the easiest thing to do once you’ve got it started is File | New Trace. It will ask which database you’d like to connect to and then pop up the Trace Properties window. If you click on the Events Selection tab, you can filter the events you see. For tracing EF, it doesn’t matter what type of events we see, but it does matter from whom they come. To see EF calls (or any ADO.NET calls) against your database, the easiest thing to do is to press the Column Filters button and set the ApplicationName to be like “.NET SqlClient Data Provider”:


When you press the Run button, you’ll see a rolling list of calls made to that instance of SQL Server. Now when you run an EF program, you’ll see exactly what SQL EF is generating for SQL Server:


This is a handy technique to see whether EF batches SQL statements when you call SaveChanges (EF4 doesn’t batch) or how many round-trips lazy loading will cost you (lots – prefer the Include method).
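As a sketch of the two query shapes you might compare in the profiler (assuming a hypothetical Northwind-style model with Orders and Order_Details; the comments are mine, not profiler output):

```csharp
using (var db = new NorthwindEntities()) {
  // Lazy loading: one SELECT for the orders, then another round-trip
  // per order the first time its Order_Details collection is touched
  foreach (var order in db.Orders) {
    Console.WriteLine(order.Order_Details.Count);
  }

  // Eager loading: Include folds the details into the original query,
  // so the profiler shows a single, join-shaped SELECT
  foreach (var order in db.Orders.Include("Order_Details")) {
    Console.WriteLine(order.Order_Details.Count);
  }
}
```

Running both against a live database with the profiler attached makes the round-trip difference obvious.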


Be Careful with Data Services Authentication + Batch Mode

I was doing something quite innocent the other day: I was trying to provide authentication on top of the .NET 4.0 WCF Data Services (DS) on a per-method basis, e.g. let folks read all they want but stop them from writing unless they’re an authorized user. In the absence of an authorized user, I threw a DataServiceException with a 401 and the right header set to stop execution of my server-side method and communicate to the client that it should ask for a login.

In addition, on the DS client, also written in .NET 4.0, I was attempting to use batch mode to reduce the number of round trips between the client and the server.

Once I’d cleared away the other bugs in my program, it was these three things in combination that caused the trouble.

The Problem: DataServiceException + HTTP 401 + SaveChanges(Batch)

Reproducing the problem starts by turning off forms authentication in the web.config of a plain vanilla ASP.NET MVC 2 project in Visual Studio 2010, as we’re going to be building our own Basic authentication:


Next, bring in the Categories table from Northwind into an ADO.NET Entity Data Model:


The model itself doesn’t matter – we just need something to allow read-write. Now, to expose the model, add a WCF Data Service called “NorthwindService” and expose the NorthwindEntities we get from the EDMX:

public class NorthwindService : DataService<NorthwindEntities> {

  public static void InitializeService(DataServiceConfiguration config) {
    config.SetEntitySetAccessRule("Categories", EntitySetRights.All);
    config.DataServiceBehavior.MaxProtocolVersion =
      DataServiceProtocolVersion.V2;
  }
}

Notice that we’re allowing complete read/write access to categories on our service, but what we really want is to let everyone read and only allow authenticated users to write. We can do that with a change interceptor:

[ChangeInterceptor("Categories")]
public void OnChangeCategory(Category category, UpdateOperations operation) {
  // Authenticate
  string[] userpw = GetCurrentUserPassword();
  if (userpw == null ||
    !userpw[0].Equals("admin", StringComparison.CurrentCultureIgnoreCase) ||
    !userpw[1].Equals("pw")) {
    HttpContext.Current.Response.AddHeader("WWW-Authenticate", "Basic realm=\"Northwind\"");
    throw new DataServiceException(401, "Unauthorized");
  }
}

// Use HTTP Basic authentication
string[] GetCurrentUserPassword() {
  string authorization = HttpContext.Current.Request.Headers["Authorization"];
  if (string.IsNullOrEmpty(authorization)) { return null; }
  if (!authorization.StartsWith("Basic")) { return null; }
  byte[] base64 = Convert.FromBase64String(authorization.Substring(6));
  string[] userpw = Encoding.ASCII.GetString(base64).Split(':');
  if (userpw.Length != 2) { return null; }
  return userpw;
}

The change interceptor checks whether the client program provided a standard HTTP Basic authentication header and, if so, pulls out the user name/password pair. If valid admin credentials aren’t found, we set the “WWW-Authenticate” header and throw a DataServiceException, which turns into an HTTP error response, letting the client know “I need some credentials, please.”

The code itself is very simplistic and if you want better code, I recommend Alex James’s most excellent blog series on Data Services and Authentication. However, it’s good enough to return a 401 Unauthorized HTTP error back to the client. If the client is a browser, it’ll prompt the user like so:


The browser isn’t a very interesting program, however, which is why I added a service reference for my new service to my plain vanilla console application and wrote this little program:

class Program {
  static void Main(string[] args) {
    var service =
      new NorthwindEntities(new Uri(@"http://localhost:14738/NorthwindService.svc"));
    service.Credentials = new NetworkCredential("admin", "pw");

    var category = new Category() { CategoryName = "My Category" };
    service.AddToCategories(category);
    //service.SaveChanges(); // works
    service.SaveChanges(SaveChangesOptions.Batch); // #fail

    Console.WriteLine(category.CategoryID);
  }
}

Here we’re setting up the credentials for when the service asks, adding a new Category and calling SaveChanges. And this is where the trouble started. Actually, this is where the trouble ended after three days of banging my head and 4 hours with the WCF Data Services team (thanks Alex, Pablo and Phani!). Anyway, we’ve got three things interacting here:

  1. The batch-mode SaveChanges on the DS client, which bundles your changes into a single OData round-trip for efficiency. You should use this when you can.
  2. The DataServiceException, which bundles extra information about your server-side troubles into the payload of the response so that a knowledgeable client, like the .NET DS client, can pull it out for you. You should use this when you can.
  3. The HTTP authentication scheme which doesn’t fail when it doesn’t get the authentication it needs, but rather asks for the client to provide it. You should use this when you can.

Unfortunately, as of .NET 4.0 SP0, you can’t use all of these together.

What happens is that non-batch mode works just fine when our server sends back a 401 asking for login credentials – the DS client pulls the credentials out of the service reference’s Credentials property and retries. And so does batch mode, so long as the response is a plain 401.

However, where batch mode falls down is with the extra payload data that the DataServiceException packs into the HTTP error response, which confuses the batch-mode client enough that the exception isn’t handled as a request for credentials, but rather reflected back up to the client code. It’s the interaction between all three of these that causes the problem, which means that until there’s a fix in your version of .NET, you need a work-around. Luckily, you’ve got three to choose from.

Work-Around #1: Don’t Use DataServiceException

If you like, you can turn off the extra information your service endpoint is providing with the DataServiceException and just set the HTTP status, e.g.

HttpContext.Current.Response.AddHeader("WWW-Authenticate", "Basic realm=\"Northwind\"");
//throw new DataServiceException(401, "Unauthorized");
HttpContext.Current.Response.StatusCode = 401;
HttpContext.Current.Response.StatusDescription = "Unauthorized";

The only place this fix doesn’t work is Cassini, but Cassini doesn’t work well in the face of HTTP authentication anyway, so moving to IIS7 should be one of the first things you do when facing an authentication problem.

Personally, I don’t like this work-around as it puts the onus on the service to fix a client problem and it throws away all kinds of useful information the service can provide when you’re trying to test it.

Work-Around #2: Don’t Use Batch-Mode

If you use “SaveChanges(SaveChangesOptions.None)” or “SaveChanges()” (None is the default), then you won’t be running into the batch-mode problem. I don’t like this answer, however, since batch-mode can significantly reduce network round-trips and therefore not using it decreases performance.

Work-Around #3: Pre-Populate the Authentication Header

Instead of doing the “call an endpoint,” “oops I need credentials,” “here you go” dance, if you know you’re going to need credentials (which I argue is most often the case when you’re writing OData clients), why not provide the credentials when you make the call?

var service =
  new NorthwindEntities(new Uri(@"http://localhost/BatchModeBug/NorthwindService.svc"));

service.SendingRequest += delegate(object sender, SendingRequestEventArgs e) {
  var userpw = "admin" + ":" + "pw";
  var base64 = Convert.ToBase64String(Encoding.ASCII.GetBytes(userpw));
  e.Request.Headers.Add("Authorization", "Basic " + base64);
};

Notice that we’re watching for the SendingRequest event on the client-side so that we can pre-populate the HTTP Authentication header so the service endpoint doesn’t have to even ask. Not only does this work around the problem but it reduces round-trips, which is a good idea even if/when batch-mode is fixed to respond properly to HTTP 401 errors.


Enabling the Tip Calculator in Your Brain

I can’t imagine anyone reading this blog needs to read this, but I can’t help myself.

When I was just a wee lad, probably the most valuable thing I learned was how to perform mathematical estimation, the importance of which and several techniques you can get by reading Jon Bentley’s The Back of the Envelope (this essay along with several others, are collected in his most excellent books Programming Pearls and More Programming Pearls, both of which are still relevant a decade later). Not only is estimation generally quicker than running a calculator, but even when you do run a calculator, it helps you figure out when you did it wrong, the latter of which has saved my bacon time and again.

For example, as much as I love the Windows Phone 7 marketplace and its quality and quantity of applications, the ones that puzzle me are the “tip calculator” apps (several!). I don’t understand why it’s worth the trouble of pulling out your phone and punching buttons when you can know the tip instantly.

For example, let’s assume the dinner bill is $37.42. If the service was bad, that’s a 10% tip (you have to tip them something ‘cuz the IRS assumes you will and taxes them accordingly – bastards). So, with a 10% tip, take the bill and move it right one decimal point: $3.74. Now, round up or down depending on how bad the service was, e.g. $3.50 or $4. Quick and easy.

Assuming the service was great, that’s a 20% tip, so double the bill and move it right one decimal point, making the math easier for yourself, e.g. $37.42 is close to $35, doubling is $70, so a $7 tip. Boom: 20% tip.

If you want to get fancy and provide a 15% tip for good but not great, then average the two numbers: ($4 + $7)/2 = $5.50. Zim zam zoom.
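For the code-minded, here’s the same mental recipe as a sketch (the TipEstimator class and method names are mine, purely for illustration):

```csharp
using System;

class TipEstimator {
  // 10% (bad service): move the bill's decimal point one place left
  public static decimal Bad(decimal bill) { return Math.Round(bill / 10, 2); }

  // 20% (great service): double the bill, then move the decimal point
  public static decimal Great(decimal bill) { return Math.Round(bill * 2 / 10, 2); }

  // 15% (good but not great): average the other two
  public static decimal Good(decimal bill) { return Math.Round((Bad(bill) + Great(bill)) / 2, 2); }

  static void Main() {
    Console.WriteLine(Bad(37.42m));   // prints 3.74
    Console.WriteLine(Great(37.42m)); // prints 7.48
    Console.WriteLine(Good(37.42m));  // prints 5.61
  }
}
```

For the $37.42 bill above, that’s $3.74, $7.48 and $5.61 – right in line with the in-your-head estimates.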

Honestly, as great as the apps are on your phone, tablet or Bluetooth headset (seriously), think about using the apps in your head first. Not only are they quicker and cheaper, but using them staves off dementia (which is a good thing!).

Oh, and if the tip is added as a mandatory minimum, then the additional tip is easy: $0.00. I don’t deal well with authority.


Windows Phone 7: Beating Expectations

Years ago, when I was on my T-Mobile Dash, I would purchase a new phone every quarter or so, just to see if something better had come along. Always, within a week or so, I returned it and went happily back to my T-Mobile Dash. Then came the iPhone, which I instantly fell in love with. I didn’t think I’d ever give it up. Then came the Samsung Focus, one of the first Windows Phone 7 phones, and I haven’t turned my iPhone 4 back on since. It’s not all I’d hoped for, but it’s damn close!

Meeting Expectations

Let’s check my list and see how WP7 did:

Stuff I Forgot To Ask For

I believe that the universe gives you what you ask for and in this case, even if I didn’t get everything, there was even some stuff I forgot to request:

Beating Expectations

Seriously, every day is something new and cool on this phone. I continue to get blown away by features I never thought I’d want that have really changed how I use my phone:

Where Are We?

According to my math, I got a little more than half of what I asked for, but true love can’t be measured in percentages. Of the features that I’m missing, only camera quality, copy-paste and Kindle are things I actually miss from my iPhone 4, and two of those are supposed to be fixed in software RSN.

On the other hand, my Samsung Focus has given me more than a dozen things I never thought to ask for and really use. The full calendar support, contact linking, voice dialing (with great Bluetooth support), voice searching, the auto-correct on the keyboard, the location and phone number recognition and OneNote sync’ing make this phone a delight to use every day.


If you want something from eBay, don’t bid on it!

I’m fond of quoting my father to my sons. I have a terrible memory for these kinds of things in general, but what he says sticks with me:

I’ve learned a ton of things from my father and continue to do so, so when I wanted to win something on eBay as a Christmas present for my girlfriend/fiancée (what’s it called when you’re engaged to be engaged?), I knew he had the experience, so I tapped it. And here’s what he told me:

If you really want something on eBay, don’t bid on it; that only gives your competition information on how to outbid you.

Instead, set yourself some free time when the auction is going to close and start up two browser windows at the following pages:

  1. The page where the count-down timer is shown.
  2. The page where you have already entered your top bid and are poised at the Confirm Bid button.

The idea is that people’s “top bid” changes over time as the auction goes on. I know this happens to me:

“Oh, this is only worth $20 to me. Well, maybe $25. OK, $40, but that’s all. Dammit I gotta have it! Where’s the button to enter the Social Security number of my first born!?”

So, instead of putting in your top bid and walking away, which lets other folks probe your top bid with their top bid and deciding later that their top bid goes toppier, wait ‘til the last minute to put in your bid. I believe the practice is called “sniping” and there are even apps that do it, although so far, I’ve found IE and a cool hand sufficient.

Of course, the most important question is this:

“Dad, at what time in the countdown do I press the Confirm Bid button?”

“Oh, well, I do it at 4 seconds, but my computers are slow.”

What can I say; the man’s a pro.


Fluent-Style Programming in JavaScript

I’ve been playing around with JavaScript a great deal lately and trying to find my way. I last programmed JS seriously about 10 years ago and it’s amazing to me how much the world has changed since then. For example, the fifth edition of ECMAScript (ES5) has recently been approved for standardization and it’s already widely implemented in modern browsers, including my favorite browser, IE9.

Fluent LINQ

However, I’m a big C# fan, especially the fluent API style of LINQ methods like Where, Select, OrderBy, etc. As an example, assume the following C# class:

class Person {
  public Person() { Children = new List<Person>(); }
  public string Name { get; set; }
  public DateTime Birthday { get; set; }
  public int Age { get { return (int)((DateTime.Now - Birthday).Days / 365.25); } }
  public ICollection<Person> Children { get; private set; }
  public override string ToString() { return string.Format("{0} ({1})", Name, Age); }
}

Creating a set of them is a matter of using the C# member initialization syntax:
var chris = new Person() {
  Name = "Chris",
  Birthday = new DateTime(1969, 6, 2),
  Children = {
    new Person() {
      Name = "John",
      Birthday = new DateTime(1994, 5, 5),
    },
    new Person() {
      Name = "Tom",
      Birthday = new DateTime(1995, 8, 30),
    },
  },
};

Without even realizing it, I used the word “set” to describe how I think of this collection of people. That’s how the LINQ methods encourage me to think and I like it – I’m working on sets of things, whether I’m filtering them, projecting them, ordering them, aggregating them in some way, etc. For example, here’s how I would print all of the items in my set:

var people = new Person[] { chris }.Union(chris.Children);
Console.WriteLine("People: " +
  people.Aggregate("", (s, p) => s + (s.Length == 0 ? "" : ", ") + p.ToString()));
Console.WriteLine("Teens: " +
  people.Where(p => p.Age > 12 && p.Age < 20).
    Aggregate("", (s, p) => s + (s.Length == 0 ? "" : ", ") + p.ToString()));

Here I’m using the Union method to combine all three elements into a new set, the Where method to filter one set to produce another and the Aggregate method to produce a single string. The output should be unsurprising:
People: Chris (41), John (16), Tom (15)
Teens: John (16), Tom (15)

Fluent JavaScript

LINQ is lousy with such set-oriented functions, e.g. Select, OrderBy, Take, Skip, First, Single, etc. and I use them a ton in my code (and my scripts). As it turns out, JS has these methods, too, both provided by popular JS toolkits like jQuery and in the language itself. Specifically, ES5 has a number of lovely methods for fluent programming. For example, given the same “class” in JS:
// Person constructor
function Person(args) {
  if ( { =; }
  if (args.birthday) { this.birthday = args.birthday; }
  if (args.children) { this.children = args.children; }

// Person properties and methods
Person.prototype = Object.create(null, {
  name: { value: "", writable: true },
  birthday: { value: new Date(), writable: true },
  age: { get: function () { return Math.floor((new Date() - this.birthday) / 31557600000); } },
  children: { value: [], writable: true },
  toString: { value: function () { return + " (" + this.age + ")"; } }
});

I can do several LINQ-style things on it:

var s = "";
var tom = new Person({ name: "tom", birthday: new Date(1995, 7, 30) });
var john = new Person({ name: "john", birthday: new Date(1994, 4, 5) });
var chris = new Person({ name: "chris", birthday: new Date(1969, 5, 2), children: [tom, john] });
var people = [tom, john, chris];

// select
s += "<h1>people</h1>" + (p) { return p; }).join(", ");

// where
s += "<h1>teenagers</h1>" + people.filter(function (p) { return p.age > 12 && p.age < 20 }).join(", ");

// any
s += "<h1>any person over the hill?</h1>" + people.some(function (p) { return p.age > 40; });

// aggregate
s += "<h1>totalAge</h1>" + people.reduce(function (totalAge, p) { return totalAge += p.age; }, 0);

// take
s += "<h1>take 2</h1>" + people.slice(0, 2).join(", ");

// skip
s += "<h1>skip 2</h1>" + people.slice(2).join(", ");

// sort
s += "<h1>sorted by name</h1>" + people.slice(0).sort(
  function (lhs, rhs) { return; }).join(", ");

// dump
document.getElementById("output").innerHTML = s;

Notice that several things are similar between JS and C# LINQ-style:

The output looks as you’d expect:


We’re not all there, however. For example, the semantics of the LINQ First method are to stop looking once a match is found. Those semantics aren’t available in the JS filter method, which checks every element, or in the JS some method, which does stop at the first matching element but returns a Boolean, not the match itself. Likewise, the semantics of Union and Single aren’t available, along with several others I haven’t tracked down. In fact, there are several JS toolkits on the internet that provide the entire set of LINQ methods for JS programmers, but I don’t want to duplicate my C# environment, just the set-like thinking that I consider language-agnostic.

So, in the spirit of JS, I added methods to the built-in types, like the Array type where all of the set-based intrinsics are available, to add the missing functionality:

Object.defineProperty(Array.prototype, "union", { value: function (rhs) {
  var rg = this.slice(0);
  rhs.forEach(function (v) { rg.unshift(v); });
  return rg;
}});
Object.defineProperty(Array.prototype, "first", { value: function (callback) {
  for (var i = 0, length = this.length; i < length; ++i) {
    var value = this[i];
    if (callback(value)) { return value; }
  }
  return null;
}});
Object.defineProperty(Array.prototype, "single", { value: function (callback) {
  var result = null;
  this.forEach(function (v) {
    if (callback(v)) {
      if (result != null) { throw "more than one result"; }
      result = v;
    }
  });
  return result;
}});

These aren’t perfectly in line with all of the semantics of the built-in methods, but they give you a flavor of how you can extend the prototype, which ends up feeling like adding extension methods in C#.

The reason to add methods to the Array prototype is that it makes it easier to continue to chain calls together in the fluent style that started all this experimentation, e.g.

// union
s += "<h1>chris's family</h1>" +
  [chris].union(chris.children).map(function (p) { return p; }).join(", ");

Where Are We?

If you’re a JS programmer, it may be that you appreciate using it like a scripting language and so none of this “set-based” nonsense is important to you. That’s OK. JS is for everyone.

If you’re a C# programmer, you might dismiss JS as a “toy” language and turn your nose up at it. This would be a mistake. JS has a combination of ease-of-use for the non-programmer-programmer and raw power for the programmer-programmer that makes it worth taking seriously. Plus, with its popularity on the web, it’s hard to ignore.

If you’re a functional programmer, you look at all this set-based programming and say, “Duh. What took you so long?”

Me, I’m just happy I can program the way I like to in my new home on the web. : )


Using LINQPad to Run My Life: Budgeting

I use LINQPad all the time for a bunch of stuff, but most recently and most relevant, I’ve been using it for a personal chore that isn’t developer-related: I’ve been using it to do budgeting.

What is LINQPad?

LINQPad is an interactive execution environment for LINQ queries, statements or programs. The typical usage model is that you point LINQPad at a SQL database or an OData endpoint via a dialog box and then start writing queries against the tables/collections exposed by that connection, e.g.


Here, you can see that I’ve added a connection on the left to the Northwind database, typed a query into the text box (I paid the $25 for the auto-completion module), executed the query and got the table of results below. If I want to operate over multiple results, including dumping them for inspection, I can do so by switching from C# Expression to C# Statements:


Notice the use of “Dump” to see results along the way. If I want to, I can switch to C# Program, which gives me a main and allows me to create my own types and methods, all of which can be executed dynamically.

To save queries, notice the “My Queries” tab in the lower left. I use this for things I run periodically, like the ads on my web site that are going to expire, some data cleanup I want to get back to and, the subject of today: budgeting.

Budgeting with and LINQPad

For the uninitiated, is a free online personal financial management site. At its core, it takes my financial account, loan and asset information and uses it to log into various financial sites and grab my data for me, e.g.,,, etc. It uses this to let me categorize transactions so that it can do budgeting for me. However, it doesn’t give me the control I want, so I write programs against this unified transaction information. Essentially, I re-categorize each transaction to my own set using a map I maintain in an Excel file, then compare the amount I spend each month against the budget amount, maintained in another sheet in that same Excel file. Because doesn’t provide a programmatic API (OData would be a godsend!), I pull down my transaction history as the CSV file the web site provides, which I then translate to an Excel file.

Once I have these three Excel sheets – the transaction history, the category map and the category budget amounts – I bring these pieces of data into my LINQPad script:

void Main() {
    var mintExcel = ExcelProvider.Create(@"D:\data\finances\2010-08-25 mint transactions.xlsx");
    var minDate = new DateTime(2010, 8, 1);
    var txs = mintExcel.GetSheet<Tx>().Where(t=>t.Date>=minDate);
    var debits = txs.Where(tx => tx.Type == "debit");

    var classExcel = ExcelProvider.Create(@"d:\data\finances\2010-08-03 mint category map.xlsx");
    var map = classExcel.GetSheet<CategoryClass>().ToList();
    var classBudget = classExcel.GetSheet<ClassBudget>().ToList();
    var unclassified = new ClassBudget() { Class = "UNCLASSIFIED" };
    var classifiedDebits = debits.
      Select(d => new { d.Date, d.Description, Amount = d.Amount, d.Category, Class = GetClass(map, d) }).
      Where(d => d.Class != null);
    // TODO: break this down by month
    // TODO: sum this by ytd
    var classifiedTotals =
        from d in classifiedDebits
        group d by d.Class into g
        let b = classBudget.FirstOrDefault(b=>b.Class == g.Key) ?? unclassified
        let total = g.Sum(d=>d.Amount)
        select new {
            Class = b.Class,
            BudgetAmount = b.Amount,
            ActualAmount = total,
            AmountLeft = b.Amount - total,
            TxCount = g.Count(),
            Transactions = from tx in g.OrderBy(tx=>tx.Date)
                select new { Date = tx.Date.ToString("M/d/yy"), tx.Description, tx.Category, tx.Amount }
        };
    classifiedTotals.Dump();
}

static string GetClass(List<CategoryClass> map, Tx tx) {
  CategoryClass cc = map.FirstOrDefault(m => m.Category == tx.Category);
  if( cc != null ) { return cc.Class; }
  return null;
}

[ExcelSheet(Name = "transactions(1)")]
public class Tx {
    [ExcelColumn()] public DateTime Date { get; set; }
    [ExcelColumn()] public string Description { get; set; }
    [ExcelColumn()] public decimal Amount { get; set; }
    [ExcelColumn(Name = "Transaction Type")] public string Type { get; set; }
    [ExcelColumn()] public string Category { get; set; }
    [ExcelColumn(Name = "Account Name")] public string AccountName { get; set; }
}

[ExcelSheet(Name = "Sheet1")]
public class CategoryClass {
    [ExcelColumn()] public string Category { get; set; }
    [ExcelColumn(Name="Classification")] public string Class { get; set; }
}

[ExcelSheet(Name = "Sheet2")]
public class ClassBudget {
    [ExcelColumn(Name="Classification")] public string Class { get; set; }
    [ExcelColumn()] public decimal Amount { get; set; }
    public int Transactions { get; set; }
}

There are some interesting things to notice about this script:

LINQPad Output

By default, the output from my budgeting program looks like this (w/ my personal financial details blacked out):


Some things to notice:


Bringing in Excel

To bring my Excel data into LINQPad, which supports LINQ to SQL, EF and OData natively but not Excel, I have to right-click on the design surface, choose query properties and tell it where the source code lives and which namespace defines the Excel LINQ query provider:




The thing that makes this app really work for me is the REPL nature. It’s very immediate and I can see where my money is going with very little ceremony. It’s really the intelligence of the Dump command that keeps me from moving this app to WPF. Dump gives me the view I need to understand where my money goes and it gives me the programming surface to slice/dice the data the way I want to. I have no control out of the box in WPF that’s even close to as useful.

However, even though I could extend LINQPad myself, there is no integrated support for Excel or CSV files. Further, for some stupid reason, I have to load the files into a running instance of Excel for them to load in LINQPad, which pisses me off because the error messages are ridiculous. Also, there is no intrinsic support for multiple data sources; instead I need to build that myself.

Further, I had one case where I couldn’t figure out an error (it was that I forgot to load the data into Excel) and had to have a real debugger, which LINQPad didn’t have. The good news was that I was able to copy/paste my code into a console application and debug it, but the bad news was that I really missed the Dump command when I was running inside Visual Studio.

Where Are We?

I really love LINQPad. In fact, I find myself wanting the same functionality for other uses, e.g. SQL (real SQL), JavaScript and as a shell. It’s the interactive data access that makes it for me – munge some data, look at it, repeat. It doesn’t quite do everything I want, though – where’s the full-featured, all-data, Swiss army knife for data?


Management vs. Motivation

“If you want to build a ship, don’t drum up people to gather wood, divide the work, and give them orders. Instead, teach them to yearn for the vast and endless sea.”

Antoine de Saint-Exupéry, author of “The Little Prince”


LINQ Has Changed Me

In the old days, the post-colonial, pre-LINQ days of yore, I’d have written a one-way MD5 hash function like so:

static string GetMD5String(string s) {
  MD5 md5 = new MD5CryptoServiceProvider();
  byte[] hash = md5.ComputeHash(Encoding.ASCII.GetBytes(s));
  StringBuilder sb = new StringBuilder();
  foreach( byte b in hash ) sb.AppendFormat("{0:x2}", b);
  return sb.ToString();
}

This implementation is fine and has served me well for 10 years (I pulled it out of the first .NET project I ever really did). However, after using LINQ for so long, it’s hard not to see every problem as an operation over sets:

static string GetMD5String(string s) {
  return (new MD5CryptoServiceProvider()).
    ComputeHash(Encoding.ASCII.GetBytes(s)).
    Aggregate(new StringBuilder(), (working, b) => working.AppendFormat("{0:x2}", b)).
    ToString();
}
I can’t say that the LINQ version is any better, but it felt better. However, you’ll notice that I’m not using any of the LINQ keywords, e.g. “select”, “where”, etc. I find that I don’t really use them that much. It’s too jarring to mix them, e.g. “(from f in foos select f).Take(3)”, since not everything has a LINQ keyword equivalent. I tend to do “LINQ-less LINQ” more often than not.
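For example, here’s the kind of mixing I mean, against a throwaway array (the names are just for illustration):

```csharp
using System;
using System.Linq;

class LinqlessDemo {
  static void Main() {
    var foos = new[] { 5, 1, 4, 2, 3 };

    // Mixing query keywords with methods reads awkwardly to me...
    var mixed = (from f in foos where f > 1 select f).Take(3);

    // ...so I stay in method syntax all the way through
    var fluent = foos.Where(f => f > 1).Take(3);

    Console.WriteLine(string.Join(", ", fluent)); // prints "5, 4, 2"
  }
}
```

Both queries produce the same sequence; only the surface syntax differs.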

P.S. I assume someone will be able to tell me how I can do it better. : )

P.P.S. I’m using the Insert Code for Windows Live Writer add-in. I love WLW!

