Marquee de Sells: Chris's insight outlet

You've reached the internet home of Chris Sells, who has a long history as a contributing member of the Windows developer community. He enjoys long walks on the beach and various computer technologies.




Future Proof Your Technical Interviewing Process: Hiring or Not

This is the last in a 4-part series on how to interview well. Parts 1-3 covered the phone screen, the technical interview and the fit interviews. In this part, we'll wrap up by talking about how to make the hiring decision.

Make Time For Questions

As important as the questions you ask the candidate is leaving time for them to ask their own. Remember that they're interviewing you, too. Be open and honest in your answers; technical people have a sensitive bullshit detector, so don't try to pretend that everything is perfect; they'll know if you're not being sincere. However, it's a fine line. If you find yourself dwelling on the negative, you have to wonder if you've found a good fit for yourself.

Also, don't forget to factor their questions into your own thinking about the candidate. The questions they ask about a job and a team they're going to be spending 40+ hours/week with are as good an indicator of how they think as anything else.

Making the Call

As you pass the interview candidate from person to person, make sure that you spend a few minutes in private with the next interviewer talking about what you heard that you liked as well as things you'd like them to circle back on. You want to give the candidate an opportunity to try again, either to convince you it's not an issue or to confirm that it is.

Every interviewer should share their thoughts about the candidate right away, while they're fresh. You can send an email around to the team as you finish or get together in the same room after the candidate has headed home, but it should be the same day; those first impressions matter.

Ultimately, each interviewer will provide three pieces of information: a thumbs up/down (whether you use actual thumbs for this process is up to you : ), a confidence level (do you really love this person? are you on the fence?) and an explanation ("I loved how they think about the customer!" or "They never figured out how to efficiently search an infinite space of possible solutions.")

The set of interview results will come out in one of three ways:

  1. Everyone loved that candidate. Hire them.
  2. Everyone hated the candidate. Don't hire them. Be polite!
  3. There's a mix. Discuss. Potentially get more info.

Of course, options #1 and #2 are easy to deal with. Unfortunately, option #3 is where most candidates fall. The question is, what do you do with a candidate with mixed results? If you're following the principle that it's better to send a good candidate away than to hire a bad one, then you'll pass on them. However, you'll want to spend some extra time on candidates like these. Discuss it amongst the team. See how adamant the thumbs-up voters are and why. See how adamant the thumbs-down voters are and why. If the team is on the fence but leaning towards "hire," pick someone else to talk to the candidate and/or get them into a different environment, e.g. the bar down the street or the bowling alley at the company Xmas party, and see how they do.

Ultimately, it boils down to one thing: does the team as a whole want to bring the candidate into the team? If so, great. If not, let them go. Certainly a senior member of the company or department can override the team and hire a candidate over their objections, but I wouldn't recommend it. You're much more likely to hurt a good team in those situations than to help it.

Where Are We?

Whether you agree with the specifics of this process or not, I encourage you to spend the time to really examine your process. You want the team you build to be more than the sum of the parts, but that kind of magic requires first that you have great parts.





Future Proof Your Technical Interviewing Process: The Fit Interviews

If you just found yourself here, you’ve stumbled onto a multi-part series on the technical interviewing process. Part 1 covered the phone screen and part 2 covered the technical interview. Today we’re going to discuss the “fit” interviews, that is, team and cultural fit. Enjoy.

The Team Fit Interview

Modern software development is done in teams. You want to be able to judge any candidate as a productive, positive member of your team. They don't necessarily have to have experience doing things the way you do them, but they should show the ability to adapt when issues arise. Your job in the team fit interview is to break the important things that happen in your team into situations that you can ask your candidate about. The following are pretty standard examples:

However, you have to be careful here. Pretty much anyone can give you the "right" answers to these questions, but you don't want the "right" answers – you want the real answers. How does a candidate actually behave in the face of these situations?

The best way I know of to get the real answers out of someone is something called Behavioral Interviewing. The idea is simple: instead of asking someone how they would act if faced with a certain situation, ask them to describe an example in their past when they've had to deal with that situation. Discuss it with them. How did their strategy work for them? What did they learn? What would they do differently?

Just this one shift from “how would you deal with this situation” to “how did you deal with this situation” will get you a much deeper look into how a candidate actually behaves, which allows you to decide if they're a good fit for your team.

The Cultural Fit Interview

The goal of the cultural fit interview is to figure out whether the candidate will like their new working environment and whether the team will be glad to have them. It's enormously important and very difficult to assess. One typical way to approach this type of interview is to ask the following kinds of questions, also in a Behavioral Interviewing style:

These questions are much more vague and really meant to start a conversation, but they're also very hit-and-miss. If you happen to hit the right path, you can really crack a candidate open like a ripe nut. Or not.

Also, you want to be careful how you interpret the answers. If you don't filter out people that aren't a good fit for the culture of the company, they'll be unhappy and you'll be unhappy. On the other hand, if you filter too much, you'll lose out on the benefits of diversity. It's a hard line to walk.

Another way to approach a culture fit interview is to get creative. Maybe invite the person to a company event, perhaps a semi-public mixer or a Friday afternoon beer bash. Maybe sit down with the team over lunch and play a game together. Maybe sit in the café and grab lunch in a small group and see how the conversation goes.

I think the key to finding a good fit culturally is to spend time during a series of technical interviews not talking about technology. Involving a candidate in something that the team does for fun can go a long way towards finding a great new member for your team.

Next Time

Tune in next time, when we'll wrap this series up and talk about how to make the hiring decision.





Future Proof Your Technical Interviewing Process: The Technical Interview

It’s incredibly important to interview well as you’re building your technical team. Further, interviewing well is hard to do and, like anything, you only get out of it what you put into it. In part 1 of this series, we discussed the phone screen. In this part, we’ll discuss the technical interview.

The Technical Interview

The only way to really know if someone can deliver technically is to give them a problem to solve and watch them solve it. You can do this with simple data structure problems on the whiteboard, test questions on paper, algorithm problems in Notepad, real-world problems with some pair programming or puzzle problems with them waving their hands wildly in the air. In a technical interview, you should encourage the candidate to think out loud, because you care more about how they go about solving the problem than about actually getting to an answer. You will look for the following things:

This last one is the one I tend to focus on the most. Even more important than a candidate having knowledge of the technologies you're going to ask them to use is their ability to understand new technologies over time.

My father always says that while teenage drivers hopped up on testosterone may get into the most accidents, they're the ones that push the cars to see what they will do. You want to hire engineers that have pushed technologies past their limits for the pure joy of it. Those are going to be the ones that build the deep knowledge and can adapt in the future to whatever comes their way.

I filter for deep understanding by digging into not only the "how" of whatever they claim to know best, but also the "why." They may know how to build a factory in Angular, but do they understand what a factory is and why Angular does it that way? They may know how to manage their resources in the face of the JVM's garbage collector, but do they know why we use garbage collection and what the downsides are? Do they understand what canvas is good for, what SVG is good for and when to choose which?

The key here is that past behavior indicates future behavior – if they've developed deep understanding of the technologies they've learned before, chances are pretty good that they're going to be able to do the same for the new technologies your team adopts in the future. There is no better way to understand how well they're going to do on future technical challenges than hearing how they've handled such challenges in the past and seeing how they do it right in front of you.

What’s Next in This Series

However, the technical fit is not the only thing you need to look for – you also want to make sure that the candidate will fit in well on your team and in the company culture overall. We'll talk about both in the next piece in this series.





Future Proof Your Technical Interviewing Process: The Phone Screen

In 30 years, I've done a lot of interviewing from both sides of the table. Because of my chosen profession, my interviewing has been for technical positions, e.g. designers, QA, support, docs, etc., but mostly for developers and program managers, both of which need to understand a system at the code level (actually, I think VPs and CTOs need to understand a system at the code level, too, but the interview process for those kinds of people is a superset of what I'll be discussing in this series).

In this discussion, I'm going to assume you've got a team doing the interview, not just a person. Technical people need to work well in teams and you should have 3-4 people in the interview cycle when you're picking someone to join the team.

The Most Important Thing!

Let me state another assumption: you care about building your team as much as you care about building your products. Apps come and go, but a functional team is something you want to cherish forever (if you can). If you just want to hire someone to fill a chair, then what I'm about to describe is not for you.

The principle I pull from this assumption is this: it's better to let a good candidate go than to hire a bad one.

A bad hire can do more harm than a good hire can repair. Turning down a "pretty good" candidate is the hardest part of any good interview process, but this one principle is going to save you more heartache than any other.

The Phone Screen

So, with these assumptions in mind, the first thing you always want to do when you've got a candidate is to have someone you trust do a quick phone screen, e.g. 30 minutes. This can be an HR person or someone else who knows the culture of the company and the kind of people you're looking for. A phone screen has only one goal: to avoid wasting the team's time. If there's anything that's an obvious mismatch, e.g. you require real web development experience but the phone screen reveals that the candidate doesn't have any, then you say "thanks very much" and move on to the next person.

If it's hard to get a person to come into your office -- maybe they're in a different city -- you'll want to add another 30 minutes for a technical phone screen, too, e.g.

Whatever it is, you want to make reasonably sure that they're going to be able to keep up with their duties technically before you bring them on site, or you’re just wasting the team’s time.

At this point, if you're hiring a contractor, you may be done. Contractors are generally easy to fire, so you can bring them on and let them go easily. Some companies start all of their technical hires as contractors first for a period of 30-90 days and only hire them if that works out.

If you’re interviewing for an FTE position, once they’ve passed the phone screen, you're going to bring them into the office.

You should take a candidate visit seriously; you're looking for a new family member. Even before they show up, make sure you have a representative sample of the team in the candidate's interview schedule. At the very least, you need someone to drill into their technical abilities, someone to assess their ability to deliver as part of a team and someone to make sure that they're going to be a cultural fit with the company as a whole. Each of these interview types is different and deserves its own description.

Future Posts in This Series

Tune in to future posts in this series where we’ll be discussing:





Head of Google interviewing says “results matter, riddles don’t”

Google, like Microsoft, is famous for asking brain-teaser style questions during their interviews. However, in a June 2013 interview with the New York Times, Laszlo Bock, the Sr. VP of HR for Google, said that

“[B]rainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.”

In another interview, Bock said that when putting together a resume, focus on what you did in relation to the expectations:

“The key is to frame your strengths as: ‘I accomplished X, relative to Y, by doing Z.’ Most people would write a résumé like this: ‘Wrote editorials for The New York Times.’ Better would be to say: ‘Had 50 op-eds published compared to average of 6 by most op-ed [writers] as a result of providing deep insight into the following area for three years.’ Most people don’t put the right content on their résumés.”

Amen!





Moving My ASP.NET Web Site to Disqus

I’m surprised how well my commentRss proposal has been accepted in the world. As often as not, if I’m digging through an RSS feed for a site that supports comments, that site also provides a commentRss element for each item. When I proposed this element, my thinking was that I could make a comment on an item of interest, then check a box and I’d see async replies in my RSS client, thereby fostering discussion. Unfortunately, RSS clients never took the step of allowing me to subscribe to comments for a particular item, and a standard protocol for adding a comment never emerged, which made it even less likely for RSS clients to add that check box. All in all, commentRss is a failed experiment.

Fostering Discussion in Blog Post Comments

However, the idea of posting comments to a blog post and subscribing to replies took off in another way. For example, Facebook does a very good job in fostering discussion on content posted to their site:

[Figure: Facebook supports comments and discussions nicely]

Not only does Facebook provide a nice UI for comments, but as I reply to comments that others have made, they’re notified. In fact, as I was taking the screenshot above, I replied to Craig’s comment and within a minute he’d pressed the Like button, all because of the support Facebook has for reply notification.

However, Facebook commenting only works for Facebook content. I want the same kind of experience with my own site’s content. For a long time, I had my own custom commenting system, but the bulk of the functionality was around keeping spam down, which was a huge problem. I recently dumped my comments to an XML format and of the 60MB of output, less than 8MB were actual comments – more than 80% was comment spam. I tried adding reCAPTCHA and eventually email approval of all comments, but none of that fostered the back-and-forth discussions over time because I didn’t have notifications. Of course, to implement notifications, you need user accounts with email verification, which was a whole other set of features that I just never got around to implementing. And even if I had, it would have taken me a lot more effort to get to the level of quality that Disqus provides.

Integrating Disqus Into Your Web Site

Disqus provides a service that lets me import, export and manage comments for my site’s blog posts, the UI for my web site to collect and display comments and the notification system that fosters discussions. And they watch for spam, too. Here’s what it looks like on a recent post on my site:

[Figure: The Disqus service provides a discussion UI for your web site]

Not only does Disqus provide the UI for comments, but it also provides the account management so that commenters can have icons and get notifications. With the settings to allow for guest posting, the barrier to entry for the reader that wants to leave a comment is zero. Adding the code to enable it on your site isn’t zero, but it’s pretty close. Once you’ve established a free account on disqus.com, you can simply create a forum for your site and drop in some boilerplate code. Here’s what I added to my MVC view for a post’s detail page to get the discussion section above:

<%-- Details.aspx --%>
...
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
  <!-- post -->
  ...
  <h1><%= Model.Post.Title %></h1>
  <div><%= Model.Post.Content %></div>
  <!-- comments -->
  <div id="disqus_thread"></div>
  <script type="text/javascript">
    var disqus_shortname = "YOUR-DISQUS-SITE-SHORTNAME-HERE";
    var disqus_identifier = "<%= Model.Post.Id %>"; // quoted: Disqus treats the identifier as a string
    var disqus_title = "<%= Model.Post.Title %>";

    /* * * DON'T EDIT BELOW THIS LINE * * */
    (function () {
      var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
      dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
      (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
    })();
  </script>
</asp:Content>

The discussion section for any post is just a div with the id set to “disqus_thread”. The code is from the useful-but-difficult-to-find Disqus universal embed code docs. The JavaScript added to the end of the page creates a Disqus discussion control in the div you provide using the JS variables defined at the top of the code. The only JS variable that’s required is the disqus_shortname, which defines the Disqus data source for your comments. The disqus_identifier is a unique ID associated with the post. If this isn’t provided, the URL for the page the browser is currently showing will be used, but that doesn’t work for development mode from localhost or if the comments are hosted on multiple sites, e.g. a staging server and a production server, so I recommend setting disqus_identifier explicitly. The disqus_title will likewise be taken from the current page’s title, but it’s better to set it yourself to make sure it’s what you want.

And that’s it. Instead of tuning the UI in the JS code, you do so in your settings on disqus.com, which include things like the default order of comments, the color scheme, how much moderation you want, etc.

There’s one more page on your site where you’ll want to integrate Disqus: the page that provides the list of posts along with the comment link and comment count:

[Figure: Disqus will add comment counts to your comment lists, too]

Adding support for the comment count is similar to adding support for the discussion itself:

<%-- Index.aspx --%>
...
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
  ...
  <!-- post -->
  <h2><%= Html.ActionLink(post.Title, "Details", "Posts", new { id = post.Id }, null) %></h2>
  <p><%= post.Content %></p>
  <!-- comment link --> 
  <p><%= Html.ActionLink("0 comments", "Details", "Posts", null, null, "disqus_thread",
        new RouteValueDictionary(new { id = post.Id }),
        new Dictionary<string, object>() { { "data-disqus-identifier", post.Id } }) %></p>
  ...
  <script type="text/javascript">
    // from: https://help.disqus.com/customer/portal/articles/565624
    var disqus_shortname = "sellsbrothers";

    /* * * DON'T EDIT BELOW THIS LINE * * */
    (function () {
      var s = document.createElement('script'); s.async = true;
      s.type = 'text/javascript';
      s.src = 'http://' + disqus_shortname + '.disqus.com/count.js';
      (document.getElementsByTagName('HEAD')[0] || document.getElementsByTagName('BODY')[0]).appendChild(s);
    }());
  </script>
</asp:Content>

Again, this code is largely boilerplate and comes from the Disqus comment count docs. The call to Html.ActionLink is just a fancy way to get an A tag with an href of the following format:

<a href="/Posts/Details/<<POST-ID>>#disqus_thread" data-disqus-identifier="<<POST-ID>>">0 comments</a>

The “disqus_thread” fragment at the end of the href does two things. The first is that it provides a link to the discussion portion of the details page so that the reader can scroll directly to the comments after reading the post. The second is that it provides a hook for the Disqus JavaScript code to change the text content of the A tag to show the number of comments.

The “data-disqus-identifier” attribute sets the unique identifier for the post itself, just like the disqus_identifier JS variable we saw earlier.

The A tag text content that you provide will only be shown if Disqus does not yet know about that particular post, i.e. if there are no comments yet, Disqus will leave the text alone. However, if Disqus does know about that post, it will replace the text content of the A tag as per your settings, which allow you to be specific about how you want 0, 1 and n comments to show up on your site; “0 comments”, “1 comment” and “{num} comments” are the defaults.

Importing Existing Comments into Disqus

At this point, your site is fully enabled for Disqus discussions and you can deploy. Then, if you’ve got existing comments like I did, you can import them using Disqus’s implementation of the WordPress WXR format, which is essentially RSS 2.0 with embedded comments. The Disqus XML import docs describe the format and some important reminders. The two reminders they list are important enough to list again here:


The XML import docs do a good job of showing the XML format by example, but they only list one of the data size requirements. In my work, I found several undocumented limits as well:

Something else to keep in mind is that, as part of the comment import process, Disqus translates the XML data into JSON data, which makes sense. However, they report their errors in terms of the undocumented JSON data structure, which can be confusing as hell. For example, I kept getting a “missing or invalid message” error along with the JSON version of what I thought the message was to which they were referring. The problem was that by “message”, Disqus didn’t mean “the JSON data packet for a particular comment,” they meant “the field called ‘message’ in our undocumented JSON format, which is mapped from the comment_content element of the XML.” I went round and round with support on this until I figured that out. Hopefully I’ve saved future generations that trouble.
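For reference, here’s roughly what a single post with one comment looks like in the WXR import format. This is a sketch reconstructed from my memory of the Disqus import docs – the URLs and IDs are made up, and trust the docs over me on the exact element names:

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:dsq="http://www.disqus.com/"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:wp="http://wordpress.org/export/1.0/">
  <channel>
    <item>
      <title>Post title</title>
      <link>http://example.com/Posts/Details/42</link>
      <!-- the post body; HTML goes inside CDATA -->
      <content:encoded><![CDATA[Post content]]></content:encoded>
      <!-- maps to disqus_identifier -->
      <dsq:thread_identifier>42</dsq:thread_identifier>
      <wp:post_date_gmt>2014-01-01 12:00:00</wp:post_date_gmt>
      <wp:comment_status>open</wp:comment_status>
      <wp:comment>
        <wp:comment_id>1</wp:comment_id>
        <wp:comment_author>A. Reader</wp:comment_author>
        <wp:comment_author_email>reader@example.com</wp:comment_author_email>
        <wp:comment_date_gmt>2014-01-02 12:00:00</wp:comment_date_gmt>
        <!-- this element becomes the "message" field in Disqus's JSON -->
        <wp:comment_content><![CDATA[The comment text]]></wp:comment_content>
        <wp:comment_approved>1</wp:comment_approved>
        <wp:comment_parent>0</wp:comment_parent>
      </wp:comment>
    </item>
  </channel>
</rss>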

If you’re a fan of LINQPad or C#, you can see the script I used to pull the posts and comments out of my site’s SQL Server database (this assumes an Entity Framework mapping in a separate DLL, but you get the gist). The restrictions I mention above are encapsulated in this script.

Where Are We?

Even though my commentRss extension to the RSS protocol was a failed experiment, the web has figured out how to foster spam-free, interactive discussions with email notifications across web sites. The free Disqus service provides an implementation of this idea and it does so beautifully. I wish importing comments was as easy as integrating the code, but since I only had to do it once, the juice was more than worth the squeeze, as a dear Australian friend of mine likes to say. Enjoy!





Moving My Site to Azure: DNS & SSL

This is part 3 of a multi-part series on taking a real-world web site (mine), written to be hosted on an ISP (securewebs.com), and moving it to the cloud (Azure). The first two parts talked about moving my SQL Server instance to SQL Azure and getting my legacy ASP.NET MVC 2 code running inside of Visual Studio 2013 and published to Azure. In this installment, we’ll discuss how I configured DNS and SSL to work with my shiny new Azure web site.

Configuring DNS

Now that I have my site hosted on http://sellsbrothers.azurewebsites.net, I’d like to change my DNS entries for sellsbrothers.com and www.sellsbrothers.com to point to it. For some reason I don’t remember, I have my domain’s name servers pointed at microsoftonline.com and I used Office365 to manage them (it has something to do with my Office365 email account, but I’m not sure why that matters…). Anyway, in the Manage DNS section of the Office365 admin pages, there’s a place to enter various DNS record types. To start, I needed to add two CNAME records:

[Figure: The CNAME records needed before Azure will award an IP address]

A CNAME record is an alias to some other name. In this case, we’re aliasing the awverify.sellsbrothers.com FQDN (the Host name field is really just the part to the left of the domain name to which you’re adding records, sellsbrothers.com in this case). The awverify string is just a string that Azure needs to see before it will tell you the IP address it has assigned to you; it’s a way to guarantee that you do, in fact, own the domain. The www host name maps to the Azure web site name, i.e. mapping www.sellsbrothers.com to sellsbrothers.azurewebsites.net. The other DNS record I need is an A record, which maps the main domain, i.e. sellsbrothers.com, to the Azure IP address, which I’ll have to add later once Azure tells me what it is.

After adding the awverify and www host names and waiting for the DNS changes to propagate (an hour or less in most cases), I fired up the configuration screen for my web site and chose the Manage Custom Domains dialog:

[Figure: Finding the IP address to use in configuring your DNS name server from Azure]

Azure provided the IP address after I entered the www.sellsbrothers.com domain name. With this in hand, I needed to add the A record:

[Figure: Adding the Azure IP address to my DNS name servers]

An A record is the way to map a host name to an IP address. The use of the @ means the undecorated domain, so I’m mapping sellsbrothers.com to the IP address for sellsbrothers.azurewebsites.net.
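Putting that all together, the records I ended up with look something like this (the awverify CNAME target follows the Azure convention as I remember it – double-check the Azure docs for your own value – and the IP address is whatever Azure assigns you):

Host name   Record type   Value
awverify    CNAME         awverify.sellsbrothers.azurewebsites.net
www         CNAME         sellsbrothers.azurewebsites.net
@           A             <the IP address Azure assigned>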

Now, this works, but it’s not quite what I wanted. What I really want to do, and what the Azure docs hint at, is to simply have a set of CNAME records, including one that maps the base domain name, i.e. sellsbrothers.com, to sellsbrothers.azurewebsites.net directly, and let DNS figure out what the IP address is. This would allow me to tear down my web server and set it up again, letting Azure assign whatever IP address it wanted, without requiring me to update my DNS A record if I ever need to do that. However, while I should be able to enter a CNAME record with a @ host name, mapping it to the Azure web site domain name, the Office365 DNS management UI won’t let me do it and Office365 support wasn’t able to help.

However, even if my DNS records weren’t future-proofed the way I’d like them to be, they certainly worked, and now both sellsbrothers.com and www.sellsbrothers.com map to my new Azure web site, which is where those names are pointing as I write this.

However, there was one more feature I needed before I was done porting my site to Azure: secure posting to my blog, which requires an SSL certificate.

Configuring Azure with SSL

Once I had my domain name flipped over, I had one more feature I needed for my Azure-hosted web site to be complete – I needed to be able to make posts to my blog. I implemented the AtomPub publishing protocol for my web site years ago, mostly because it was a protocol with which I was very familiar and because it was one that Windows Live Writer supports. To make sure that only I could post to my web site, I needed to make sure that my user name and password weren’t transmitted in the clear. The easiest way to make that happen was to enable HTTPS on my site using an SSL certificate. Of course, Azure supports HTTPS and SSL, and the interface to make this happen is simple:

[Figure: Azure's certificate update dialog for adding an SSL cert to your web site]

Azure requires a file in the PKCS #12 format (generally using the .pfx file extension), which can be a container for several security-related objects, including a certificate. All of this is fine and dandy except that when you go to purchase your SSL cert, you’re not likely to get the file in .pfx format, but in X.509 format (.cer or .crt file format). To translate the .crt file into a .pfx file, you need to generate a Certificate Signing Request (.csr) file with the right options so that you keep the private key (.key) file around for the conversion. For a good overview of the various SSL-related file types, check out Kaushal Panday’s excellent blog post.

Now, to actually dig into the nitty gritty, first you’re going to have to choose an SSL provider. Personally, I’m a cheapskate and don’t do any ecommerce on my site, so my needs were modest. I got myself a RapidSSL cert from namecheap.com that only did domain validation for $11/year. After making my choice, the process went smoothly. To get started, you pay your money and upload a Certificate Signing Request (.csr file). I tried a couple different ways to get a .csr file, but the one that worked the best was the openssl command line tool for Windows. With that tool installed and a command console (running in admin mode) at the ready, you can follow along with the Get a certificate using OpenSSL section of the Azure documentation on SSL certs and be in good shape.
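If you just want the gist of it, the OpenSSL part boils down to two commands: one to generate the private key and the certificate signing request, and one to combine the private key with the certificate your provider issues into the .pfx file that Azure wants. The file names here are mine; see the Azure docs for the full set of options:

REM generate a 2048-bit private key and a CSR to upload to your SSL provider
openssl req -new -newkey rsa:2048 -nodes -keyout sellsbrothers.key -out sellsbrothers.csr

REM after the provider issues the .crt, bundle it with the private key into a .pfx
openssl pkcs12 -export -out sellsbrothers.pfx -inkey sellsbrothers.key -in sellsbrothers.crt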

Just one word of warning if you want to follow along with these instructions yourself: there’s a blurb in there about including intermediate certificates along with the cert for your site. For example, when I got my RapidSSL certificate, it came with a GeoTrust intermediate certificate. Due to a known issue, when I tried to include the GeoTrust cert in my chain of certificates, Azure would reject it. Just dropping that intermediate cert on the floor worked for me, but your mileage may vary.

Configuring for Windows Live Writer

Once I had my SSL cert uploaded to Azure, I could configure WLW securely for my new Azure-hosted blog:

[Figure: Adding a secure login for my Azure-hosted blog]

You’ll notice that I use HTTPS as the protocol to let WLW know I’d like it to use encrypted traffic when it’s transmitting my user name and password. The important part of the rest of the configuration is just about what kind of protocol you’d like to use, which is AtomPub in my case:

[Figure: Configuring WLW for the AtomPub publishing protocol]

If you’re interested in a WLW-compatible implementation of AtomPub written for ASP.NET, you can download the source to my site from github.

Where are we?

Getting your site moved to Azure from an ISP involves more than just making sure you can deploy your code – it also includes making sure your database will work in SQL Azure and configuring your DNS and SSL settings as appropriate for your site’s new home.

At this point, I’ve gotten a web site that’s running well in the cloud, but in the spirit of the cloud, I’ve also got an aging comment system that I replaced with Disqus, a cloud-hosted commenting system, which is the subject of my next post. Don’t miss it!





Moving My Site to Azure: ASP.NET MVC 2

In our last episode, I talked about the joy and wonder that is moving my site’s ISP-hosted SQL Server instance to SQL Azure. Once I had the data moved over and the site flipped to using the new database, I needed to move the site itself over, which brought joy and wonder all its own.

Moving to Visual Studio 2013

I hadn’t had to do any major updates to my site since 2010, when I rebuilt it with Visual Studio 2010. At that time, the state of the art was ASP.NET MVC 2 and Entity Framework 4, which is what I used. And the combination was a pleasant experience, letting me rebuild my site from scratch quickly and producing a site that ran like the wind. In fact, it still runs like the wind. Unfortunately, Visual Studio 2012 stopped supporting MVC 2 (and, no surprise, Visual Studio 2013 didn’t add MVC 2 support back). When I tried to load my web site project into Visual Studio 2013, it complained:

[Figure: This version of Visual Studio is unable to open the following projects]

This error message lets me know that there’s a problem and the migration report provides a handy link to upgrade from MVC 2 to MVC 3. The steps aren’t too bad and there’s even a tool to help, but had I followed them, loading the new MVC 3 version of my project into Visual Studio 2013 would’ve given me another error with another migration report and a link to another web page, this time helping me move from MVC 3 to MVC 4 because VS2013 doesn’t support MVC 3, either. And so now I’m thinking, halfway up to my elbows in the move to MVC 3 that Visual Studio 2013 doesn’t like, that maybe there’s another way.

It’s not that there aren’t benefits to moving to MVC 4, but that’s not even the latest version. In fact, Microsoft is currently working on two versions of ASP.NET, ASP.NET MVC 5 and ASP.NET v.Next. Even if I do move my site forward two versions of MVC, I’ll still be two versions behind. Of course, the new versions have new tools and new features and can walk my dog for me, but by dropping old versions on the floor, I’m left with the choices of running old versions of Visual Studio side-by-side with new ones, upgrading to new versions of MVC just to run the latest version of VS (even if I don’t need any of the new MVC features) or saying “screw it” and just re-writing my web site from scratch. This last option might seem like what Microsoft wants me to do so that they can stop supporting the old versions of MVC, but what’s to stop me from moving to AWS, Linux and Node instead of to ASP.NET v.Next? The real danger of dropping the old versions on the floor isn’t that I’ll move over to another platform – I’m a Microsoft fanboy and my MSDN Subscription gives me the OS and the tools for free – but that large paying customers will say “screw it” and move their web sites to something that their tools are going to support for more than a few years.

Luckily for me, there is another way: I can cheat. It turns out that if I want to load my MVC 2 project inside of Visual Studio 2013, all I have to do is remove a GUID from the csproj file inside the ProjectTypeGuids element. The GUID in question is listed on step 9 of Microsoft’s guide for upgrading from MVC 2 to MVC 3:

[Figure: Removing {F85E285D-A4E0-4152-9332-AB1D724D3325} from your MVC 2 project so it will load in Visual Studio 2013]

By removing this GUID, I give up some of the productivity tools inside Visual Studio, like easily adding a new controller. However, I’m familiar enough with MVC 2 that I no longer need those tools and being able to actually load my project into the latest version of Visual Studio is more than worth it. Andrew Steele provides more details about this hack in his most excellent StackOverflow post.
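To be concrete, the hack is just deleting the MVC 2 GUID from the semicolon-delimited list in the ProjectTypeGuids element of the .csproj file. Mine looked something like this – the other two GUIDs, which you keep, are what I believe to be the standard Web Application and C# project type GUIDs, although yours may differ:

<!-- before: -->
<ProjectTypeGuids>{F85E285D-A4E0-4152-9332-AB1D724D3325};{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

<!-- after: -->
<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>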

Now, to get my MVC 2 project to actually build and run, I needed a copy of the MVC 2 assemblies, which I got from NuGet:

[Figure: Adding the MVC 2 NuGet package to my project inside Visual Studio 2013]

With these changes, I could build my MVC 2 project inside Visual Studio 2013 and run it on my local box against my SQL Azure instance. Now I just needed to get it up on Azure.

Moving to Azure

Publishing my MVC 2 site to Azure was a matter of right-clicking on my project and choosing the Publish option:

[Figure: Publishing a web site to Azure using the Solution Explorer's Publish option inside Visual Studio 2013]

Selecting Windows Azure Web Sites as the target and filling in the appropriate credentials was all it took to get my site running on Azure. I did some battle with the “It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level” bug in Visual Studio, but the only real issue I had was that Azure seemed to need the “Precompile during publishing” option set or it wasn’t able to run my MVC 2 views when I surfed to them:

[Figure: Setting the “Precompile during publishing” option for Azure to run my MVC 2 views]

With that setting in place, my Azure site just ran at the Azure URL I had requested: http://sellsbrothers.azurewebsites.net.

Where are we?

I’m a fan of the direction of ASP.NET v.Next. The order of magnitude reduction in working set, the open source development and the use of NuGet to designate pieces of the framework that you want are all great things. My objection is that I don’t want to be forced to move forward to new versions of a framework if I don’t need the features. If I am forced, then that’s just churn in working code that’s bound to introduce bugs.

Tune in next time and we’ll discuss the fun I had configuring the DNS settings to make Azure the destination for sellsbrothers.com and to add SSL to enable secure login for posting articles via AtomPub and Windows Live Writer.





Moving My Site to Azure: The Database

In a world where the cloud is no longer the wave of the future but the reality of the present, it seems pretty clear that it’s time to move sellsbrothers.com from my free ISP hosting (thanks, securewebs.com!) to the cloud, specifically Microsoft’s Azure. Of course, I’ve had an Azure account since its inception, but there has been a lot of work to streamline the Azure development process in the last two years, so now seemed like the ideal time to jump in and see how blue the waters really are.

As with any modern web property, I’ve got three tiers: presentation, service and database. Since the presentation tier uses server-side generated UI and its implementation is bundled together with the service tier, there are two big pieces to move – the ASP.NET site implementation and the SQL Server database instance. I decided to move the database first with the idea that once I got it hosted on Azure, I could simply flip the connection string to point the existing site at the new instance while I did the work to move the site separately.

Deploy Database To Windows Azure SQL Database from SSMS

The database for my site does what you’d expect – it keeps track of the posts I make (like this one), the images that go along with each post, the comments that people make on each post, the writing and talks I give (shown on the writing page), book errata, some details about the navigation of the site, etc. In SQL Server Management Studio (SSMS), it looks pretty much like you’d expect:

[Figure: sellsbrothers.com loaded into SQL Server Management Studio]

However, before I could move my data, I needed a SQL Azure instance to move it to, so I fired up the Azure portal and created one:

[Figure: Creating a new SQL Azure database]

In this case, I chose to create a new SQL Azure instance on a new machine, which Azure will spin up for you in a minute or two (and hence the wonder and beauty that is the cloud). I chose the Quick Create option instead of the Import option because the Import option required me to provide a .bacpac file, which was something I wasn’t familiar with. After creating the SQL Server instance and the corresponding server, clicking on the new server name (di5fa5p2lg in this case) gave me the properties of that server, including the Manage URL:

[Figure: SQL Azure database properties]

If you click on the Manage URL, you get a web interface for interacting with your SQL Azure server, but more importantly for this exercise, the FQDN is what I needed to plug into SSMS to connect to that server. I’d need that in a minute; in the meantime, I’d discovered what looked like the killer feature for my needs in the 2014 edition of SSMS:

[Figure: Deploy Database to Windows Azure Database in SSMS 2014]

By right-clicking on the database on my ISP in SSMS and choosing Tasks, I had the Deploy Database To Windows Azure SQL Database option. I was so happy to choose this option and see the Deployment Settings screen of the Deploy Database dialog:

[Figure: SSMS Deploy Database dialog]

Notice the Server connection is filled in with the name of my new SQL Server instance on Azure. It started blank and I filled it in by pushing the Connect button:

[Figure: SSMS Connect to Server dialog]

The Server name field of the Connect to Server dialog takes the FQDN we pulled from the Manage URL field of the Azure database server properties screen earlier, and the credentials are the same ones I set when I created the database. However, filling in this dialog for the first time gave me some trouble:

[Figure: SQL Azure: Cannot open server ‘foo’ requested by the login]

SQL Azure is doing the right thing here to keep your databases secure by disabling access from any machine that’s not itself managed by Azure. To enable access from your client, look for the “Set up Windows Azure firewall rules for this IP address” option on the SQL database properties page in your Azure portal. You’ll end up with a server firewall rule that looks like the following (and that you may want to remove when you’re done with it):

[Figure: SQL Azure server firewall rules]

Once the firewall has been configured, filling in the connection properties and starting the database deployment from my ISP to Azure was when my hopes and dreams were crushed:

[Figure: SSMS Deploy Database: Operation Failed]

Clicking on the Error links all reported the same thing:

[Figure: Error validating element dt_checkoutobject: Deprecated feature ‘String literals as column aliases’ is not supported by SQL Azure]

At this point, all I could think was “what the heck is dt_checkoutobject” (it’s something that Microsoft added to my database), what does it mean to use string literals as column aliases (it’s a deprecated feature that SQL Azure doesn’t support) and why would Microsoft deprecate a feature that they used themselves on a stored proc that they snuck into my database?! Unfortunately, we’ll never know the answer to that last question. However, my righteous indignation went away as I dug into my schema and found several more features that SQL Azure doesn’t support that I’d put into my own schema (primarily the lack of clustered indexes for primary keys, which SQL Azure requires to keep replicas of your database in the cloud). Even worse, I found one table that listed errata for my books that didn’t have a primary key at all, and because nothing was keeping track of data integrity, all of the data was in that table twice (I can’t blame THAT on Microsoft : ).
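As an example of the kind of schema change involved, here’s roughly the fix for that errata table (with hypothetical table and column names) – SQL Azure wants every table to have a clustered index, which a clustered primary key provides:

-- first clean out the duplicate rows, then add an identity column
-- and a clustered primary key so SQL Azure can replicate the table
ALTER TABLE dbo.Errata ADD ErrataId INT IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.Errata ADD CONSTRAINT PK_Errata PRIMARY KEY CLUSTERED (ErrataId);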

And just in case you think you can get around these requirements and sneak your database into SQL Azure w/o the updates, manually importing your data using a bacpac file is even harder, since you now have to make the changes to your database before you can create the bacpac file and you have to upload the file to Azure’s blob storage, which requires a whole other tool that Microsoft doesn’t even provide.

Making your Database SQL Azure-compatible using Visual Studio

Making my SQL database compatible with SQL Azure required changing its schema. Since I didn’t want to change the schema of a running database on my ISP, I ended up copying the database from my ISP onto my local machine and making my schema changes there. Getting to the point of SQL Azure-compatibility, however, required me to have the details of which SQL constructs SQL Azure supports and which it doesn’t. Microsoft provides overview guidance on the limitations of SQL Azure, but it’s not like having an automated tool that can check every line of your SQL. Luckily, Microsoft provides such a tool built into Visual Studio.

Bringing Microsoft’s SQL compiler to bear on checking SQL Azure compatibility requires using VS to create a SQL Server Database Project and then pointing it at the database you’d like to import from (the copy on my local machine, in my case). After you’ve imported your database’s schema, doing a build will check your SQL for you. To get VS to check your SQL for Azure-compatibility, simply bring up the project settings and choose Windows Azure SQL Database as the Target platform:

[Figure: Visual Studio 2013: Setting Database Project Target Platform]

With this setting in place, compiling your project will tell you what’s wrong with your SQL from an Azure point-of-view. Once you’ve fixed your schema (which may require fixing your data, too), then you can generate a change script that updates your database in-place to make it Azure-compatible. For more details, check out Bill Gibson’s excellent article Migrating a Database to SQL Azure using SSDT.

The Connection String

Once the database has been deployed and tested (SSMS or the Manage URL are both good ways to test that your data is hosted the way you think it should be), then it’s merely a matter of changing the connection string to point to the SQL Azure instance. You can compose the connection string yourself or you can choose the “View connection strings for ADO.NET, ODBC, PHP and JDBC” option from your database properties page on Azure:

[Figure: SQL Azure: Connection Strings]

You’ll notice that while I blocked out some of the details of the connection string in my paranoia, Azure itself is too paranoid to show the password; don’t forget to insert it yourself and to put it into a .config file that doesn’t make it into the SCCS.
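For reference, an ADO.NET connection string for a SQL Azure database looks something like this – the server name is the one from my setup above and the angle-bracketed pieces are placeholders to fill in:

Server=tcp:di5fa5p2lg.database.windows.net,1433;Database=<your-database>;User ID=<your-user>@di5fa5p2lg;Password=<your-password>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;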

Where are we?

In porting sellsbrothers.com from an ISP to Azure, I started with the database. The tools are there (nice tools, in fact), but you’ll need to make sure that your database schema is SQL Azure-compatible, which can take some doing. In the next installment, I’ll talk about how I moved the implementation of the site itself, which was not trivial, as it is implemented in ASP.NET MVC 2, which has been long abandoned by Microsoft.

If you’d like to check out the final implementation in advance of my next post, you can help yourself to the sellsbrothers.com project on github. Enjoy.





Bringing The Popular Tech Meetups to Portland

I’ve been watching the Portland startup scene for years. However, in the last 12 months, it’s really started to take off, so when I had an opportunity to mentor at the recent Portland Startup Weekend, I was all over it. I got to do and see all kinds of wonderful things at PDXSW, but one of the best was meeting Thubten Comerford and Tyler Phillipi. Between the three of us, we’re bringing the very popular Tech Meetup conference format to Portland.

A Tech Meetup is meant to be focused on pure tech. In fact, at the largest of the Tech Meetups, in New York (33,000 members strong!), they have a rule that it’s actually rude to ask about the business model. The Tech Meetups are tech for tech’s sake. If you’re in a company big or small or if you’re just playing, cool tech always has a place at the Portland Tech Meetup.

The format is simple and if you’re familiar with the way they do things in Boulder or Seattle, you’re already familiar with it. Starting on January 20th, 2014, every 3rd Monday at 6pm, we’ll open the doors for some networking time, providing free food and drink to grease the skids. At 7pm, we’ll start the tech presentation portion of the evening, which should be at least five tiny talks from tech presenters of all kinds. After the talks, we’ll wrap up around 8pm and then head to the local watering hole for the debrief.

If this sounds interesting to you, sign up right now!

If you’d like to present, drop me a line!

If you’d like to sponsor, let Thubten know.

We’re very excited about bringing this successful event to Portland, so don’t be shy about jumping in; the water is fine…





The Party Is Just Getting Started At Snapflow!

This has been my first week at Snapflow and what a week it’s been! I’ve already spent a good part of two days with actual customers that are excited about using Snapflow to build their web and mobile applications, started on a technical spike for one of those apps to be delivered on our platform in February and found the local Hawaiian teriyaki place.

As Chief Technical Officer at Snapflow, I’ll have influence over internal technology direction and external outreach, help to build our suite of products as well as grow the engineering team, and work to understand our customers and make sure that they’re happy.

Snapflow’s customers are enterprise verticals building web sites and mobile apps. They want to build apps with global reach and cloud scale, but they don’t want to manage VMs for their databases, custom logic and REST APIs. With Snapflow, they get to configure their data model and design their custom logic with workflow, and the REST API falls naturally out of that. Further, because Snapflow provides enterprise-grade services, customers get top notch tools, security at the granularity they need, multi-tenancy to deal with app variations across groups or geographies, and guaranteed uptime. Our customers can then build their client apps however they want, but so far it’s been overwhelmingly HTML-based, so you’ll soon see tools from us to support that even more.

Technology-wise, we’ve got an amazing mix of AWS-based cloud hosting, Mongo DB and .NET on the server-side with HTML5-based tools and apps using Angular, Bootstrap and Kendo UI on the client-side.

And we’re hiring! Snapflow has more work than we can do right now and we’d love your help. I had other choices when it came to my next adventure, but Snapflow is that rare combination of people, technology and opportunity that I just couldn’t pass up. It’s the hot enterprise startup in Portland. Come join the party!





TypeScript Templates for Windows 8

As soon as I saw Anders’ talk on TypeScript, I fell in love. If you’re not familiar with it, TypeScript adds a lot of necessary features to JavaScript to make it suitable for building real apps, while still “compiling down” to JavaScript to maintain JS’s single biggest advantage: ubiquity. Further, TypeScript has tooling inside Visual Studio so that it works nicely with a wide variety of Windows projects, including Win8/JS projects.

However, while Microsoft has made a nice Win8/TS sample available, there are currently no Visual Studio project templates for building my own apps. Luckily, it was easy enough to build some:

[Figure: the three new TypeScript project templates in the Visual Studio New Project dialog]

You’ll notice three new templates: TypeScript versions of Blank App, Fixed Layout App and Navigation App. All three projects generate code that acts the same, except the code is in TypeScript instead of JavaScript (although the JavaScript is generated and very visible to your inspection).

I didn’t build the Grid App or Split App templates yet, since there is a lot of code there. I also haven’t ported any of the item templates. Now that I have the Navigation App template done (which includes an empty Page Control), the Grid and Split and other item templates will all flow from there (eventually : ).

JavaScript Patterns to TypeScript Constructs

Moving JavaScript to TypeScript involves two major pieces: porting the code from JS patterns to TS language constructs and bringing in references to the types that are used.

The first step, moving from JS patterns to TS language constructs, largely involved modules, classes and functions. For example, the navigator.js file defines the PageControlNavigator class in the Application namespace using JS patterns:

// navigator.js
(function () {
  "use strict";

  var appView = Windows.UI.ViewManagement.ApplicationView;
  var nav = WinJS.Navigation;

  WinJS.Namespace.define("Application", {
    PageControlNavigator: WinJS.Class.define(
        // Define the constructor function for the PageControlNavigator.
        function PageControlNavigator(element, options) {
          ...
          Application.navigator = this;
        }, {
          home: "",
          /// <field domElement="true" />
          _element: null,
          _lastNavigationPromise: WinJS.Promise.as(),
          _lastViewstate: 0,
          // This is the currently loaded Page object.
          pageControl: {
            get: function () { ... }
          },
          ...
        })
  });
})();

The common pattern for a module that contains private and public parts is to use a self-executing anonymous function (which wraps all the code in navigator.js) to make everything private and then to use helpers to expose public parts explicitly (like the use of WinJS.Namespace.define). Further, to define a class is a matter of gathering up a constructor function with a set of member properties and functions, which is what the WinJS.Class.define helper does. Finally, right in the middle of that is the exposing of a namespace-wide property called Application.navigator, which makes it available to anyone using the Application namespace.

TypeScript provides actual language constructs for these patterns:

///<reference path='../declare/declare.ts' />
// navigator.ts
module Application {
    "use strict";

    var appView = Windows.UI.ViewManagement.ApplicationView;
    var nav = WinJS.Navigation;

    export var navigator: PageControlNavigator;

    interface PageControl {
        getAnimationElements: () => Element;
        updateLayout: (
            element: Element,
            viewState: Windows.UI.ViewManagement.ApplicationViewState,
            lastViewstate: Windows.UI.ViewManagement.ApplicationViewState) => void;
    }

    export class PageControlNavigator {
        home: string = "";
        _element: Element = null;
        _lastNavigationPromise: WinJS.Promise = WinJS.Promise.as();
        _lastViewstate: Windows.UI.ViewManagement.ApplicationViewState;

        constructor (element, options) {
            ...
        }

        // This is the currently loaded Page object.
        get pageControl(): PageControl { ... }

        // ...
    }
}

In this TypeScript code, you’ll see the module, export and class keywords that define the elements we were defining via patterns before. Further, the use of the interface keyword lets you define a contract for an argument or variable that the TypeScript compiler can check for you as it generates the corresponding JavaScript. Finally, notice the use of the type annotations after a colon, e.g. the PageControlNavigator type on the exported navigator variable, to give the TypeScript compiler more information. All of these constructs help you to be explicit about what you’re defining, which helps you track down errors and gives you better Intellisense as you code.

As I mentioned, TypeScript provides syntax on top of JavaScript, the idea being that all JavaScript code is already TypeScript code. Further, the TypeScript compiler produces JavaScript as its output. When you’re editing a TypeScript file in Visual Studio, you can see the corresponding JavaScript, which helps experienced JavaScript programmers bootstrap their way to TypeScript.

[Figure: editing a TypeScript file in Visual Studio with the corresponding JavaScript alongside]

You’ll notice in this screenshot that TypeScript introduces shortcut syntax for function objects. For example, the following code from home.js:

WinJS.UI.Pages.define("/pages/home/home.html", {
  ready: function (element, options) {
    // TODO: Initialize the page here.
  }
});

can be written in TypeScript as follows:

WinJS.UI.Pages.define("/pages/home/home.html", {
  ready: (element, options) => {
    // TODO: Initialize the page here.
  },
});

The TypeScript lambda syntax is both shorter and works better when it comes to your intuition of what the “this” keyword means.
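As a quick illustration of the “this” point (my own example, not code from the templates):

class Counter {
  count = 0;

  start() {
    // the lambda captures the enclosing "this", so this.count
    // refers to the Counter instance, just as your intuition expects
    setInterval(() => this.count++, 1000);

    // with a plain function expression, "this" is rebound by the caller,
    // so this.count would NOT touch the Counter instance:
    // setInterval(function () { this.count++; }, 1000);
  }
}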

The other thing to notice about most .ts files is one or more reference lines at the top that look like this:

///<reference path='../../declare/declare.ts' />

This is the TS way to do “include” or “import” of code from other TS files, instead of relying on an HTML container to pull in the right files.

TypeScript Declarations

A lot of the work porting the Win8/JS templates to TypeScript was replacing the use of JS patterns with TS constructs (which, ironically, generated back the same JS code I started with), but an equal amount of the work was in building TypeScript declaration files (*.d.ts files). Declaration files exist because the JavaScript community has created a large number of libraries, none of which come with TypeScript type information. TypeScript allows you to supply the type information for existing JavaScript libraries, e.g. jQuery, Knockout, WinJS, etc., in these external declaration files.
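
A declaration file contains no implementation, just ambient type information that the compiler uses for checking and IntelliSense. A simplified, hypothetical fragment in the style of winjs.d.ts (not the actual file) might look like this:

// Simplified, hypothetical fragment in the style of winjs.d.ts;
// no implementation, just ambient type information
declare module WinJS {
    export class Promise {
        static as(value?: any): Promise;
        then(onComplete?: Function, onError?: Function, onProgress?: Function): Promise;
    }

    export module Navigation {
        export var location: string;
        export function navigate(location: string, initialState?: any): Promise;
    }
}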

The WinJS sample I mentioned earlier (Encyclopedia) provides a number of declaration files with type information for the HTML DOM, the WinRT object model, jQuery and WinJS. Unfortunately, the one for WinJS is far from complete, which meant that a lot of the work I did to get the Win8/TS templates compiling without warnings was augmenting that file. All of the declaration files needed to make the templates compile as generated are provided in the “declare” folder, but I’m sure there are holes that you’re going to run into as you add your own code. Of course, the authoritative winjs.d.ts file is part of the TypeScript distribution, so I’ll work with the nice folks on the TypeScript CodePlex project to get my changes merged in.

Installing the Win8/TS Templates

To get started using the Win8/TS templates I’ve built, you’ll first need to install the TypeScript plug-in for Visual Studio 2012. Currently these templates have been tested under TS 0.8.1.1 only, and the generated .jsproj files have this path hard-coded in. The web-based HTML Application with TypeScript template uses a trick to do away with hard-coded paths that I have yet to figure out.

You can download the Win8/TS samples from here and extract the three folders (blankts, blankfixedts and navts) into your VS2012 JavaScript project template folder, e.g. C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ProjectTemplates\JavaScript\Windows Store\1033. Once the files are there, shut down all instances of Visual Studio 2012 and execute “devenv.exe /InstallVSTemplates” as admin. If you have multiple copies of Visual Studio installed, make sure you’re executing the one for VS2012.

Once you’ve completed those steps successfully, you should see three new TypeScript-based templates as shown in the very first figure of this blog post. Enjoy.





Windows 8 and Visual Studio 2012 and Data Visualization, Oh My!

This month is a big one for Microsoft developers. Windows 8 will be generally available in stores on a variety of form factors starting on 10/26, with the BUILD conference following closely in the last week of October. Add to that the Visual Studio 2012 RTM earlier this summer and a Windows Phone 8 release coming soon, and there's a lot going on if you're a Windows developer.

If you've read my previous editor's notes this year, you already know that Telerik takes Windows 8 and Visual Studio 2012 very seriously. As of 10/17, we've officially released our set of XAML and HTML controls for building Windows Store apps on Windows 8, including data visualization controls like charts, gauges and bullet graphs. These controls aren't just ports from old platforms, but controls that have been re-imagined for the touch-centric mobile devices that Windows 8 will be shipping on. In addition, we've updated JustCode to support Windows Store project types, JustDecompile to decompile Windows Store and C# 5.0 apps and our JustTrace profiler to target Windows Store apps. If you'd like to see what our amazing customers have already done with all of this great Windows Store support, check out our Showcase Gallery.

Further, as the modern UI style becomes more popular, we’re continuing to push touch and metro UI themes into almost all of our suites, including ASP.NET AJAX, WPF and Silverlight. Also, these platforms, along with WinForms, get a huge new control this Q: the PivotGrid, providing cutting-edge data visualization for your custom apps.

In this Q, we've focused on Windows Store apps, Visual Studio 2012 and data visualization for desktop, web and mobile app development across the board, but that's not all! We've added Coded UI support to our WinForms and WPF controls, full SharePoint compliance to our ASP.NET AJAX controls and cloud storage for all of your Visual Studio settings to JustCode! Check out our webinars the week of 10/22 to see exactly what's new in your favorite Telerik products.

Chris Sells
VP, Developer Tools
@csells

P.S. And while the Developer Tools division has been hard at work, so has the rest of Telerik. For example, we've recently welcomed Eric Lawrence into our family with his excellent Fiddler tool, and the folks in our agile project management division TeamPulse have introduced Kanban boards and integration with TFS 2012 in their latest version.





Telerik’s evolving platform guidance for .NET developers

Telerik often gets questions from its customers about which of the multitude of app frameworks Microsoft provides for .NET developers they should pick. WinForms? WPF? Silverlight? ASP.NET? What’s the right solution for their problem? The answer is always the same: it depends.

Unfortunately, that’s not very helpful, so last year a set of the best and brightest that Telerik has to offer sat down and figured out just what it depends on and whether we could offer clear, concise guidance for our customers. The answer was “yes we could,” so we did that in 2011.

However, it’s been a busy year that’s included two major events in the life of a .NET developer: Silverlight desktop and web have been shelved and Windows 8 has been born. So, with that in mind, we’ve updated the platform guidance to take those two important changes to the .NET developer landscape into account; you can read all about it in Telerik’s 2012 platform guidance for .NET developers.

Or, if you’re already familiar with the 2011 guidance, the rest of this post will be about what’s changed in 2012.

Desktop Application

Desktop applications represent the range of applications from those supporting internal information workers to those delighting consumers. These applications typically involve richly interactive interfaces, either for heavy-duty data management or entertainment. The key characteristic of desktop apps is the need to take advantage of the full range of native capabilities of the platform.

Ideal .NET Platform: WPF

WPF provides the ideal platform for building desktop apps. With mature, rich tooling provided in Visual Studio and Expression Blend, readily available components that address the full range of app styles, a large developer community and ClickOnce deployment, WPF gives the .NET developer all of the power of building “native” Windows software with a simple deployment model.

Key Advantages of WPF:

[Special Silverlight Guidance Note: Silverlight is also a good candidate for building desktop apps, sharing many of the same characteristics as WPF. While it seems clear that Microsoft will not release a major version beyond the recently released Silverlight 5, their commitment to 10 more years of support, as well as continued 3rd-party vendor support, means that it’s a viable alternative to WPF for new or existing Silverlight projects.]

Tablet Application

The use of tablets and touch-centric apps within companies is on the rise, and tablet sales are expected to double in 2012 (Gartner). Unlike their mobile smartphone counterparts, which frequently complement existing desktop apps, analysts see the potential for tablets to be more disruptive, replacing certain types of desktop apps in the enterprise. For .NET developers, it is important to address this trend and pick a Microsoft platform that will deliver the best tablet experience. Many platforms available from Microsoft can be used to build touch-enabled apps, even WinForms, but Microsoft is providing clear guidance for modern, touch-first apps with the arrival of Windows 8.

Microsoft’s Windows 8 introduces a new model for building touch-enabled, tablet friendly apps that are meant to be content-focused, easy to use with no documentation, touch-centric and tailored to the device. These apps will run in a new dedicated environment only available in Windows 8.

Since Microsoft is making it clear that Windows 8 is their ideal platform for tablet apps, the bigger question developers must answer is how to develop tablet apps. Tablet apps can be built with either XAML/.NET or HTML/JavaScript. Both approaches have access to the full capabilities of the device and share a common Windows Runtime API.

Ideal Tablet Platform: XAML and .NET

When building Windows 8 tablet apps, choosing between XAML/.NET and HTML/JS largely depends on the kinds of existing assets within an organization and the skills of the developers, but we recommend XAML and .NET for most tablet app development. Tablet apps built with XAML and .NET not only offer the familiar .NET programming paradigms (and tools) that have been popularized over years of .NET and XAML development, but a large amount of the code, assets and skills carry over to Windows Phone 8 (WP8) app development. In contrast, it is not possible to leverage HTML/JS assets if you’re also building apps for WP8.

If supporting WP8 is not a key consideration for your tablet development, then it is important to know that Microsoft has worked to ensure that the capabilities, tooling and run-time performance of both XAML and HTML tablet apps are as close to identical as possible. At that point, your choice between the two options is about the past and future technology strategy of your organization, not the capabilities of the platform.

So while we primarily recommend XAML and .NET for tablet app development, here are key advantages to both approaches that should be considered:

Key Advantages of XAML for building Metro-style apps:

Key Advantages of HTML for building Metro-style apps:

[Game support note: Both Windows Phone and Windows 8 provide access to DirectX for building high-performance “twitch” games. This access is provided via .NET XNA in Windows Phone 7 and via native DirectX in Windows 8. If you are planning on building high-performance games for these Microsoft platforms, we suggest this third option.]

Where are we?

It’s clear that Silverlight is in no sense “dead.” At Telerik, we still sell a large number of licenses to Silverlight developers, although from an engineering point of view, we spend more time making sure we’re taking the best advantage we can of WPF. Also, even if we don’t recommend starting new desktop or web projects in Silverlight, it’s still alive and well on Windows Phone 7 & 8, and it provides an excellent springboard into XAML development on Windows 8. If you think of Silverlight as one of Microsoft’s implementations of XAML, along with WPF and the Windows 8 support, you’ll have the right mindset to move your Silverlight web and desktop apps, developers, skills and assets forward to WPF on the desktop, Silverlight on the phone and XAML on Windows 8.

0 comments




Telerik Loves Windows 8 and Visual Studio 2012 RTMs!

Yesterday’s release of Visual Studio 2012 and Blend for Visual Studio 2012 marks the beginning of a new era. In some ways, VS2012 and Blend are incremental releases, adding even better support for building enterprise and consumer apps and services for the desktop and the web. However, in one very important way, the release of VS2012 and Blend, together with the release of Windows 8 earlier this month, signals a whole new focus for the platform – that of touch-centric tablets – and with it, a whole new way to package and distribute apps for the Windows operating system – the Windows Store.

If Windows 8 sells even half of what Windows 7 has sold (which seems low, considering the support for a great number of new form factors), then that will represent 300 million customers all looking for new Windows 8 apps in the Store. Currently, that Store holds about 500 apps and even if Microsoft increases that number to 5,000 by general availability in October, that’s far short of the 500,000 apps that similar app stores have. In short, Windows 8 is going to have lots of users and those users are going to want to buy lots of apps. This is, of course, why Visual Studio 2012 and Blend are so important – they’re the tools you can use to design, develop and package your app for the Store and tap into those hundreds of millions of customers. Make no mistake – Windows 8 represents nothing short of a reboot of the Windows developer ecosystem and Visual Studio 2012 and Blend are the keys to that ecosystem.

Windows 8, Visual Studio 2012 and Blend are important to Windows developers, which makes them important to Telerik customers. Because of that, we’ve been on the cutting edge here since the BUILD conference in September, releasing metro themes that first week and supporting the Beta and RCs in our tools and controls. And now I’m happy to announce that we fully support Windows 8, Visual Studio 2012 and Blend across nearly all of our Windows developer products. And not only do we support them, but we take special advantage of their unique features in our products, as you can read in the following posts:

Of course, this is just the beginning of the tablet and mobile era for Windows developers, so count on Telerik to continue to push into Windows 8 and Windows Phone 8 for building touch-centric apps for both the Windows Store and the Windows Phone Store, as well as continuing to push our products to meet your needs on the desktop and on the web. Telerik’s been right there through the last decade of Windows development and you can expect us to be there for the next decade.

Chris Sells
VP, Developer Tools
@csells




