Jan 11, 2015
 
The Void – That’s where your test results have gone

Recently, I have been writing integration tests that cover async functions. During this work, I have encountered a quirky pain (of my own making) that I wanted to document. The purpose of the documentation is two-fold. First, the shame and self-deprecation of calling myself out will help to reinforce this point in my mind, and second, you might be able to avoid this goof up yourself.

The spoiler and five-second take-away: when writing any function that uses async in C#, you must NEVER return void. The async keyword means that you intend to return something the caller can await. That’s it. Lesson over.

If you want to know how weird things can get if you do this wrong in integration testing, read on…

So, I wrote a function that asynchronously returns a Task<string>. Something like this:

using System.Threading;
using System.Threading.Tasks;

public class Foo
{
    public async Task<string> GetAStringFromIO()
    {
        // This is, presumably, a task that will take some time, so
        // we're going to await it.
        return await GetTheString();
    }

    private static Task<string> GetTheString()
    {
        return Task.Run(() =>
        {
            // Simulate some really slow IO
            Thread.Sleep(1000);
            return "A String";
        });
    }
}

…and to test that, I wrote something along these lines:

[TestClass]
public class MyTests
{
    [TestMethod]
    public async void CanGetTheStringFromIO()
    {
        var foo = new Foo();
        string myString = null;
        myString = await foo.GetAStringFromIO();
        Assert.IsNotNull(myString);
    }
}

To my annoyance and great dismay, the test didn’t seem to run and my breakpoint wasn’t hit. The test wasn’t even shown in the Test Explorer window. No bueno.
Test Explorer when running the test function returning void
The problem here, quite obviously, is that the async test method returns void. The test runner expects a Task back that it can await, but I am giving it nothing to observe, so it has no way to track the test or report its outcome. I wish it were more obvious, but there is no success, error, or inconclusive result – just a Test Explorer window without my test function.

After correcting the test by simply changing the return type to Task, life is good again. The Test Explorer shows my test and I can set breakpoints which actually pause execution of the test:
Test Explorer with the Task-returning test
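
For reference, the corrected test is identical to the one above except for the return type (Task instead of void):

[TestClass]
public class MyTests
{
    [TestMethod]
    public async Task CanGetTheStringFromIO()
    {
        var foo = new Foo();
        string myString = null;
        myString = await foo.GetAStringFromIO();
        Assert.IsNotNull(myString);
    }
}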

Again, when writing any function that uses async in C#, you must NEVER return void. The async keyword means that you intend to return something the caller can await. It’s just that easy.

If you are interested in a deep dive on this topic, I would recommend that you check out this article by Stephen Cleary: http://msdn.microsoft.com/en-us/magazine/dn818493.aspx. He has published books, articles, and blog posts on all things related to asynchronous programming in C#.

Dec 08, 2014
 

In my last post, I showed a way to use .NET’s System.Reflection library to generate unit tests for existing code where a pattern exists. The post showed an executable that takes the paths to a dll and an output file as input parameters, tears through the assembly looking for the string properties I wanted tests for, and generates stub code to ensure that I wouldn’t miss any of those properties.

I was discussing this with a colleague who suggested that I consider generating a T4 text template within Visual Studio to do the same work.  I had never used the templates before, but I have found them to be amazingly simple yet very useful tools to have in the arsenal.  I was able to take the code from the exe described in the last post, incorporate it into a template, and have it generate the same test code in about 5 minutes.

The benefit of using T4 text templates is that they are meta-code residing within your Visual Studio project.  The cool thing is that the output file is regenerated from the template whenever you save the template, and the transformation can also be run as part of the build.  Because of this close relationship with the code in their project, these templates can be used to auto-generate output files that respond to changes in the code.  This can be particularly handy if you are creating schema files, config files, tests, or other forms of output that need to be kept fresh as your code changes.

I took the project that I used in the last post and added the T4 text template.  To do this, I simply added a new Text Template:

Adding a new Text Template

After adding the template, you can see the template in the Solution Explorer window.  You can also see the generated code file as a child of the text template, as shown here:

The template and its generated file in Solution Explorer

When you first add a text template, Visual Studio puts header information in the .tt file for you to edit.  You’ll notice that code to be evaluated is placed inside <# #> delimiters.  The text within these delimiters is not emitted to the output file; instead, it contains the code that produces something useful in that file.

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>

You can see that the default output extension is .txt. I changed that to .cs. You can also see that the template is generated with assembly and import directives. Assembly is like adding a reference to a project and import is analogous to a using directive in a C# file. Since I already knew my dependencies from my prior project, I modified this information to look like this:

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ output extension=".cs" #>
<#@ assembly name="System"#>
<#@ assembly name="System.Xml"#>
<#@ assembly name="System.Xml.Linq"#>
<#@ import namespace="System" #>
<#@ import namespace="System.Xml.Linq"#>
<#@ import namespace="System.Reflection" #>

This block is used by the template itself when it runs. It is the list of references (assembly directives) and usings (import directives) that the template code I add later will need.

Next I will add the using statement block that I want my output file to have. This text will not be inside the <# #> delimiters because I actually want to see this code in my output file. Note that this text is not the usings that the template will need to run. Since it is outside of the delimiters, it is seen as simple text to be output:

using System;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

Next, since I was writing a unit test file generator, the namespace and class declaration needed to be added. In my prior post, these lived in funky functions in the executable that wrote “header” and “footer” information, for lack of better terminology. That was awkward; here it is quite elegant because I am writing this one-time output text exactly as it will appear in the output.

namespace MyProject.Tests
{
    [TestClass]
    public class MyTests
    {
    }
}

After this, I extracted the code from the executable I built in the prior post and put it into the template. You’ll notice the <#= ... #> expression blocks where I wanted to emit values generated by code into the output file. This is the final text template file:

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ output extension=".cs" #>
<#@ assembly name="System"#>
<#@ assembly name="System.Xml"#>
<#@ assembly name="System.Xml.Linq"#>
<#@ import namespace="System" #>
<#@ import namespace="System.Xml.Linq"#>
<#@ import namespace="System.Reflection" #>
using System;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyProject.Tests
{
	[TestClass]
	public class MyTests
	{
		<#
		var a = Assembly.LoadFrom(@"c:\temp\ProjectToTest\ProjectToTest.dll");
        foreach (var type in a.GetTypes())
        {
            // get all of the public properties of this type
            var propertyInfos = type.GetProperties();

            // sort properties by name
            Array.Sort(propertyInfos,
                delegate(PropertyInfo propertyInfo1, PropertyInfo propertyInfo2)
                {
                    return propertyInfo1.Name.CompareTo(propertyInfo2.Name);
                });

            // write property names
            foreach (var propertyInfo in propertyInfos)
            {
                if (propertyInfo.PropertyType == typeof(string))
				{

		#>
[TestMethod]
		public void MyTests_<#=  type.Name.Replace("`", "") #>_<#= propertyInfo.Name #>()
		{
			// Put in code here for your test
		}
		<#
				}
            }
        } 
		#>
	}
}

If you look closely, the “[TestMethod]” indentation looks a little nutty in the text template file; I found that I had to play around a bit to get the indentation of the output right. Also, you can see that I have hard-coded the path to the dll that I wanted to reflect over. This could be improved, for example by resolving the path relative to the template.
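
As a rough sketch (not part of the original template), the hard-coded path could be replaced by asking the T4 host to resolve a path relative to the .tt file. Host.ResolvePath is available because hostspecific="true" is set in the template directive; the relative path below is purely hypothetical:

<#
    // Sketch: resolve the assembly path relative to this .tt file instead of hard-coding it.
    // The relative path is hypothetical -- point it at wherever the dll actually gets built.
    var dllPath = this.Host.ResolvePath(@"..\ProjectToTest\bin\Debug\ProjectToTest.dll");
    var a = Assembly.LoadFrom(dllPath);
#>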

After the final updates to the template file, Visual Studio automatically generated this output when I saved the template:

using System;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyProject.Tests
{
	[TestClass]
	public class MyTests
	{
		[TestMethod]
		public void MyTests_Class1_String1()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class1_String2()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class1_String3()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class1_String4()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class1_String5()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class1_String6()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class2_String1()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class2_String2()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class2_String3()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class2_String4()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class2_String5()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class2_String6()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class3_String1()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class3_String2()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class3_String3()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class3_String4()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class3_String5()
		{
			// Put in code here for your test
		}
		[TestMethod]
		public void MyTests_Class3_String6()
		{
			// Put in code here for your test
		}
			}
}

If I had some severely repetitious checks to make, I could make this file regenerate and change with every change to my code. This is interesting for unit testing, but really has cool applications in other areas, like schema generation, documentation, etc.

Much thanks to my colleague for pointing T4 text templates out to me! This is another cool tool to keep handy.

Nov 20, 2014
 

There are times when you need to write unit tests and it is important to ensure that you get full code coverage.  That first sentence alone will tip you off that this is not a TDD scenario, but rather a situation where you need to create test cases for pre-existing code.  It is easy to overlook methods or properties when backfilling tests, so it would be nice to have a tool to help in these situations.  Thankfully, you can use .NET reflection to trawl through your assemblies, looking for the properties and/or methods you are interested in, and write stub code so that you don’t overlook important items.

I have written a quick example to illustrate how useful reflection can be in auto-generating tests and test stubs. The point here is not that I have written the be-all-end-all test generator, but to show that you can quickly cover some ground in brownfield testing with the proper application of the System.Reflection library.

In this quick example, I have 3 classes that each have some string properties:

namespace AssemblyToTest
{
    public class Class1
    {

        public string String1 { get; set; }
        public string String2 { get; set; }
        public string String3 { get; set; }
        public string String4 { get; set; }
        public string String5 { get; set; }
        public string String6 { get; set; }

    }
    
    public class Class2
    {
        public string String1 { get; set; }
        public string String2 { get; set; }
        public string String3 { get; set; }
        public string String4 { get; set; }
        public string String5 { get; set; }
        public string String6 { get; set; }
    }

    public class Class3
    {
        public string String1 { get; set; }
        public string String2 { get; set; }
        public string String3 { get; set; }
        public string String4 { get; set; }
        public string String5 { get; set; }
        public string String6 { get; set; }
    }
}

Let’s say that you’re interested in looking for all public string properties in classes in your assembly to ensure that you have proper string validation.  In TDD, you would have already written the tests before writing the code, so problem solved!  However, in many cases, you may find yourself retrofitting old code with new tests.  In this example, you can use reflection to load the assembly in which you are interested, loop through all of the types in the assembly, and then loop through the properties of each type.  Armed with this information, you can then use a writer to generate a unit test code file, complete with stubs.  You could even generate some of the test code, if you have enough domain information.

Check out this simple program for generating stub tests:

using System;
using System.IO;
using System.Linq;
using System.Reflection;  // reflection namespace

namespace UnitTestGenerator
{
    /// <summary>
    /// This program returns/generates a list of all properties for all classes in the given assembly that are of type string.  
    /// This program borrows from code found at the following sites:
    /// http://www.csharp-examples.net/reflection-property-names/
    /// http://stackoverflow.com/questions/1315665/c-list-all-classes-in-assembly
    /// http://msdn.microsoft.com/en-us/library/system.reflection.assemblyname(v=vs.110).aspx
    /// http://msdn.microsoft.com/en-us/library/1009fa28(v=vs.110).aspx
    /// http://stackoverflow.com/questions/3723934/using-propertyinfo-to-find-out-the-property-type
    /// </summary>
    /// 
    /// This program generates unit test stubs for all of the string properties of all of the types in an assembly.  It will need to 
    /// be modified to specifically generate the test stubs desired.  
    /// Argument 0 = Path to the assembly
    /// Argument 1 = Path and file name for output file
    class Program
    {
        static void Main(string[] args)
        {
            if (!ValidateArguments(args)){return;}

            using (var sw = File.CreateText(args[1]))
            {
                WriteUsingBlock(sw);
                WriteClassHeader(sw);
              
                var a = Assembly.LoadFrom(args[0]);
                foreach (var type in a.GetTypes())
                {
                    // get all of the public properties of this type
                    var propertyInfos = type.GetProperties();

                    // sort properties by name
                    Array.Sort(propertyInfos,
                        delegate(PropertyInfo propertyInfo1, PropertyInfo propertyInfo2)
                        {
                            return propertyInfo1.Name.CompareTo(propertyInfo2.Name);
                        });

                    // write property names
                    foreach (var propertyInfo in propertyInfos)
                    {
                        if (propertyInfo.PropertyType == typeof(string))
                            WriteTestForProperty(sw, type.Name.Replace("`", ""), propertyInfo.Name);
                    }
                }

                WriteClassFooter(sw);
            }
        }

        private static void WriteUsingBlock(TextWriter sw)
        {
            sw.WriteLine("using System;");
            sw.WriteLine("using System.Linq;");
            sw.WriteLine("using Microsoft.VisualStudio.TestTools.UnitTesting;");
            sw.WriteLine("");
        }

        private static void WriteClassHeader(TextWriter sw)
        {
            sw.WriteLine("namespace MyProject.Tests");
            sw.WriteLine("{");
            sw.WriteLine("    [TestClass]");
            sw.WriteLine("    public class MyTests");
            sw.WriteLine("    {");    
        }

        private static void WriteClassFooter(TextWriter sw)
        {
            sw.WriteLine("    }");
            sw.WriteLine("}");
        }

        private static void WriteTestForProperty(TextWriter sw, string typeName, string propertyName)
        {
            sw.WriteLine("      [TestMethod]");
            sw.WriteLine("      public void MyTests_{0}_{1}()", typeName, propertyName);
            sw.WriteLine("      {");
            sw.WriteLine("          // Put in code here for your test");
            sw.WriteLine("      }");
            sw.WriteLine("");
        }

        private static bool ValidateArguments(string[] args)
        {
            if (!args.Any() || args[0] == "/?" || args[0] == "/h" || args[0] == "/help" || args.Count() != 2)
            {
                Console.WriteLine("GenerateUnitTests takes 2 arguments.  The first is the dll for which tests will be created and the second is the output file.");
                Console.WriteLine("Usage: GenerateUnitTests <dll file path> <output file>");
                return false;
            }

            return true;
        }

    }
}

After running the generator code, you will get stub code that looks like this:

using System;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyProject.Tests
{
    [TestClass]
    public class MyTests
    {
      [TestMethod]
      public void MyTests_Class1_String1()
      {
          // Put in code here for your test
      }

      [TestMethod]
      public void MyTests_Class1_String2()
      {
          // Put in code here for your test
      }

      [TestMethod]
      public void MyTests_Class1_String3()
      {
          // Put in code here for your test
      }

      [TestMethod]
      public void MyTests_Class1_String4()
      {
          // Put in code here for your test
      }

      // The generator created many more tests, but I have 
      // omitted them for the sake of brevity.

This is just stub code from the example, but you could expand the generator to fit your testing needs.

TDD is definitely the desired way to write greenfield code.  The benefits of focus alone prove that out.  However, when you have to retrofit tests while refactoring or generate tests for existing code, using .NET’s reflection capability will save an enormous amount of “pattern pounding” and will help make sure you are thorough!

Apr 30, 2014
 

Don’t Repeat Yourself, or DRY, is a key principle of software engineering.  Various sites on the web and various courses state this principle as

“Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.”

One of the things this means, practically, is that the abstractions that you use in your code should provide for reuse such that there is never a need to copy code.

Reuse of code can be accomplished by properly designing your objects (abstractions) such that they have a single responsibility.  For example, if you have several objects that communicate with a data layer, you should abstract the data layer away into a data access class that is then injected into the constructor of objects that need to converse with the data layer.  This provides for better data encapsulation and allows for better unit testing as you can create a fake for the data access object.  Another example of how you can reuse code is through the use of base classes that provide functionality that is common to a set of potentially derived classes.
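
As a minimal sketch of the constructor-injection idea (the names here are purely illustrative, not from any real project):

// Illustrative only: the data layer is hidden behind an abstraction that is
// injected into the constructor of the object that needs it, so a unit test
// can supply a fake IDataAccess instead of touching the real data layer.
public interface IDataAccess
{
    string GetCustomerName(int customerId);
}

public class CustomerService
{
    private readonly IDataAccess _dataAccess;

    public CustomerService(IDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public string GetGreeting(int customerId)
    {
        return "Hello, " + _dataAccess.GetCustomerName(customerId);
    }
}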

I recently presented a course on best practices for Object Oriented Design at my company and ran across a very cool tool that was mentioned in a Pluralsight course by Steve Smith.  The tool is Atomiq by goldfinch.  It is a completely free tool that they make available (self-admittedly) to drum up business for their static analysis tool, Nitriq.  Atomiq searches your code and finds code duplication and similarities.  It provides statistics and an easy way to navigate to duplications within the code.  It provides a very nifty graphic to show where duplicates are, and it even has a command line exe for which a threshold can be set for use with a continuous integration builder (very cool!).

I decided to test drive Atomiq using some old code to which I had access.  I knew this code would give Atomiq a run for its money as this code has been around for a decade and has been touched and tweaked and, well, defiled by many developers.  After letting Atomiq analyze the entire project, this is the wheel graphic that was generated by Atomiq:


Wheel Output From Atomiq For Solution with Lots of Repetition – (Class Names Removed to Protect the Innocent)

As I suspected, Atomiq found many sources of repetition in this solution.  Each line crossing this Spirograph-looking picture represents an opportunity to refactor this code so that the repetition goes down and the maintainability goes up.

I had another project where I was prototyping a couple of ICommand objects for use in ESRI’s ArcMap.  In this project, the two commands were generated using the ArcMap ICommand generation tool in Microsoft Visual Studio.  Because the classes were auto-generated, the same base code was added to each.  For example, this code (along with a few other lines) was auto-generated into both classes:


#region COM Registration Function(s)
[ComRegisterFunction]
[ComVisible(false)]
static void RegisterFunction(Type registerType)
{
    // Required for ArcGIS Component Category Registrar support
    ArcGisCategoryRegistration(registerType);
}

[ComUnregisterFunction]
[ComVisible(false)]
static void UnregisterFunction(Type registerType)
{
    // Required for ArcGIS Component Category Registrar support
    ArcGisCategoryUnregistration(registerType);
}
#endregion

This duplicated code caused Atomiq to (rightly) generate this graph:


Atomiq Output Because of Duplicate Auto-Generated ICommand Template Code

For this particular duplication, Atomiq found 65 lines that were similar.  Atomiq also showed a similarity view that displays the same code side-by-side:


Atomiq Similarity View Showing ESRI ICommand Boilerplate

This is one example where Atomiq shows a great place to do some refactoring.  By pulling all of the ESRI ArcMap boilerplate code into a base class, we can move all of this duplicate code to one place and then create derived classes from this base class.  For this example, I just started by refactoring out a part of the similar code, as shown here:


using System;
using System.Runtime.InteropServices;   // for the COM attributes below
using ESRI.ArcGIS.ADF.BaseClasses;

// Base class from which all other ESRI ICommands should be derived for code reuse
[ComVisible(true)]
public abstract class MyBaseCommand : BaseCommand
{
    [ComRegisterFunction]
    [ComVisible(false)]
    static void RegisterFunction(Type registerType)
    {
        // Required for ArcGIS Component Category Registrar support
        ArcGisCategoryRegistration(registerType);
    }

    [ComUnregisterFunction]
    [ComVisible(false)]
    static void UnregisterFunction(Type registerType)
    {
        // Required for ArcGIS Component Category Registrar support
        ArcGisCategoryUnregistration(registerType);
    }

    // Move other common ICommand code here

}

After reanalyzing the code, the 65 similar lines dropped to 55:


After refactoring – Duplicate Lines Dropped from 65 to 55

By iterating in this same fashion, Atomiq can help to eliminate duplications and make this code more maintainable.

While there is no silver bullet, Atomiq is a very good (and very free) tool to use in making sure that you write code that complies with the DRY principle.  When coupled with the command line version for a continuous integration builder, this is a great way to improve code quality.

Aug 07, 2013
 

 


Image of a Garmin G1000 Display taken 3000′ MSL over south central Tennessee

I consider myself both a software developer and a pilot.  I was lucky enough to have the chance to earn my private pilot certificate in 2011.  I fly small, single-engine aircraft, and they aren’t very sexy, but I think being up there looking down is one of the coolest things I’ll ever get to do.

Being a software guy who cares a great deal about user experience and usability, I really enjoyed flying aircraft equipped with the “glass cockpit” of the Garmin G1000.  Sure, I was also trained on the standard “six pack” of “steam gauges” (the old gauges with the dials), but I have done most of my flying in planes with computerized avionics.  When you’re in command of a plane (even a small plane), you quickly find that you have a lot of duties.  You are required to simultaneously fly a prescribed heading and altitude, maintain a correct airspeed, watch for traffic, ensure that you are tuned to the correct communication and navigation frequencies (both primary and standby), watch your engine for anomalies, properly monitor the fuel/air mixture, properly trim control surfaces, communicate with ATC (air traffic control) and other aircraft, watch for weather issues, oh yeah – not run out of fuel, and try to squeeze in some sightseeing along the way.  When you are juggling the myriad duties in the air, you become very appreciative of the work that went into your aircraft’s flight avionics.

After flying for a while, I have distilled a few of the things that I have learned from interacting with flight systems into three simple lessons that can be applied to all software, regardless of the domain.

1.  Users Don’t Want to Hunt for Data

As you can see from the image in this post, pilots have to deal with an incredible amount of data.  In an instant, a pilot needs to have access to his airspeed, altitude, attitude, heading, comm frequencies, nav frequencies, transponder code, altimeter (barometric pressure) setting, waypoints, nearest airport, weather, engine temperature, oil pressure, fuel level, and other bits of information.  If a pilot had to dig through layers of user interface (UI) for that information, there would be no way that he could properly and safely pilot the plane.  In the old days, the answer to this was to distill the data to just the pieces critical to flight.  The answer then was the “six pack” of steam gauges.  From these, you could see your airspeed, attitude, altitude, turn coordinator, heading, and vertical speed indicator.

The standard "six pack" of guages in a Cessna Skyhawk 172 SP

The standard “six pack” of gauges in a Cessna Skyhawk 172 SP

This smaller set of data kept many pilots safely in the right place for many years, but during those years, pilots were also fiddling with paper charts and approach plates and were calling back to ATC for weather updates.

The new glass cockpits have managed to improve on the tried-and-true analog gauges by skillfully presenting the pilot with data through an intuitive UI.  The user can gather any data necessary for flight at a glance and can view potential weather hazards on the secondary display, the multi-function display (MFD).  The pilot does have to interact with the system to enter headings, frequencies, squawk codes, and look up airport information, but the information is never more than a few intuitive entries away.

When considering the experience our users will have with the software that we write, we must consider how to get them the information that they need with as little pain as possible and without data over-saturation.  We might ask, what is crucial for the user to have at hand at all times (like airspeed and altitude)?  What can be called up on demand (like airport layouts)?  What is the quickest way to get the user to the data they need with the fewest interactions?  What can I infer about the user’s current behavior that will help me serve them up what they will need next more quickly?

2.  Interaction Must Be Intuitive

With both analog and digital interfaces, the pilot has to enter information into the aircraft.  On old systems, the pilot has to “dial in” the altimeter settings and frequencies.  On newer systems, the pilot still has to enter these values, but with a common set of controls.  Further, on new avionics, the number of knobs and switches is small, but their mode of use expands based on the current context of the display.  This allows for a simpler human-avionics interface, which helps the pilot by limiting his choice of input device.  There are usually only one or two ways to enter data, which helps speed training and makes flying safer through a simple, straightforward interface.

Another example of this simplification of input is my old Windows Phone from 2004 versus the iPhone 5 I carry today.  That phone from 10 years ago had a clunky interface.  It had menus and sub-menus and a tiny screen, and it required a stylus just to navigate.  It was an attempt to force an old menu paradigm from a larger PC onto a tiny screen form factor.  It was painful.  Fast forward to the iPhone 5 that I carry today with iOS 7.  It is sleek.  It is intuitive.  The entire device has all of four buttons.  My interaction with the iPhone is much more enjoyable because I don’t have to carry around a stylus.  Menus are context-aware.  Apps are single-task specific.  I’m never more than a few gestures away from what I want.

No matter what the industry is for which we write our software, we have to construct the path of least resistance between users and their content.  It is our duty to the user to minimize the interaction they have to make with a machine to get what they need.

3.  Software Must Perform Properly Every Time

The week before my checkride with the FAA examiner for my pilot certificate, my instructor and I taxied to runway 36 at the Madison County Executive airport (MDQ).  I taxied into position, made my radio call, and firewalled the throttle.  By about 1000 feet down the runway, I knew something was wrong.  From my training, I instinctively called out the airspeed as it climbed to my rotation speed.  However, this time there was no airspeed to call out.  The G1000’s airspeed indicator had no reading at all.  With about 5000 feet of runway left, I made the split-second decision to abort the takeoff, cut the throttle and slowly apply the brakes.  My instructor and I decided to call off our flying until maintenance could look over the aircraft.  Later that day, when I returned to the airport, I was informed that the person who had rented the aircraft before me had forgotten to put the pitot tube (the tube that takes in air to determine airspeed) cover back on and the aircraft technician had found that a tiny bug had made its home within the tiny orifice.

This was a minor incident, of course, but it proves the point that people rely on systems to provide them with the data they need to accomplish their task.  I was counting on the software to properly tell me my speed and it didn’t, or couldn’t, so I wasn’t able to fly.  Thankfully, the software in the cockpit was designed to degrade gracefully, which is something we should build into all of our software systems.  I could have taken off, circled, and landed safely and the rest of the data provided by the system would have been available.

Aircraft systems are an extreme example of systems that require a high level of redundancy and availability.  The same can be said of other systems, like telecom switches and medical devices.  Regardless of the system, we should design our software such that it is of high quality, is highly resilient, and degrades gracefully when something goes wrong.

These are just three lessons that we should strive to apply to the software we create.  I’m sure that more could be gleaned, but if we were to follow only these three, our software would be more usable and robust, and our customers would be the happy beneficiaries.

Jul 27, 2013
 

My wife is a Certified Family Nurse Practitioner and she recently joined the medical staff at a hospital here in town.  As she started her work, I was amazed at the hoops she had to jump through just to practice medicine.  There was licensing with the state board of nursing and certification and verification with the hospital.  There was the accounting for all of her continuing education hours.  She had to prove that she had the education and experience to back her credentials.  In short, she had to prove that she was the professional she claimed to be.

While she was running the gauntlet of verification, it made me consider the last time I had a similar experience.  While not even close to her experience, the closest I could come to that level of scrutiny was my last job interview.  Obviously, the ramifications of a medical professional not being properly experienced and trained would be much greater than those for me as a software developer, but it made me consider that perhaps we developers, too, should be held to a higher professional standard.

Merriam-Webster defines profession as

a calling requiring specialized knowledge and often long and intensive academic preparation

Most of us in the software development profession have had substantial training, ranging from on-the-job training up through doctorates in some computer science or information technologies field.  We have a set of specialized knowledge and have had long and intensive preparation, but that was for yesterday’s technology.  Just as new techniques in other professions are continually created and taught, perhaps we should continually be taught as new technologies emerge in our field.

As we consider how other professionals (in this case medical professionals) keep their skills up-to-date, there are a few things we might consider:

1.  Continuing Education Units

Every two years, medical professionals are required to accumulate a specified number of continuing education units (CEUs) by attending classes, seminars, and online courses to maintain their licensing and accreditation.  They must accumulate many hours, with several of those hours in their area of specialization.  In like kind, what if we software developers were required to have a certain number of CEUs to keep our jobs?  How would our profession be different if keeping up with the latest technology – even studying it at a high level – was a requirement for continued employment?  While CEUs may not be required of us as developers, we should be staying abreast of the latest in our field so that we know which tools are at our disposal.

There are many sources that we can use to meet the goal of continued education without even leaving home.  Sites like Pluralsight have an entire library of rich content on a gamut of software-related topics.  Channel 9 and MSDN from Microsoft have courses and videos covering their entire stack.  Oracle has a wealth of material available for learning Java.  There are also conferences that can be attended in person or online, like Google I/O or Microsoft Build.  These are just a few sources from which we can get our “Software CEUs,” and we should make it a point to utilize them and continually grow as professionals.

2.  Employers Monitor Skills and Training

You can bet that when you go into the emergency room or under the surgeon’s knife, the hospital has THOROUGHLY vetted every person who touches you.  The threat of malpractice is so real and the liabilities are so great that hospitals and practices are extremely picky about whom they employ, and they continually monitor their employees’ performance and training.  Likewise, we should take very seriously both the people we hire and their continual training.  Companies and managers should provide incentives and reward employees who make life-long learning a priority.

3.  Credentials Matter

In the 2002 movie “Catch Me If You Can,” Leonardo DiCaprio plays a con artist who poses under several aliases, including an airline pilot and a doctor.  In the doctor scene, he faked a diploma from Harvard and was welcomed into a hospital as a doctor, with some humorous (and some not-so-humorous) consequences.  I have seen developers come and go in my career who were a lot like the character played by DiCaprio in that movie.  They seemed to have the credentials to develop software, but their credentials were just not for real.  Hospitals work hard now to ensure that doctors and nurses are fully credentialed.  Perhaps we could apply this to the software development arena through proper degrees, professional organizations, and certifications.  I know there is some debate as to whether these things are required to be a good developer, but perhaps we should consider them when we hire developers and as a way to increase our credibility as developers.

While there is not a perfect correlation between the medical field and the software development field, there are things we can learn from the medical profession that we might be able to apply to our advantage.  Anything that we can do to promote developer proficiency and higher quality software will only prove to move the field forward and improve our ability to deliver quality solutions.