Using var for implicitly typed variables in C#

I wanted to spend some time talking about the “var” keyword in C# because I’ve heard a number of misconceptions about it since it was introduced.

The var keyword is really just syntactic sugar that allows the C# compiler to resolve the type of a variable at compile time.  For example, the code

int i = 0;

is equivalent to

var i = 0;

The type of variable “i” has simply been inferred from its assignment.  The right side of the assignment is an integer, so the compiler treats i as an int throughout the rest of the scope of the variable.

var really shines when using anonymous types.  For example, I could write code like this.

var person = new { FirstName = "William", LastName = "Riker" };
var numbers = Enumerable.Range(0, 100).Select((i) => new { Number = i, Square = i*i });

Using var with anonymous types allows us to assign them to a variable without explicitly defining a type for the variable.  There still is a type for the variable, but it is a class that is generated by the compiler.

One misconception is that the var keyword itself is a “type”.  Another is that it causes the compiler to not need to know the type of the variable.  Both of these are untrue, and using var like this does not remove the statically typed nature of the language.  This is why whenever you use a “var” declaration, you must include an assignment.

For example, this wouldn’t work; the compiler throws an error because it doesn’t know what type “j” is supposed to be.

var j;
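To see that a var variable really is statically typed, consider this small sketch.  The compiler infers string once at the declaration, and any later assignment of another type fails at compile time (the variable name here is just for illustration):

```csharp
var name = "Picard";   // inferred as string at compile time
name = "Janeway";      // fine, still a string

// The line below would not compile, because name is a string:
// name = 42;   // error CS0029: cannot implicitly convert type 'int' to 'string'
```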

Inversion of Control

I spent a few weeks earlier this year doing some research on Castle Windsor (a specific IOC container) to integrate it into a solution at work.  In this and future posts, I plan to discuss some of the things I’ve learned over the course of that research.

Inversion of control (IOC) is a way to link together classes with different dependencies at run time instead of explicitly creating the dependencies in code.  A good analogy is that an IOC container is like glue.  It can take classes built in code and combine them together to construct one or more objects.  It doesn’t really add functionality to the application, but instead binds and holds everything together.

Dependency injection and inversion of control often get confused and used interchangeably as terms.  I like to think of it this way.  Dependency injection is really an abstract design pattern used on a class-by-class basis.  It allows other objects to be “injected”, or passed into, an individual class.  IOC is a pattern whereby a container takes a plethora of classes and combines them all together to construct some object tree from them, thereby making dependency injection easier to fulfill.  The container for the IOC pattern is often structured using a factory design pattern.

Imagine there are several classes all utilizing dependency injection.  Each class has 2 or 3 constructor parameters, and each of those parameters is itself a class that must be constructed, with its own parameters to fulfill.  This gets complex fast.  Just think of it as a huge object tree that continues to get linked and put together.  The more complex the program, the larger the number of dependencies to fulfill, and the more difficult the object tree becomes to construct.

This is where IOC steps in to help.  By relying on the IOC container, it can resolve those references and automatically create the objects for us after it has been properly configured.

IOC containers work in a 3 step process.

  1. Component registration, also known as installation in some frameworks
  2. Object resolution, also known as resolving, or constructing an object
  3. Disposal of objects

Here’s a brief description of each step.  I will post some more concrete examples of working with Castle Windsor later which should help these make more sense.

In the registration process, a set of rules is defined which determines which specific implementation of a class the container will use for a given request.  For example, let’s say we are using the FordTaurus example from my dependency injection post.  We could tell the container that whenever it encounters an Engine class, it should construct a V6Engine for it.

The object resolution process is where most of the work for the container happens.  Typically this is one or more calls to have the container construct some object for the program.  The container will resolve any unknown classes and pass them in as appropriate, as defined by the registration process.

The last step is to dispose of the objects.  This is either telling the container to release the object so it isn’t tracked anymore, or to simply do any clean up logic for the object itself.
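As a rough sketch of those three steps using Castle Windsor's fluent registration API (Engine and V6Engine are the hypothetical classes from my dependency injection post):

```csharp
using Castle.Windsor;
using Castle.MicroKernel.Registration;

var container = new WindsorContainer();

// 1. Registration: map the Engine abstraction to a concrete V6Engine
container.Register(Component.For<Engine>().ImplementedBy<V6Engine>());

// 2. Resolution: ask the container to build an Engine, fulfilling
//    any of its own dependencies along the way
var engine = container.Resolve<Engine>();

// 3. Disposal: tell the container to release the object
//    so it is no longer tracked
container.Release(engine);
```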

The concept seems very abstract at first. In a future post, I will show an example and put this into action. Hopefully, you will be able to see the benefits and be able to try it out in your next project too.

Design Patterns: Dependency Injection

Dependency injection is one of the most useful patterns I have learned since becoming a professional developer.  In my opinion it makes code easier to read and encourages me to write reusable code with the ability to unit test at a very granular level.

“Dependency injection” sounds really complicated, but the idea behind dependency injection is simple.

For any objects a particular class is dependent on, rather than creating them explicitly in that class, the dependencies should be injected by becoming parameters of the constructor or properties which are assignable after the object is created.

Here’s a somewhat simple example.  Let’s say we have a class named FordTaurus.  In every FordTaurus there is an engine (unless it’s in a junk yard).  Let’s say I have a specific model of car and decide that it should have a V6 block in it.  So in the constructor, I naturally assign a V6Engine to the EngineBlock property.

public class FordTaurus {
	public Engine EngineBlock { get; private set; }

	public FordTaurus() {
		EngineBlock = new V6Engine();
	}
}

public class Engine { /* ... */ }
public class V6Engine : Engine { /* ... */ }

While this does work, I’ve made my Taurus dependent on a specific engine.  Let’s say the V6Engine required a constructor parameter; how would I pass that in to the V6Engine?  I would have to pass it in to the FordTaurus class itself somehow.

Well, using a dependency injection paradigm, I would change my code to something like this.

public class FordTaurus {
	public Engine EngineBlock { get; private set; }

	public FordTaurus(Engine e) {
		EngineBlock = e;
	}
}
Now that the dependency is passed into the constructor, I no longer need to worry about what parameters are required to build the engine.  I have also made the class more flexible, as I could create a new engine, say an Inline4Engine class, and pass it into my FordTaurus class to use.

As dependencies are moved to the constructor, they can tend to clutter it up.  However, if a class depends on too many other classes, it’s likely violating the single-responsibility principle of SOLID code.

A step further would be to use an interface to abstract the engine.  Then I could use techniques such as mocking to unit test the FordTaurus without having to test any functionality in the Engine or its subclasses.  I’ll post on that at some future time.
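As a quick sketch of what that might look like (IEngine is a hypothetical interface I'm introducing here; the earlier example used a base class):

```csharp
public interface IEngine {
	void Start();
}

public class V6Engine : IEngine {
	public void Start() { /* ... */ }
}

public class FordTaurus {
	public IEngine EngineBlock { get; private set; }

	public FordTaurus(IEngine e) {
		EngineBlock = e;
	}
}
```

A unit test could now hand the FordTaurus a fake or mocked IEngine instead of a real one.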

Another common technique used with dependency injection is to inject the dependencies with an IOC (inversion of control) container such as Castle Windsor or Unity. These containers allow the dependencies to be injected at run time based on a set of predefined rules. Using these two patterns in conjunction can be extremely powerful. However, I will examine IOC containers in another post as well.

Running multiple Windows services in a single process

Sometimes in a large business application, it’s convenient to have more than one Windows service available so that certain pieces of the application can be managed separately.  This can be done in two ways.  The first is rather straightforward: for every service, create a separate Windows service executable.

The second is still straightforward, but less common, and another useful tidbit I found while reading the MSDN documentation: make a single service executable which contains many services.  The advantage of this is that the services share the same executable process.  This can reduce memory usage, especially if the program does things such as cache data in memory.  Or, if certain services are dependent on others running, they can inspect whether the others are running using shared memory such as static variables.  I’m going to show you how to set this up.

When setting up the services, simply create a second service.  To wire it up, where your program runs, add both services to an array that gets passed to ServiceBase.Run().

ServiceBase[] ServicesToRun;
ServicesToRun = new ServiceBase[]
{
    new MultiServiceI(),
    new MultiServiceII()
};
ServiceBase.Run(ServicesToRun);

It’s as simple as that!  Create an extra service installer for your new service and you’ll be able to run them both.  When you start both services, check the PID in the task manager; you’ll notice it’s the same.


To demonstrate that the services are actually sharing memory, in my “OnStart” method, I logged the service that is starting and the last service that started.  The strings have been changed for the respective services.

Program.Log("Started MultiService I, Last Service Started = " + Program.LastService);
Program.LastService = "MultiService I";

The implementation of these two are quite simple in the Program class.

public static string LastService = "";

public static void Log(string logText)
{
    File.AppendAllLines("C:\\multiservice.log", new[] { logText });
}

So, after running the first service followed by the second service, I get this in my log file:

Started MultiService I, Last Service Started = 
Started MultiService II, Last Service Started = MultiService I

As you can see, when I start the second service the name of the first service is output as the last service that was run.

I hope you find this useful.  You can find a full solution on my samples repository.

Using Custom Commands with Windows Services and PowerShell

A feature I recently came across when reading through the MSDN documentation was the ability to implement “custom commands” for Windows services.  After reading about it, I wish I had known about it sooner.

A custom command is a way to essentially send a signal to a service that you want it to process a command.  The command can be any integer from 128 to 255, but has no parameters, and returns no value.

There are a few situations where I immediately thought this may be useful.

1. A service health check.  I’ve seen several scenarios in the past where, at defined intervals, a Windows service will do something to say it is still alive.  This could be used to immediately force the service to do a health check, log out to a file, etc.

2. A service wake up.  Some Windows services just poll some sort of queue to see if work is available.  If it is not, they will sleep for a given amount of time.  While this works, if someone is trying to get something done, the service may not respond to the request due to the wait time.  Implementing a command to wake up the service could have it check the queue for work to process immediately.

Implementing a custom command on a service is very simple.  In a service class that inherits from ServiceBase, simply override the OnCustomCommand method.

Here’s a very simple bit of sample code I wrote to demonstrate this functionality.

public enum CustomCommands
{
    WakeUp = 128
}

private void Log(string text)
{
    File.AppendAllLines("C:\\wakeupservice.log", new[] { string.Format("[{0:yyyy-MM-dd HH:mm:ss}] - {1}", DateTime.Now, text) });
}

protected override void OnCustomCommand(int command)
{
    switch ((CustomCommands)command)
    {
        case CustomCommands.WakeUp:
            Log("Wake up!");
            break;
    }
}

Now to actually execute the command.  This can be done through the ServiceController class, or, if you prefer the command line, through PowerShell as well.

(Get-Service "CustomCommandService").Start()
(Get-Service "CustomCommandService").ExecuteCommand(128)
(Get-Service "CustomCommandService").Stop()
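For completeness, here's what the same sequence might look like from C# using the ServiceController class (the service name matches the PowerShell example above):

```csharp
using System.ServiceProcess;

using (var controller = new ServiceController("CustomCommandService"))
{
    controller.Start();
    controller.WaitForStatus(ServiceControllerStatus.Running);

    // Send custom command 128 (WakeUp)
    controller.ExecuteCommand(128);

    controller.Stop();
}
```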

My log file now shows the following:

[2013-07-19 23:34:31] - Started!
[2013-07-19 23:35:04] - Wake up!
[2013-07-19 23:35:12] - Stopped!

That’s all there is to this post.  I hope you find this useful!

A full solution is available on my BitBucket samples repository.

Debugging a Windows Service

Have you ever had this message when writing a windows service in Visual Studio?

Cannot start service from the command line or a debugger.  A Windows Service must first be installed (using installutil.exe) and then started with the ServerExplorer, Windows Services Administrative tool or the NET START command.

I was a bit surprised the first time I hit the debug button in Visual Studio and ended up having to manually attach to the service every time I wanted to debug it.  After doing this over and over again, I learned a nice little trick that I like to put inside a lot of the Windows services I write, as it easily lets me get past that pesky error and debug straight from the IDE.

// If the debugger is attached, the program runs inline rather
// than it being run as a service.
MyService serviceToRun = new MyService();

#if DEBUG
if (Debugger.IsAttached)
{
    // Call directly into the service rather than running it as a service.
    serviceToRun.Start();

    // Keep the program alive until the debugger detaches.
    while (Debugger.IsAttached)
    {
        Thread.Sleep(1000);
    }

    return;
}
#endif

ServiceBase.Run(new[] { serviceToRun });

Essentially, after the program starts, it checks whether the debugger is attached.  If it is, rather than running as a service, it calls directly into the service to execute and continually checks for the debugger to detach.  Once it has detached, the program ends.  The conditional compile directive ensures that this behavior is no longer possible once a Release build is made.

The Start method is a public method that simply calls into the overridden OnStart method for the service.
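A minimal sketch of that wrapper (the empty argument array stands in for whatever arguments the service control manager would normally pass):

```csharp
public void Start()
{
    OnStart(new string[0]);
}
```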

You can see a full VS solution on my BitBucket repository.

Using Powershell to remotely configure servers: Part I

Powershell has been around for a while now, and has gradually improved in functionality with each new release.  In a nutshell, Powershell is Microsoft’s answer to the shells on other operating systems, and it provides a variety of unique functionality on top of what they offer.

One of the most beneficial features has been the ability to create remote sessions to configure servers.  Microsoft has implemented this through WinRM/WinRS, which is grounded in the WS-Management and CIM standards.  If WinRM is configured on a machine and remote shells are enabled, connecting is fairly simple.  In this post, I’ll just show how to set it up.  You do not need to do this if you just plan on connecting to remote computers; additionally, some newer versions of Windows / Server have this enabled by default.

Open up a Powershell prompt as an administrator on the PC / server that you would like to remotely manage.  After you’ve done this, simply run the command:

Enable-PSRemoting

You will likely get a series of prompts stating some of the various actions being taken to enable remoting and asking you to allow them.  Once the cmdlet is done, you’ll be able to connect with Powershell sessions to the computer if it’s trusted by the client.  If you would like to skip all the prompts, use the “-Force” parameter to automatically accept them.  If remoting is already set up, the cmdlet will simply fail and do nothing.

If you’d like more help with the Enable-PSRemoting command, simply type “Get-Help Enable-PSRemoting”.
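Once remoting is enabled on the server, a client can open an interactive session with a pair of built-in cmdlets.  A quick sketch (the computer name here is hypothetical):

```powershell
# Open an interactive remote session (WinRM must be enabled on the target)
Enter-PSSession -ComputerName "SERVER01"

# ... run commands on the remote machine ...

# Close the remote session and return to the local prompt
Exit-PSSession
```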

See the samples at my BitBucket repository
WinRM Setup Samples