These days, “fullstack” developers are considered unicorns. That is, it’s seemingly rare for new grad developers to be fullstack; instead they pick a development path and stick with it. It wasn’t really like that 10-15 years ago. When I first started developing commercially I obviously got to learning ASP.NET (around .NET 2-ish), but you better believe I had to learn HTML/CSS and the JavaScript “framework” of the day – jQuery. There were no “fullstack” or “front end” developers, you were just a “developer”.

Lately my JavaScript framework of choice has been Angular. I started with AngularJS 1.6 and took a bit of a break, but in the last couple of years I’ve been working with Angular all the way up to Angular 8. Even though Angular’s documentation is actually pretty good, there have definitely been a few times where I felt I’d cracked a chestnut of a problem and thought “I should really start an Angular blog to share this”. After all, that’s exactly how this blog started. In the dark days of .NET Core (before the first release even!), I was blogging here trying to help people running into the same issues as me.

And so, I’ve started Tutorials For Angular. I’m not going to profess to be a pro in Angular (or even JavaScript), but I’ll be sharing various tips and tricks that I’ve run into on my front end development journey. Content is a bit light at the moment, but I have a few long form articles in the pipeline on some really tricky stuff that stumped me when I got back into the Angular groove – so if that sounds like you, come on over and join in!

Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this week’s blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

Rate limiting web services and APIs is nothing new, but I would say over the past couple of years the “leaky bucket” strategy of rate limiting has risen in popularity to the point where it’s almost the de facto standard these days. The leaky bucket strategy is actually quite simple – made even easier by the fact that its name comes from a very physical, real world scenario.

What Is A Leaky Bucket Rate Limit?

Imagine a 4 litre physical bucket that has a hole in it. The hole leaks out 1 litre of water every minute. You can dump 4 litres of water into the bucket all at once just fine, but it will still flow out at 1 litre per minute until it’s empty. If you filled it immediately, then after 1 minute you could pour in 1 more litre, but you’d have to wait another minute to pour the next. Or you could wait 2 minutes and pour in 2 litres at once, etc. Swap litres of water for API calls and you have the general idea: compared to a limit like “1 call per second”, a leaky bucket allows you to “burst” through calls until the bucket is full, then wait a period of time while the bucket drains. Even while the bucket is draining you can trickle in calls if required.

I generally see it applied on APIs where completing a full operation may take 2 or 3 API calls, so limiting to 1 per second is pointless. Allowing a caller to burst through enough calls to complete their operation, then back off, is a bit more realistic.

Implementing The Client

This article is going to talk about a leaky bucket “client”. That is, I’m the one calling a web service that has a leaky bucket rate limit in place. In a subsequent article we will look at how to implement this on the server end.

Now I’m the first to admit this is unlikely to win any awards, and I typically shoot myself in the foot with threadsafe coding. But here’s my attempt at a leaky bucket client. For my purposes it worked a treat, but it was just for a small personal project, so your mileage may vary.
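It ended up looking something like this – the exact shape of BucketConfiguration and the internals here are illustrative, so adapt to taste :

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class BucketConfiguration
{
    public int MaxFill { get; set; }          // How many items the bucket can hold
    public TimeSpan LeakRate { get; set; }    // How often the bucket leaks
    public int LeakAmount { get; set; }       // How many items leak out each time
}

public class LeakyBucketClient
{
    private readonly BucketConfiguration _config;
    private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);
    private readonly ConcurrentQueue<DateTime> _bucket = new ConcurrentQueue<DateTime>();
    private Task _leakTask;

    public LeakyBucketClient(BucketConfiguration config)
    {
        _config = config;
    }

    public async Task GainAccess()
    {
        await _semaphore.WaitAsync();
        try
        {
            // On first use, kick off the background task that "leaks" the bucket.
            if (_leakTask == null)
                _leakTask = Task.Run(Leak);

            while (true)
            {
                if (_bucket.Count < _config.MaxFill)
                {
                    // Room in the bucket – record our call time and we're in.
                    _bucket.Enqueue(DateTime.UtcNow);
                    return;
                }

                // Bucket is full – wait a second and try again.
                await Task.Delay(TimeSpan.FromSeconds(1));
            }
        }
        finally
        {
            _semaphore.Release();
        }
    }

    private async Task Leak()
    {
        while (true)
        {
            await Task.Delay(_config.LeakRate);

            // Drip out LeakAmount items at a constant rate.
            for (int i = 0; i < _config.LeakAmount; i++)
                _bucket.TryDequeue(out _);
        }
    }
}
```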

There is a bit to unpack there but I’ll do my best.

  • We have a class called BucketConfiguration which specifies how full the bucket can get, and how much it leaks.
  • Our main method is called “GainAccess” and this will be called each time we want to send a request.
  • We use a SemaphoreSlim just in case this is used in a threaded scenario, so that calls queue up and we don’t get tangled up in our own mess
  • On the first call to gain access, we kick off a thread that is used to “empty” the bucket as we go.
  • Then we enter a loop. If the number of items on the queue is at the max fill, we just wait a second and try again.
  • When there is room on the queue, we push our time onto it and return, thus gaining access.

Now I’ve used a queue here but you really don’t need to; it’s just helpful for debugging which calls we are leaking etc. Really a BlockingCollection or something similar would be just as fine. Notice that we also kick off a thread to do our leaking. Because it’s done at a constant rate, we need a dedicated thread to be “dripping” out requests.

And finally, everything is async including our semaphore (If you are wondering why I didn’t just use the *lock* keyword, it can’t be used with async code). This means that we hopefully don’t jam up threads waiting to send requests. It’s not foolproof of course, but it’s better than hogging threads when we are essentially spinwaiting.

The Client In Action

I wrote a quick console application to show things in action. So for example :
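Something like this, with the bucket values picked purely for the demo :

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        // A bucket of 4, leaking 1 item every 5 seconds.
        var client = new LeakyBucketClient(new BucketConfiguration
        {
            MaxFill = 4,
            LeakRate = TimeSpan.FromSeconds(5),
            LeakAmount = 1
        });

        for (int i = 0; i < 10; i++)
        {
            await client.GainAccess();
            Console.WriteLine($"Request {i + 1} sent at {DateTime.UtcNow:HH:mm:ss}");
        }
    }
}
```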

Running this we get :
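Output along these lines (timestamps illustrative) :

```
Request 1 sent at 00:00:00
Request 2 sent at 00:00:00
Request 3 sent at 00:00:00
Request 4 sent at 00:00:00
Request 5 sent at 00:00:05
Request 6 sent at 00:00:10
Request 7 sent at 00:00:15
...
```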

Makes sense. We do our run of 4 calls immediately, but then we have to back off to doing just 1 call every 5 seconds.

That’s pretty simple, but we can also handle complex scenarios such as leaking 2 requests instead of 1 every 5 seconds. We change our leaky bucket to :
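Assuming our configuration object exposes the leak amount, that’s just a tweak to the setup :

```csharp
var client = new LeakyBucketClient(new BucketConfiguration
{
    MaxFill = 4,
    LeakRate = TimeSpan.FromSeconds(5),
    LeakAmount = 2 // Now leak 2 requests every 5 seconds instead of 1
});
```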

And what do you know! We see our burst of 4 calls, then every 5 seconds we see us drop in another 2 at once.



This article is part of a series on the SOLID design principles. You can start here or jump around using the links below!

S – Single Responsibility
O – Open/Closed Principle
L – Liskov Substitution Principle
I – Interface Segregation Principle
D – Dependency Inversion

OK, let’s just get the Wikipedia definitions out of the way to begin with :

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces).
  2. Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.

While this may sound complicated, it’s actually much easier when you are working in the code. In fact it’s probably something you already do (especially in .NET Core) without even thinking about it.

In simple terms, imagine we have a service that calls into a “repository”. Generally speaking, that repository would have an interface and (especially in .NET Core) we would use dependency injection to inject that repository into the service, however the service is still working off the interface (an abstraction), instead of the concrete implementation. On top of this, the service itself doesn’t know anything about how the repository actually gets the data. The repository could be calling a SQL Server, Azure Table Storage, a file on disk etc – but to the service, it really doesn’t matter. It just knows that it can depend on the abstraction to get the data it needs, and not worry about any dependencies the repository may have.

Dependency Inversion vs Dependency Injection

Very commonly, people mix up dependency injection and dependency inversion. I know many years ago I was asked in an interview to list out SOLID, and I gave Dependency Injection as the “D”.

Broadly speaking, Dependency Injection is a way to achieve Dependency Inversion – a tool to achieve the principle. While Dependency Inversion simply states that you should depend on abstractions, and that higher level modules should not worry about the dependencies of lower level modules, Dependency Injection is a way to achieve that by injecting those dependencies.

Another way to think about it would be that you could achieve Dependency Inversion by simply making liberal use of the Factory Pattern to create lower level modules (Thus abstracting away any dependencies they have), and have them return interfaces (Thus the higher level module depends on abstractions and doesn’t even know what the concrete class behind the interface is).

Even using a Service Locator pattern – which arguably some might say is dependency injection, could be classed as dependency inversion because you aren’t worrying about how the lower level modules are created, you just call this service locator thingie and magically you get a nice abstraction.

Dependency Inversion In Practice

Let’s take a look at an example that doesn’t really exhibit good Dependency Inversion behaviour.
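Something like the following – Person, the connection string, and the repository internals are all just placeholders :

```csharp
using System.Collections.Generic;

class Person
{
    public string Name { get; set; }
}

class PersonRepository
{
    public PersonRepository(string connectionString, int connectionTimeout)
    {
        // Sets up the SQL Server connection...
    }

    public void ConnectToDatabase()
    {
        // Opens the SQL connection...
    }

    public List<Person> GetAllPeople()
    {
        // Runs the query and maps the results...
        return new List<Person>();
    }
}

class MyService
{
    private readonly PersonRepository _personRepository;

    public MyService()
    {
        // The service knows it's SQL Server, knows the connection string, the timeout...
        _personRepository = new PersonRepository("Server=myServer;Database=myDb;", 30);
    }

    public List<Person> GetAllPeople()
    {
        // ...and knows it must connect before it can query.
        _personRepository.ConnectToDatabase();
        return _personRepository.GetAllPeople();
    }
}
```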

The problem with this code is that MyService very heavily relies on concrete implementation details of the PersonRepository. For example, it’s required to pass it a connection string (so we are leaking out that it’s a SQL Server) and a connection timeout. The PersonRepository exposing a “ConnectToDatabase” method by itself is not terribly bad, but if we’re required to call “ConnectToDatabase” before we can actually call the “GetAllPeople” method, then again we haven’t managed to abstract away the implementation detail of this specifically being a SQL Server repository.

So let’s do some simple things to clean it up :
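Something along these lines (implementation bodies trimmed for brevity) :

```csharp
using System.Collections.Generic;

interface IPersonRepository
{
    List<Person> GetAllPeople();
}

class PersonRepository : IPersonRepository
{
    public PersonRepository(string connectionString, int connectionTimeout) { /* ... */ }

    // No longer public – pre-requisites are handled internally.
    private void ConnectToDatabase() { /* ... */ }

    public List<Person> GetAllPeople()
    {
        ConnectToDatabase();
        // Run the query...
        return new List<Person>();
    }
}

class MyService
{
    private readonly IPersonRepository _personRepository;

    // The repository is injected – the service only ever sees the abstraction.
    public MyService(IPersonRepository personRepository)
    {
        _personRepository = personRepository;
    }

    public List<Person> GetAllPeople() => _personRepository.GetAllPeople();
}
```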

Very simple. I’ve created an interface called IPersonRepository that shields away implementation details. I’ve taken that repository and (not shown) used dependency injection to inject it into my service. This way my service doesn’t need to worry about connection strings or other constructor requirements. I also removed the “ConnectToDatabase” method from being public, the reason being my service shouldn’t worry about pre-requisites to get data. All it needs to know is that it calls “GetAllPeople” and it gets people.

Switching To A Factory Pattern

While writing this post, I realized that saying “Yeah so I use this magic thing called Dependency Injection” and it works isn’t that helpful. So let’s quickly write up a factory instead.
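A very naive factory might look like this – the hardcoded details would more likely come from configuration, but the point is they live in the factory, not the service :

```csharp
using System.Collections.Generic;

static class PersonRepositoryFactory
{
    public static IPersonRepository Create()
    {
        // The factory owns the implementation details (connection string, timeout etc.)
        return new PersonRepository("Server=myServer;Database=myDb;", 30);
    }
}

class MyService
{
    private readonly IPersonRepository _personRepository;

    public MyService()
    {
        // Still no knowledge of connection strings – just the abstraction.
        _personRepository = PersonRepositoryFactory.Create();
    }

    public List<Person> GetAllPeople() => _personRepository.GetAllPeople();
}
```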

Obviously not as nice as using Dependency Injection, and there are a few different ways to cut this, but the main points of dependency inversion are still there. Notice that still, our service does not have to worry about implementation details like connection strings and it still depends on the interface abstraction.

What’s Next

That’s it! You’ve made it to the end of the series on SOLID principles in C#/.NET. Now go out and nail that job interview!


This article is part of a series on the SOLID design principles. You can start here or jump around using the links below!

S – Single Responsibility
O – Open/Closed Principle
L – Liskov Substitution Principle
I – Interface Segregation Principle
D – Dependency Inversion

The interface segregation principle can be a bit subjective at times, but the most common definition you will find out there is :

 No client should be forced to depend on methods it does not use

In simple terms, if you implement an interface in C# and have to throw NotImplementedExceptions you are probably doing something wrong.

While the above is the “definition”, what you will generally see this principle described as is “create multiple smaller interfaces instead of one large interface”. This is still correct, but that’s more of a means to achieve Interface Segregation rather than the principle itself.

Interface Segregation Principle In Practice

Let’s look at an example. Imagine I have an interface called IRepository that we use to read and write from a database. For some data however, we are reading from a local XML file. This file is uneditable, so we can’t write to it (it’s essentially readonly).

We still want to share an interface however because at some point we will move the data in the XML file to a database table, so it would be great if at that point we could just swap the XML repository for a database repository.

Our code is going to look a bit like this :
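Something along these lines, with Person standing in for whatever we are storing :

```csharp
using System;
using System.Collections.Generic;

interface IRepository
{
    IEnumerable<Person> GetAll();
    void Insert(Person person);
}

class DatabaseRepository : IRepository
{
    public IEnumerable<Person> GetAll() { /* read from the database */ return new List<Person>(); }
    public void Insert(Person person) { /* write to the database */ }
}

class XmlFileRepository : IRepository
{
    public IEnumerable<Person> GetAll() { /* read from the XML file */ return new List<Person>(); }

    public void Insert(Person person)
    {
        // We can't write to the file, so we're stuck throwing.
        throw new NotImplementedException();
    }
}
```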

Makes sense. But as we can see, because our XmlFileRepository doesn’t allow writing, we have to throw an exception (an alternative would be to just have an empty method, which is probably even worse). Clearly we are violating the Interface Segregation Principle because we are implementing an interface whose methods we don’t actually implement. How could we implement this more efficiently?
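One option is to split the reading side into its own interface – something like this :

```csharp
using System.Collections.Generic;

interface IReadOnlyRepository
{
    IEnumerable<Person> GetAll();
}

// A full repository is "a read only repository that can also write".
interface IRepository : IReadOnlyRepository
{
    void Insert(Person person);
}

class XmlFileRepository : IReadOnlyRepository
{
    public IEnumerable<Person> GetAll() { /* read from the XML file */ return new List<Person>(); }
}

class DatabaseRepository : IRepository
{
    public IEnumerable<Person> GetAll() { /* read from the database */ return new List<Person>(); }
    public void Insert(Person person) { /* write to the database */ }
}
```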

In this example we have moved the reading portion of a repository to its own interface, and let the IRepository interface inherit from it. This means that any IRepository can also be cast and made into an IReadOnlyRepository. So if we swap out the XmlFileRepository for a DatabaseRepository, everything will run as per normal.

We can even inject in the DatabaseRepository into classes that are looking for IReadOnlyRepository. Very nice!

Avoiding Inheritance Traps

Some people prefer to not use interface inheritance as this can get messy fast, and instead have your concrete class implement multiple smaller interfaces :
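For example (IWriteRepository is just an illustrative name for the writing side) :

```csharp
using System.Collections.Generic;

interface IReadOnlyRepository
{
    IEnumerable<Person> GetAll();
}

interface IWriteRepository
{
    void Insert(Person person);
}

// No interface inheritance – the concrete class just implements both.
class DatabaseRepository : IReadOnlyRepository, IWriteRepository
{
    public IEnumerable<Person> GetAll() { /* ... */ return new List<Person>(); }
    public void Insert(Person person) { /* ... */ }
}
```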

This means you don’t get tangled up with a long line of inheritance, but it does have issues when you have a caller that wants to both read *and* write :
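For example, assuming we’ve split things into IReadOnlyRepository and IWriteRepository as above :

```csharp
class PersonService
{
    private readonly IReadOnlyRepository _readRepository;
    private readonly IWriteRepository _writeRepository;

    // We now need both interfaces injected – quite possibly the same object twice!
    public PersonService(IReadOnlyRepository readRepository, IWriteRepository writeRepository)
    {
        _readRepository = readRepository;
        _writeRepository = writeRepository;
    }
}
```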

Ultimately it’s up to you but it’s just important to remember that these are just a means to an end. As long as a class is not forced to implement a method it doesn’t care about, then you are abiding by the Interface Segregation Principle.

What’s Next

Next is our final article in this series, the D in SOLID. Dependency Inversion.


This article is part of a series on the SOLID design principles. You can start here or jump around using the links below!

S – Single Responsibility
O – Open/Closed Principle
L – Liskov Substitution Principle
I – Interface Segregation Principle
D – Dependency Inversion

What Is The Liskov Principle?

The Liskov Principle has a simple definition, but a hard explanation. First, the definition :

Types can be replaced by their subtypes without altering the desirable properties of the program.

So basically if I have something like this :
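Something like this (MyType and MySubType are deliberately bare) :

```csharp
class MyType { }

class MySubType : MyType { }

class MyService
{
    private readonly MyType _myType;

    // Happily accepts a MyType or a MySubType.
    public MyService(MyType myType)
    {
        _myType = myType;
    }
}
```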

If in the future I decide that MyService should depend on MySubType instead of MyType, theoretically I shouldn’t alter “the desirable properties of the program”. I put that in quotes because what does that actually mean? A large part of inheritance is extending functionality and therefore by definition it will alter the behaviour of the program in some way.

That last part might be controversial, because I’ve seen plenty of examples of Liskov that state that overriding a method in a subclass (which is pretty common) is a violation of the principle. But I feel like in reality that’s just going to happen and isn’t a design flaw. For example :
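Say a bog standard override that changes what a method returns :

```csharp
class MyType
{
    public virtual string MyMethod()
    {
        return "Hello from MyType";
    }
}

class MySubType : MyType
{
    // Same signature, different behaviour.
    public override string MyMethod()
    {
        return "Hello from MySubType";
    }
}
```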

Is this a violation? It could be because substituting the subtype may not “break” functionality, but it certainly changes how the program functions and could lead to unintended consequences. Some would argue that Liskov is all about behavioural traits, which we will look at later, but for now let’s look at some “hard” breaks. e.g. Things that are black and white wrong.

Liskov On Method Signatures

The thing is, it’s actually very hard to violate Liskov in C#, and that’s why you might find that examples out there are very contrived. Indeed, for the black and white rules that we will talk about below, you’ll notice that they are things you really need to go out of your way (e.g. ignore compiler warnings) to break. (And don’t worry, I’ll show you how!)

Our black and white rules – the ones no one will argue with you about on Stack Overflow – are….

A method of a subclass can accept a parent type as a parameter (Contravariance)

The “idea” of this is that a subclass can override a method and accept a more general parameter, but not the opposite. So for example in theory Liskov allows for something like this :
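Something like this, with the argument types made up for illustration :

```csharp
class MyBaseArgument { }
class MyArgument : MyBaseArgument { }

class MyType
{
    public virtual void MyMethod(MyArgument input) { }
}

class MySubType : MyType
{
    // Liskov says we may accept the more general MyBaseArgument here...
    // ...but this will not compile in C# – override signatures must match exactly.
    public override void MyMethod(MyBaseArgument input) { }
}
```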

But actually if you try this, it won’t compile, because in C# an overriding method’s signature must exactly match the original. In fact if we remove the override keyword it just acts as a method overload, and both methods would be available to someone calling the MySubType class.

A method of a subclass can return a sub type as its return value (Covariance)

So almost the opposite of the above, when we return a type from a method in a subclass, Liskov allows it to be “more” specific than the original return type. So again, theoretically this should be OK :
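Something like this, with MyReturnType standing in for any type more specific than object :

```csharp
class MyReturnType { }

class MyType
{
    public virtual object MyMethod()
    {
        return new object();
    }
}

class MySubType : MyType
{
    // A more specific return type. Note there is no override keyword here –
    // changing the return type on an override simply won't compile.
    public MyReturnType MyMethod()
    {
        return new MyReturnType();
    }
}
```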

This actually compiles but we do get a warning :
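Something along these lines :

```
warning CS0114: 'MySubType.MyMethod()' hides inherited member 'MyType.MyMethod()'.
To make the current member override that implementation, add the override keyword.
Otherwise add the new keyword.
```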

Yikes. We can get rid of this error message by following the instruction and changing our SubType to the following :
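That is, explicitly hiding the method with the new keyword :

```csharp
class MySubType : MyType
{
    // "new" tells the compiler we are deliberately hiding MyType.MyMethod().
    public new MyReturnType MyMethod()
    {
        return new MyReturnType();
    }
}
```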

Pretty nasty. I can honestly say in over a decade of using C#, I have never used the new keyword to hide a method like this. Never. And I feel like if you are having to do this, you should stop and think if there is another way to do it.

But the obvious issue now is that anything that was expecting an object back is now getting this weird “MyReturnType” back. Exceptions galore will soon follow. This is why it makes it to the hard rules – it’s highly likely your code will not compile when doing this.

Liskov On Method Behaviours

Let’s jump into the murky waters of how changing the behaviour of a method might violate Liskov. As mentioned earlier, I find this a hard sell, because the intention of overriding a method in C# (when the original is marked as virtual) is that you want to change its behaviour. It’s not that C# has been designed as an infallible language, but there are pros to inheritance.

Our behavioural rules are :

Exceptions that would not be thrown normally in the parent type can’t then be thrown in the sub type

To me this depends on your coding style but does make sense in some ways. Consider the following code :
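Something like this – the validation logic is purely illustrative :

```csharp
using System;

class MyType
{
    public virtual void ProcessName(string name)
    {
        // The parent only ever throws ArgumentException.
        if (name.Length > 100)
            throw new ArgumentException("Name is too long", nameof(name));
        // Process the name...
    }
}

class MySubType : MyType
{
    public override void ProcessName(string name)
    {
        // A new exception type the parent never threw.
        if (name == null)
            throw new NullReferenceException();

        base.ProcessName(name);
    }
}
```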

If you have been using the original type for some time, and you are following the “Gotta catch ’em all” strategy of exceptions and trying to catch each and every exception, then you may be intending to catch the ArgumentException. But now when you switch to the subtype, it’s throwing a NullReferenceException which you weren’t expecting.

Makes sense, but the reality is that if you are overriding a method to change the behaviour in some way it’s an almost certainty that new exceptions will occur. I’m not particularly big on this rule, but I can see why it exists.

Pre-Conditions cannot be strengthened and Post-Conditions cannot be weakened in the sub type

Let’s look at pre-conditions first. The idea behind this is that if you were previously able to call a method with a particular input parameter, new “rules” should not be in place that now rejects those parameters. A trivial example might be :
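Something like the following :

```csharp
using System;
using System.Collections.Generic;

class MyType
{
    protected readonly List<string> Names = new List<string>();

    public virtual void AddName(string name)
    {
        // No restrictions – nulls are fine.
        Names.Add(name);
    }
}

class MySubType : MyType
{
    public override void AddName(string name)
    {
        // A strengthened pre-condition: callers that used to pass null now blow up.
        if (name == null)
            throw new ArgumentNullException(nameof(name));

        base.AddName(name);
    }
}
```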

Where previously there was no restriction on null values, there now is.

Conversely, we shouldn’t weaken the ruleset for returning objects. For example :
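Along the same lines :

```csharp
using System.Collections.Generic;

class MyType
{
    protected readonly List<string> Names = new List<string>();

    public virtual List<string> GetNames()
    {
        // Never null – at worst an empty list.
        return Names;
    }
}

class MySubType : MyType
{
    public override List<string> GetNames()
    {
        // A weakened post-condition: callers may now get null back.
        return Names.Count == 0 ? null : Names;
    }
}
```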

We could previously rely on the return object never being null, but now we may be returned a null object.

Similar to the exceptions rule, it makes sense that when extending behaviour we might have no choice but to change how we handle inputs and outputs. But unlike the exceptions rule, I really like this rule and try and abide by it.

Limiting General Behaviour Changes

I just want to quickly show another example that possibly violates the Liskov principle, but is up to interpretation. Take a look at the following code :
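A contrived Jar example :

```csharp
using System;

class Jar
{
    public virtual void Pour()
    {
        Console.WriteLine("Pouring!");
    }
}

class JarWithLid : Jar
{
    public bool IsOpen { get; private set; }

    public void Open() => IsOpen = true;

    public override void Pour()
    {
        // Callers must now call Open() first – a behavioural change they won't expect.
        if (!IsOpen)
            throw new InvalidOperationException("The jar is closed");

        Console.WriteLine("Pouring!");
    }
}
```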

A somewhat trivial example but if someone is depending on the class of “Jar” and calling Pour, everything is going fine. But if they then switch to using JarWithLid, their code will no longer function as intended because they are required to open the Jar first.

In some ways, this is covered by the “pre-conditions cannot be strengthened” rule. It’s clearly adding a new pre-condition that the jar must be opened before it can be poured. But on the other hand, in a more complex example a method may be hundreds of lines of code and have various new behaviours that affect how the caller might interact with the object. We might classify some of them as “pre-conditions”, but we may also just classify them as behavioural changes that are in line with typical inheritance scenarios.

Overall, I feel like the Liskov principle should be about limiting “breaking” changes when swapping types for subtypes, whether that “breaking” change is an outright compiler error, a logic issue, or an unexpected behavioural change.

What’s Next?

Up next is the I in SOLID. That is, The Interface Segregation Principle.


This article is part of a series on the SOLID design principles. You can start here or jump around using the links below!

S – Single Responsibility
O – Open/Closed Principle
L – Liskov Substitution Principle
I – Interface Segregation Principle
D – Dependency Inversion

What Is The Open/Closed Principle?

Like our post on Single Responsibility, let’s start off with the Wikipedia definition of the Open/Closed Principle :

Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification

Probably not that helpful to be honest. It’s sort of an open ended statement that could be taken to mean different things for different developers.

In looking at resources for this article, I actually found many write ups talking about how being able to substitute one implementation of an interface for another was an example of being Open/Closed, and indeed Wikipedia touches on this :

During the 1990s, the open/closed principle became popularly redefined to refer to the use of abstracted interfaces, where the implementations can be changed and multiple implementations could be created and polymorphically substituted for each other.

I’m not the right person to start redefining decades of research into software design, but I personally don’t like this definition as I feel like it very closely resembles the Liskov Principle (Which we will talk about in our next article in this series). Instead I’ll describe it more loosely as :

Can I extend this code and add more functionality without modifying the original code in any way.

I’m going to take this explanation and run with it.

Open/Closed Principle In Practice

Let’s take our code example that we finished with from the Single Responsibility article. It ended up looking like this :
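Reproducing it here in a minimal form, with the database and email details tucked behind interfaces (implementations omitted) :

```csharp
using System;

interface IUserRepository
{
    void InsertUser(string username);
}

interface IEmailService
{
    void SendWelcomeEmail(string username);
}

class RegisterService
{
    private readonly IUserRepository _userRepository;
    private readonly IEmailService _emailService;

    public RegisterService(IUserRepository userRepository, IEmailService emailService)
    {
        _userRepository = userRepository;
        _emailService = emailService;
    }

    public void RegisterUser(string username)
    {
        // The intentionally hardcoded admin check from the previous article.
        if (username == "admin")
            throw new InvalidOperationException();

        _userRepository.InsertUser(username);
        _emailService.SendWelcomeEmail(username);
    }
}
```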

Looks OK. But what happens if we want to also check other usernames and reject the registration if they fail? We might end up with a statement like so :
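Something like this (the particular usernames are illustrative) :

```csharp
public void RegisterUser(string username)
{
    if (username == "admin")
        throw new InvalidOperationException();

    if (username == "administrator")
        throw new InvalidOperationException();

    if (username == "root")
        throw new InvalidOperationException();

    // ...and so on, every time we think of a new username to block.

    _userRepository.InsertUser(username);
    _emailService.SendWelcomeEmail(username);
}
```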

Does this just continue on forever? Each time we are actually modifying the logic of our app, and we are probably going to have to regression test that past use cases are still valid every time we edit this method.

So is there a way to refactor this class so that we are able to add functionality (Add blacklisted usernames for example), but not have to modify the logic every time?

How about this! We create an interface that provides a list of Blacklisted Usernames :
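Something like this – the hardcoded list here could just as easily come from configuration or a database :

```csharp
using System.Collections.Generic;

interface IBlacklistedUsernames
{
    IEnumerable<string> GetBlacklistedUsernames();
}

class BlacklistedUsernames : IBlacklistedUsernames
{
    public IEnumerable<string> GetBlacklistedUsernames()
        => new[] { "admin", "administrator", "root" };
}
```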

Then we inject this into our RegisterService class and modify our register method to simply check against this list.
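Roughly like so :

```csharp
using System;
using System.Linq;

class RegisterService
{
    private readonly IBlacklistedUsernames _blacklistedUsernames;
    private readonly IUserRepository _userRepository;
    private readonly IEmailService _emailService;

    public RegisterService(IBlacklistedUsernames blacklistedUsernames,
                           IUserRepository userRepository,
                           IEmailService emailService)
    {
        _blacklistedUsernames = blacklistedUsernames;
        _userRepository = userRepository;
        _emailService = emailService;
    }

    public void RegisterUser(string username)
    {
        // No more hardcoded names – extend the list and this method never changes.
        if (_blacklistedUsernames.GetBlacklistedUsernames().Contains(username))
            throw new InvalidOperationException();

        _userRepository.InsertUser(username);
        _emailService.SendWelcomeEmail(username);
    }
}
```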

Better right? Not only have we removed hardcoded strings from our code (which is obviously beneficial), but now when we extend that list of usernames we don’t want to allow, our code doesn’t change at all.

This is the Open/Closed Principle.

Extending Our Example

The thing is, we’ve taken a list of hardcoded strings and made them be more configuration driven. I mean yeah, we are demonstrating the principle correctly, but we are probably also just demonstrating that we aren’t a junior developer. Let’s try and take our example that one step further to make it a bit more “real world”.

What if instead of just checking against a blacklisted set of usernames, we have a set of “pre-registration checks”. These could grow over time and we don’t want to have a huge if/else statement littered through our method. For example changing our code to look like this would be a bad idea :
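Something like this (the individual checks are illustrative) :

```csharp
public void RegisterUser(string username)
{
    if (_blacklistedUsernames.GetBlacklistedUsernames().Contains(username))
        throw new InvalidOperationException();

    if (username.Length < 4)
        throw new InvalidOperationException();

    if (username.Contains(" "))
        throw new InvalidOperationException();

    // ...and on and on it goes.

    _userRepository.InsertUser(username);
    _emailService.SendWelcomeEmail(username);
}
```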

How many statements are we going to add like this? And each time we are subtly changing the logic of this method, which requires even more regression testing.

How about this instead, let’s create an interface for registration pre checks and implement it like so :
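The individual checks here are illustrative, but the shape is :

```csharp
using System.Linq;

interface IRegistrationPreCheck
{
    // Returns true if the username passes this check.
    bool Validate(string username);
}

class BlacklistedUsernamesCheck : IRegistrationPreCheck
{
    public bool Validate(string username)
        => !new[] { "admin", "administrator", "root" }.Contains(username);
}

class UsernameLengthCheck : IRegistrationPreCheck
{
    public bool Validate(string username) => username.Length >= 4;
}
```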

Now we can modify our RegisterService class to instead take a list of pre checks and simply check all of them in our RegisterUser method :
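Something along these lines :

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class RegisterService
{
    private readonly IEnumerable<IRegistrationPreCheck> _preChecks;
    private readonly IUserRepository _userRepository;
    private readonly IEmailService _emailService;

    public RegisterService(IEnumerable<IRegistrationPreCheck> preChecks,
                           IUserRepository userRepository,
                           IEmailService emailService)
    {
        _preChecks = preChecks;
        _userRepository = userRepository;
        _emailService = emailService;
    }

    public void RegisterUser(string username)
    {
        // Run every pre-check – adding a new one means adding to the list,
        // not editing this method.
        if (_preChecks.Any(check => !check.Validate(username)))
            throw new InvalidOperationException();

        _userRepository.InsertUser(username);
        _emailService.SendWelcomeEmail(username);
    }
}
```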

Now if we add another pre-check, instead of changing the logic of our RegisterUser method, we just add it to the list of pre-checks passed into the constructor and there is no modification necessary. This is probably a better example of the Open/Closed Principle that demonstrates the ability to add functionality to a class that doesn’t require changes to the existing logic.

What’s Next?

Up next is the L in SOLID. That is, The Liskov Principle.


This article is part of a series on the SOLID design principles. You can start here or jump around using the links below!

S – Single Responsibility
O – Open/Closed Principle
L – Liskov Substitution Principle
I – Interface Segregation Principle
D – Dependency Inversion

What Is SOLID?

SOLID is a set of 5 design principles in software engineering intended to make software development more flexible, easier to maintain, and easier to understand. While sometimes seen as a “read up on this before my job interview” type of topic, SOLID actually has some great fundamentals for software design that I think are applicable at all levels. For example, I commonly tell my Intermediate developers that when doing a code review, don’t think “How would I implement this?” but instead “Does this follow SOLID principles?”. Not that things are ever as black and white as that, but it forms a good starting point for looking at software design objectively.

Like all “patterns” and “principles” type articles, nothing should be taken as “do this or else”. But hopefully throughout this series we can take a look at the SOLID principles at a high level, and you can then take that and see how it might apply to your day to day programming life. First up is the “Single Responsibility Principle”.

What Is Single Responsibility?

I remember reading a definition of Single Responsibility as being :

A class should have only one reason to change

And all I could think was… And what is that reason? Was the quote cut off? Was there more to this?

And I don’t think I’m alone. I feel like this definition is a somewhat backwards way of saying what we probably all intuitively think of when we see “Single Responsibility”. But once we explore the full picture, it’s actually quite a succinct way of describing the principle. Let’s dive right in.

Single Responsibility In Practice

If we can, let’s ignore the above definition for a moment and look at how the single responsibility principle would actually work in practice.

Say I have a class that registers a new user on my website. The service might look a bit like so :
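Something like this – the connection string and email details are placeholders :

```csharp
using System;
using System.Data.SqlClient;
using System.Net.Mail;

class RegisterService
{
    public void RegisterUser(string username)
    {
        if (username == "admin")
            throw new InvalidOperationException();

        // Open a database connection and insert the user directly.
        using (var connection = new SqlConnection("Server=myServer;Database=myDb;"))
        {
            connection.Open();
            var command = new SqlCommand(
                "INSERT INTO Users (Username) VALUES (@username)", connection);
            command.Parameters.AddWithValue("@username", username);
            command.ExecuteNonQuery();
        }

        // Connect directly to an SMTP server and send a welcome email.
        using (var smtpClient = new SmtpClient("smtp.myserver.com"))
        {
            smtpClient.Send("noreply@myserver.com", username, "Welcome!", "Welcome to my site!");
        }
    }
}
```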

Pretty simple :

  • Check to see if the user is an admin, and if so throw an exception.
  • Open a connection to the database and insert that user into it.
  • Send a welcome email.

How does this violate the single responsibility principle? Well while theoretically this is doing “one” thing in that it registers a user, it’s also handling various parts of that process at a low level.

  • It’s controlling how we connect to a database, opening a database connection, and the SQL statement that goes along with it.
  • It’s controlling how the email is sent. Specifically that we are using a direct connection to an SMTP server and not using a third party API or similar to send an email.

It can become a bit of a “splitting hairs” type argument and I’ve intentionally left the logic statement for checking the username in there as a bit of a red herring. I mean theoretically you could get to the point where you go One Class -> One Method -> One Line. So you have to use your judgement and in my opinion, the admin check there is fine (But feel free to disagree!).

Another way to look at it is the saying “Are we doing one thing only, and doing that one thing well”. Well we are directing the flow of registering a user well, but we are also doing a bunch of database/SMTP work mixed in with it that frankly we probably aren’t doing “well”.

A Class Should Have Only One Reason To Change

Let’s get back to our original definition.

A class should have only one reason to change

If we take the above code, what things could come up that would cause us to have to rewrite/change this code?

  • The flow of registering a user changes
  • The restricted usernames change
  • The database schema/connection details change (Connection String, Table Schema, Statement changes to a Stored Proc etc)
  • We stop using a SQL database all together for storing users and instead use a hosted solution (Auth0, Azure B2C etc)
  • Our SMTP/Email details change (From Email, Subject, Email contents, SMTP server etc)
  • We use a third party API to send emails (Sendgrid for example)

So as we can see, there are things that could change that have absolutely nothing to do with registering a user. If we move to a different method of sending emails, are we really going to edit the “RegisterUser” method? Doesn’t make sense, right?

While the statement of having only “one” reason to change might be extreme, we should work towards abstracting code so that changes unrelated to our core responsibility of registering users require minimal change to our code. The only reason our class should change is because the way in which users register changes. Changes to how we send emails, or which database we decide to use, shouldn’t change the RegisterService at all.

With that in mind, what we would typically end up with is something like so :
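Roughly this, with the database and email work pushed out behind interfaces (implementations omitted) :

```csharp
using System;

interface IUserRepository
{
    void InsertUser(string username);
}

interface IEmailService
{
    void SendWelcomeEmail(string username);
}

class RegisterService
{
    private readonly IUserRepository _userRepository;
    private readonly IEmailService _emailService;

    public RegisterService(IUserRepository userRepository, IEmailService emailService)
    {
        _userRepository = userRepository;
        _emailService = emailService;
    }

    public void RegisterUser(string username)
    {
        // The red herring admin check stays – it's part of the registration flow.
        if (username == "admin")
            throw new InvalidOperationException();

        _userRepository.InsertUser(username);
        _emailService.SendWelcomeEmail(username);
    }
}
```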

You probably already have code like this so it’s not going to be ground breaking stuff. But having split out code into separate classes, we are now on our way to abiding by the statement “A class should only have one reason to change”.

What Does Single Responsibility Give Us?

It’s all well and good throwing out names of patterns and principles, but what is the actual effect of Single Responsibility?

  • Code being broken apart with each class having a single purpose makes code easier to maintain, easier to read, and easier to modify.
  • Unit testing code suddenly becomes easier when a class is trying to only do one thing.
  • A side effect of this is that code reusability jumps up, as you are splitting code into succinct blocks that other classes can make use of.
  • The complexity of a single class often goes drastically down because you only have to worry about “one” piece of logic (For example how a user is registered, rather than also worrying about how an email is sent).

Simply put, splitting code up into smaller chunks where each piece just focuses on the one thing it’s supposed to be doing makes a developer’s life a hell of a lot easier.

What’s Next?

Up next is the O in SOLID. That is, The Open/Closed Principle.


Similar to the Singleton and Mediator Patterns, the Factory Pattern is part of the “Gang Of Four” design patterns that basically became the bible of patterns in OOP.

The Factory Pattern is a type of “Creational Pattern” that deals with the problem of creating an object when you aren’t quite sure of the “requirements” to create said object. That’s probably a little bit of a confusing way to explain it. But in general, think of it like a class whose sole purpose, when called, is to create an object and return it to you, without you needing to know “how” the creation of that object actually happens.

The Factory Pattern In C#

Let’s dive right in. First, let’s look at a plain C# example in code :
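A stripped-down version in the spirit of the example might look like the following (the Wheels property is just there to give the classes some substance) :

```csharp
using System;

public enum VehicleType
{
    Car,
    Motorbike
}

public interface IVehicle
{
    int Wheels { get; }
}

public class Car : IVehicle
{
    public int Wheels => 4;
}

public class Motorbike : IVehicle
{
    public int Wheels => 2;
}

public class VehicleFactory
{
    // Callers hand us a VehicleType and get back "the right" IVehicle,
    // without knowing anything about how it was constructed.
    public IVehicle Create(VehicleType vehicleType)
    {
        switch (vehicleType)
        {
            case VehicleType.Car:
                return new Car();
            case VehicleType.Motorbike:
                return new Motorbike();
            default:
                throw new ArgumentOutOfRangeException(nameof(vehicleType));
        }
    }
}
```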

Let’s walk through bit by bit.

First, we have an interface called IVehicle which is implemented by both the Car and Motorbike classes. We also have a vehicle enum type which we later use in our factory.

Next comes our actual factory. This has a single method called “Create” on it. I personally prefer my factories to only ever have a single method called “Create” just to keep them simple. It takes in a vehicle type and spits out an instance of a vehicle.

Now the point here is that the caller to the factory doesn’t need to know “how” we are creating these objects, just that it can call them and it will get back the “right” one. It doesn’t need to know that it can simply call “new Car()” to get a car. Maybe the creation of a car is actually more complex than that, but it lets the factory work it out. Let’s illustrate that further…

Illustrating The Abstraction

Let’s take our above example and imagine we add another vehicle type. But it’s a complex one. In fact, we are adding a “Quad Bike”.

Now we actually want our QuadBike to still return a MotorBike object, just with a few changes. For that, we change our Motorbike object to the following :
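A sketch of that change (repeating the IVehicle interface so the snippet stands on its own) :

```csharp
public interface IVehicle
{
    int Wheels { get; }
}

public class Motorbike : IVehicle
{
    private readonly bool _isQuadBike;

    // Creation is now more involved: callers must say what kind of bike they want.
    public Motorbike(bool isQuadBike)
    {
        _isQuadBike = isQuadBike;
    }

    public int Wheels => _isQuadBike ? 4 : 2;
}
```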

Now imagine if everyone who was already creating motorbikes now needed to change their creation to pass in false. But with our handy factory…
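Assuming the VehicleType enum has gained a QuadBike member, the factory might now look something like this (again, a sketch rather than the one true implementation) :

```csharp
public class VehicleFactory
{
    public IVehicle Create(VehicleType vehicleType)
    {
        switch (vehicleType)
        {
            case VehicleType.Car:
                return new Car();
            case VehicleType.Motorbike:
                // Existing callers are untouched - they still get a regular bike.
                return new Motorbike(isQuadBike: false);
            case VehicleType.QuadBike:
                // A quad bike is "just" a motorbike constructed differently.
                return new Motorbike(isQuadBike: true);
            default:
                throw new ArgumentOutOfRangeException(nameof(vehicleType));
        }
    }
}
```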

So we’ve actually completely changed how the creation of an object happens, but because it’s abstracted away behind a factory, we don’t need to worry about any of it (Well… except to change the factory itself). That’s essentially the factory in a nutshell. Let’s look at how .NET Core makes use of this pattern in a different way.

Factories In The .NET Core Service Collection

You may be here because you’ve seen the intellisense for “implementationFactory” popup when using .NET Core’s inbuilt dependency injection :

So in this context what does factory mean? Actually… The exact same thing as above.

Let’s say I have a service that has a simple constructor that takes one parameter. This is pretty common when you are using a third party library and don’t have much control over modifying a constructor to your specific needs.  Let’s say the service looks like so :
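Something like this, where MyService and its someFlag parameter are stand-ins for whatever the third party library actually exposes :

```csharp
public class MyService
{
    private readonly bool _someFlag;

    // The only constructor available - there is no parameterless option.
    public MyService(bool someFlag)
    {
        _someFlag = someFlag;
    }
}
```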

If we bound this service in our service collection like so :
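That is, the standard registration :

```csharp
// Inside ConfigureServices. This registration will blow up at resolve time
// because the container has no way to supply the bool constructor parameter.
services.AddTransient<MyService>();
```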

That’s not gonna fly. The only constructor available has a boolean parameter, and there are no parameterless constructors to use. So we have two options, we can use some janky injection for the boolean value or we can create a factory to wrap the constructor.

Similar to the above, we can actually create a real factory that has a Create method that returns our object :
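For example (again using our hypothetical MyService from above) :

```csharp
public class MyServiceFactory
{
    public MyService Create()
    {
        // The factory is the one place that knows how to call the constructor.
        return new MyService(someFlag: true);
    }
}
```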

And we bind it like so :
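One way to wire it up (this assumes a using for Microsoft.Extensions.DependencyInjection is in scope) :

```csharp
// Register the factory itself, then use it to build MyService on demand.
services.AddTransient<MyServiceFactory>();
services.AddTransient<MyService>(provider => provider.GetRequiredService<MyServiceFactory>().Create());
```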

Now whenever MyService is requested, our factory method will run. If creating a full on factory is a bit much, you can just create the Func on the fly in your AddTransient call :
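For example :

```csharp
// Skip the factory class entirely and inline the creation logic.
services.AddTransient<MyService>(provider => new MyService(someFlag: true));
```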

The use case for a “factory” here can be a little different because you might have an actual need to constantly create the object using a custom method at runtime and that’s why you are binding a custom method to create your object, compared to the “pattern” of abstracting away your constructor, but both can be called the “Factory Pattern”.

When Should I Use The Factory Pattern?

Dependency Injection/Inversion has at times taken the wind out of the sails of the factory pattern. Loading up a constructor with dependencies isn’t such a big deal because the caller is typically depending on an interface, and dependency injection is taking care of the rest.

However, I do typically end up using a factory when the creation of an object depends on runtime variables/state, and I want to contain that creation logic in a single place. As long as you know what the factory pattern is and don’t try to “force” it anywhere it shouldn’t be, you will find it comes in useful a couple of times in every .NET project.


I recently made an appearance on the Adventures In .NET podcast, which is run by the same folks you may know from the Javascript Jabber podcast (as well as a tonne of other really great programming podcasts!).

I was a bit of a last minute guest so a whole range of topics are discussed all the way from how I got into programming, how this blog started, and the things I’m most excited about in the .NET Core 3.0 release. You can check out the podcast from :

Or just use the player below!


This article is part of a series on creating Windows Services in .NET Core.

Part 1 – The “Microsoft” Way
Part 2 – The “Topshelf” Way
Part 3 – The “.NET Core Worker” Way

In our previous piece on creating Windows Services in .NET Core, we talked about how to do things the “Microsoft” way. What we found was that while it was simple to get up and running, it was much harder to debug our service. In fact I would say borderline impossible.

And that’s where Topshelf comes in. Topshelf is a .NET Standard library that takes a tonne of the hassle out of creating Windows Services in both .NET Framework and .NET Core. But rather than recite the sales pitch to you, let’s jump right in!


Similar to our “Microsoft” method, there is no Windows Service or Topshelf “Visual Studio Template”. We instead just create a regular old .NET Core console application.

Then from our Package Manager Console, we run the following to install the Topshelf libraries.
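```shell
Install-Package Topshelf
```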

The Code

We essentially want to recreate our previous code sample to work with Topshelf. Here’s how we do that :
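A sketch along these lines (the class name and log messages are placeholders; Topshelf’s ServiceControl requires Start and Stop methods that each return a bool) :

```csharp
using System;
using Topshelf;

public class LoggingService : ServiceControl
{
    public bool Start(HostControl hostControl)
    {
        Console.WriteLine("Service started!");
        return true;
    }

    public bool Stop(HostControl hostControl)
    {
        Console.WriteLine("Service stopped!");
        return true;
    }
}
```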

All rather simple.

We implement the “ServiceControl” interface (which isn’t strictly required, but it provides a good base for us to work off). We have to implement its two methods to start and stop the service, and we just log inside those methods as we did before.

In our Main method of program.cs, it’s actually really easy. We can just use the HostFactory.Run method to kick off our service with minimal effort :
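Assuming our service class is called LoggingService (a placeholder name), it can be as little as :

```csharp
using Topshelf;

class Program
{
    static void Main(string[] args)
    {
        // Topshelf wires up the Windows Service plumbing for us.
        HostFactory.Run(x =>
        {
            x.Service<LoggingService>();
        });
    }
}
```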

Crazy simple. But that’s not the only thing HostFactory can do, for example I may want to say that when my service crashes, I want to restart the service after 10 seconds, give it a nicer name, and set it to start automatically.

I could go on and on but just take a look through the Topshelf documentation for some configuration options. Essentially anything you would normally have to painfully do through the windows command line, you can set in code :
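As a taste, a configuration covering the options mentioned above might look like this (note that Topshelf’s service recovery delay is specified in minutes) :

```csharp
HostFactory.Run(x =>
{
    x.Service<LoggingService>();

    // Restart the service if it crashes.
    x.EnableServiceRecovery(r => r.RestartService(1));

    // Friendlier names shown in the Windows Services panel.
    x.SetServiceName("WindowsServiceExample");
    x.SetDisplayName("Windows Service Example");
    x.SetDescription("An example Topshelf windows service.");

    // Start the service automatically on boot.
    x.StartAutomatically();
});
```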

Deploying Our Service

Same as before, we need to publish our app specifically for a Windows environment. From a command prompt, in your project directory, run the following :
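```shell
dotnet publish -r win-x64 -c Release
```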

Now we can check out the output directory at bin\Release\netcoreappX.X\win-x64\publish and we should find that we have a nice little exe waiting for us to be installed.

Previously we would use typical SC windows commands to install our service, but Topshelf utilizes its own command line parameters for installing as a service. Almost all configuration that you can do in code you can also do from the command line (like setting recovery options, service name/description etc). You can check out the full documentation here :

For us, we are just going to do a bog standard simple install. So in our output directory, I’m going to run the following from the command line :
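```shell
WindowsServiceExample.exe install
```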

Where WindowsServiceExample.exe is my project output. All going well my service should be installed! I often find that even when setting the service to startup automatically, it doesn’t always happen. We can actually start the service from the command line after installation by running :
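```shell
WindowsServiceExample.exe start
```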

In deployment scenarios I often find I have to install the service, wait 10 seconds, then attempt to start it using the above and we are away laughing.

Debugging Our Service

So when doing things the “Microsoft” way, we ran into issues around debugging. Mostly that we had to use command line flags, #if DEBUG directives, or config values to first work out whether we were even running inside a service or not, and then find hacky ways to try and emulate the service from a console app.

Well, that’s what Topshelf is for!

If we have our code open in Visual Studio and we just start debugging (e.g. Press F5), it actually emulates starting the service in a console window. We should be met with a message saying :

This does exactly what it says on the tin. It’s started our “service” and runs it in the background as if it was running as a Windows Service. We can set breakpoints as per normal and it will basically follow the same flow as if it was installed normally.

We can hit Ctrl+C and it will close our application, but not before running our “Stop” method of our service, allowing us to also debug our shutdown procedure if required. Compared to debug directives and config flags, this is a hell of a lot easier!

There is just one gotcha to look out for. If you get a message similar to the following :

This means that the service you are trying to debug in Visual Studio is actually installed and running as a Windows Service on the same PC. If you stop the Windows Service from running (You don’t have to uninstall, just stop it), then you can debug as normal.

What’s Next

A helpful reader pointed out in the previous article that .NET Core actually has a completely different way of running Windows Services. It essentially utilizes the “Hosted Services” model that has been introduced into ASP.NET Core and allows them to run as Windows Services which is pretty nifty!
