Note: this tutorial is about hosting an ASP.NET Core web app as a Windows Service, specifically in .NET Core 3.

If you are looking to host a web app as a service in .NET Core 2, check out this other tutorial : Hosting An ASP.NET Core Web Application As A Windows Service In .NET Core 2

If you are looking to run a Windows Service as a “worker” or for background tasks, then you’ll want this tutorial : Creating Windows Services In .NET Core – Part 3 – The “.NET Core Worker” Way


This is actually somewhat of a duplicate of a previous post I did here. But that was using .NET Core 2+, and since then, things have changed quite a bit. Well… Enough that when I tried to follow my own tutorial recently I was wondering what the hell I was going on about when nothing worked for me this time around.

Why A Web App As A Windows Service

So this tutorial is about running a Web App as a Windows Service. Why would that ever be the case? Why would you not have a web app running under something like IIS? Or why a Windows Service specifically?

Well the answer to why not under IIS is that in some cases you may not have IIS on the machine. Or you may have IIS but it’s not set up to host .NET Core apps anyway. In these cases you can do what’s called a self contained deploy (Which we’ll talk about soon), where the web app runs basically as an exe that you can double click and suddenly you have a fully fledged web server up and running – and portable too.

For the latter, why a windows service? Well if we follow the above logic and we have an exe that we can just click to run, then a windows service just gives us the ability to run on startup, run in the “background” etc. I mean, that’s basically all windows services are right? Just the OS running apps on startup and in the background.

Running Our Web App As A Service

The first thing we need to do is make our app compile down to an EXE. Well.. We don’t have to but it makes things a heck of a lot easier. To do that, we just need to edit our csproj and add the OutputType of exe. It might end up looking like so :

<PropertyGroup>
  <TargetFramework>netcoreapp3.0</TargetFramework>
  <OutputType>Exe</OutputType>
</PropertyGroup>

In previous versions of .NET Core you had to install the package Microsoft.AspNetCore.Hosting.WindowsServices, however as of right now with .NET Core 3+, you instead need to use Microsoft.Extensions.Hosting.WindowsServices. I tried searching around for when the change happened, and why, and maybe information about differences, but other than opening up the source code I couldn’t find much out there. For now, take my word on it. We need to install the following package into our Web App :

Install-Package Microsoft.Extensions.Hosting.WindowsServices

Now there is just a single line we need to edit. Inside program.cs, you should have a “CreateHostBuilder” method. You might already have some custom configuration going on, but you just need to tack “UseWindowsService()” onto the end :

return Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        }).UseWindowsService();

And that’s all the code changes required!

Deploying Our Service

… But we are obviously not done yet. We need to deploy our service, right?!

Open a command prompt as an Administrator, and run the following command in your project folder to publish your project :

dotnet publish -c Release

Next we can use standard Windows Service commands to install our EXE as a service. So move your command prompt to your output folder (Probably along the lines of C:\myproject\bin\Release\netcoreapp3.0\publish). And run something like so to install as a service (note that binPath needs the full path to the exe, not a relative one) :

sc create MyApplicationWindowsService binPath= "C:\myproject\bin\Release\netcoreapp3.0\publish\myapplication.exe"

Doing the full install time and time again is usually pretty annoying, so what I normally do is create an install.bat and uninstall.bat in the root of my project to run a set of commands to install/uninstall. A quick note when creating these files : create them in something like Notepad++ to ensure that the file type is UTF8 *without BOM*, otherwise you get all sorts of weird errors.

The contents of my install.bat file looks like :

sc create MyService binPath= %~dp0MyService.exe
sc failure MyService actions= restart/60000/restart/60000/""/60000 reset= 86400
sc start MyService
sc config MyService start= auto

Keep the weird %~dp0 token there as that expands to the directory the batch file lives in, so the service gets registered with a full path (Weird I know!).

And the uninstall.bat :

sc stop MyService
timeout /t 5 /nobreak > NUL
sc delete MyService

Ensure these files are set to copy if newer in Visual Studio, and now when you publish your project, you only need to run the .bat files from an administrator command prompt and you are good to go!
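
If you’re not sure where that setting lives, “copy if newer” translates to something like this in the csproj (assuming the .bat files sit in the project root) :

<ItemGroup>
  <None Update="install.bat" CopyToOutputDirectory="PreserveNewest" />
  <None Update="uninstall.bat" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>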

Doing A Self Contained Deploy

We talked earlier about how the entire reason for running the Web App as a Windows Service is so that we don’t have to install additional tools on the machine. But that only works if we are doing what’s called a “self contained” deploy. That means we deploy everything that the app requires to run right there in the publish folder, rather than having to install the .NET Core runtime on the target machine.

All we need to do is run our dotnet publish command with a few extra flags :

dotnet publish -c Release -r win-x64 --self-contained

This tells the .NET Core SDK that we want to release as self contained, and it’s for Windows.

Your output path will change from bin\Release\netcoreapp3.0\publish to bin\Release\netcoreapp3.0\win-x64\publish

You’ll also note the huge number of files in this new output directory and the size in general of the folder. But when you think about it, yeah, we are deploying the entire runtime so it should be this large.

Content Root

The fact that .NET Core is open source literally saves hours of debugging every single time I work on a greenfields project, and this time around is no different. I took a quick look at the actual source code of what the call to UseWindowsService does here. What I noticed is that it sets the content root specifically for when it’s running under a Windows Service. I wondered how this would work if I was reading a local file from disk inside my app, while running as a Windows Service. Normally I would just write something like :

File.ReadAllText("myfile.json");

But… Obviously there is something special when running under a Windows Service context. So I tried it out and my API bombed. I had to check the Event Viewer on my machine and I found :

Exception Info: System.IO.FileNotFoundException: Could not find file 'C:\WINDOWS\system32\myfile.json'.

OK. So it looks like when running as a Windows Service, the “root” of my app thinks it’s inside System32. Oof. But, again, looking at the source code from Microsoft gave me the solution. I can simply use the same way they set the content root to load my file from the correct location :

File.ReadAllText(Path.Combine(AppContext.BaseDirectory, "myfile.json"));

And we are back up and running!


Recently I’ve been working a lot in .NET Core 3.0 and 3.1 projects. Both upgrading existing 2.2 projects and a couple of new greenfields projects. The thing that I’ve had to do in each and every one is switch from using the new System.Text.Json package back to using Newtonsoft.Json.

In almost all of them I’ve actually tried to keep going with System.Text.Json, but in the existing projects I haven’t had time to switch out things like custom JsonConverters or Newtonsoft.Json specific attributes on my models.

In new projects, I always get to the point where I just know how to do it in Newtonsoft. And as much as I want to try this shiny new thing, I have my own deadlines which don’t quite allow me to fiddle about with new toys.

So if you’re in the same boat as me and just need to get something out the door, the first thing you need to do is install the following NuGet package :

Install-Package Microsoft.AspNetCore.Mvc.NewtonsoftJson

Then you need to add a specific call to your IMvcBuilder. This will differ depending on how you have set up your project. If you are migrating from an existing project you’ll have a call to “AddMvc()”, onto which you can tack the call like so :

services.AddMvc().AddNewtonsoftJson();

However in new .NET Core 3+ projects, you have a different set of calls that replace MVC. So you’ll probably have one of the following :

services.AddControllers().AddNewtonsoftJson();
services.AddControllersWithViews().AddNewtonsoftJson();
services.AddRazorPages().AddNewtonsoftJson();

If this is an API you will likely have AddControllers, but depending on your project setup you could have the others also. Tacking AddNewtonsoftJson() onto the end means it will “revert” back to using Newtonsoft over System.Text.Json.
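
AddNewtonsoftJson() also accepts an options callback if you need to tweak Json.NET’s serializer settings, for example to skip null values when serializing :

services.AddControllers().AddNewtonsoftJson(options =>
{
    options.SerializerSettings.NullValueHandling = NullValueHandling.Ignore;
});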


I’ve recently had the opportunity to start a Specflow/Selenium end to end testing project from scratch and my gosh it’s been fun. I’m one of those people that absolutely love unit tests and trying to “trick” the code with complicated scenarios. End to end testing with Selenium is like that but on steroids. Seeing the browser flash in front of you and motor through tests is an amazing feeling.

But in saying that, a key part of using Selenium is the “ChromeWebDriver”. It’s the tool that actually allows you to manipulate the Google Chrome browser through Selenium. And let me tell you, there are a few headaches getting this set up that I really didn’t expect. Version errors, not finding the right exe, NuGet packages that actually include the exe but it’s the wrong version or can’t be found. Ugh.

If you are not that big into automation testing, you can probably skip this whole post. But if you use Specflow/Selenium even semi-regularly, I highly recommend bookmarking this post because I’m 99% sure you will hit at least one of these bugs when setting up a new testing project.

Chrome, Gecko And IE Drivers

While the below is mostly about using ChromeDriver, some of this is also applicable for Gecko (Firefox), and IE drivers. Obviously the error messages will be slightly different, but it’s also highly likely you will run into very similar issues.

Adding ChromeDriver.exe To Your Project

The first thing to note is that you’ve probably added the “Selenium.WebDriver” and maybe “Specflow” nuget packages. These however *do not* contain the actual ChromeDriver executable. They only contain the C# code required to interact with the driver, but *not* the driver itself. It is incredibly confusing at first but kinda makes sense because you may want to only use Chrome or only Firefox or a combination etc. So it’s left up to you to actually add the required driver EXEs.

If you try and run your Selenium tests without it, it will actually compile all fine and look like it’s starting, only to bomb out with :

The chromedriver.exe file does not exist in the current directory or in a directory on the PATH environment variable.

Depending on your setup, it can also bomb out with :

The file C:\path\to\my\project\chromedriver.exe does not exist. 
The driver can be downloaded at http://chromedriver.storage.googleapis.com/index.html

So there are two ways to add ChromeDriver to your project. The first is that you can install a nuget package that will write it to your bin folder when building. The most common nuget package that does this is here : https://www.nuget.org/packages/Selenium.WebDriver.ChromeDriver/

But a quick note, as we will see below, this only works if everywhere you run the tests has the correct version of chrome that matches the driver. What?! You didn’t know that? That’s right. The version of ChromeDriver.exe will have a version like 79.0.1.1 that will typically only be able to run on machines that have chrome version 79 installed. The nuget package itself is typically marked with the version of Chrome you need, so it’s easy to figure out, but can still be a big pain in the butt to get going.

So with that in mind, the other option is to actually download the driver yourself from the chromium downloads page : https://chromedriver.chromium.org/downloads

You need to then drop the exe into your project. And make sure it’s set to copy if newer for your build. Then when building, it should show up in your bin folder. Personally, I found the manual download of the chromium driver to be handy when working in an enterprise environment where the version of chrome might be locked down by some group policy, or you are working with others who may have wildly different versions of chrome and you can do funky things like have different versions for different developers.
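
As an aside, “copy if newer” for the driver exe looks something like this in the csproj (assuming chromedriver.exe sits in the project root) :

<ItemGroup>
  <None Update="chromedriver.exe" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>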

Passing The ChromeDriver Location

So you’ve downloaded ChromeDriver and when you build, you can see it in your bin folder, but everything is still blowing up with the same error, what gives?!

One of the more irritating things I found is that in so many tutorials, they new’d up a chromedriver instance like so :

ChromeDriver = new ChromeDriver();

Now this may have worked in .NET Framework (I haven’t tried), but at least for me in .NET Core, this never works. I think there must be something inside the constructor of ChromeDriver that looks up where its current executable is running (e.g. where the Bin folder is), and in .NET Core this must be different from Full Framework.

In any case, you can change the constructor to instead take the folder location where it can find the driver. In my case I want that to be the bin folder :

ChromeDriver = new ChromeDriver(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location));

Obviously you can go ahead and change the path to anything which is super handy for differing dev setups. For example you could ask each dev to maintain their own version of chromedriver.exe somewhere on their C:\ drive, and then just pass that location into the constructor. Meaning that each developer can have a completely different version of chrome, and things will still run perfectly fine.

Versions Matter

We kinda touched on it above, but versions of ChromeDriver have to match the actual version of Chrome on the machine. If you are getting errors like so :

session not created: This version of ChromeDriver only supports Chrome version XX

Then you have a mismatch between versions.

The easiest way to rectify the issue is to manually download the correct version of ChromeDriver from here : https://chromedriver.chromium.org/downloads and force your code to use it. If you are using a nuget package for the driver, then it’s highly likely you would need to switch away from it to a manual setup to give you better control over versioning.

Azure Devops (And Others) Have ChromeDriver Environment Variables

This is one that I really wish I knew about sooner. When I tried to run my Selenium tests on Azure Devops, I was getting version issues where the version of Chrome on my hosted build agent was just slightly different from the one on my machine. I tried to do all sorts of crazy things by swapping out the exe version etc, but then I found buried in a help doc that there is actually an environment variable named ChromeWebDriver that has the full path to a chromedriver that is guaranteed to match that of the chrome browser on the agent. So I wrote some quick code that, if I was running inside Azure Devops, would grab that environment variable and pass that into my ChromeDriver constructor.
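
The code I ended up with looked something along these lines (the fallback to the bin folder is just my setup, adjust to taste) :

var driverPath = Environment.GetEnvironmentVariable("ChromeWebDriver");

if (string.IsNullOrEmpty(driverPath))
{
    //Not on a build agent, so fall back to the chromedriver.exe copied into our bin folder. 
    driverPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
}

ChromeDriver = new ChromeDriver(driverPath);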

Again, this is only for Azure Devops. But if you are using Gitlab, Bamboo, TeamCity, whatever! Check to see if there is an environment variable on hosted agents that carries the location of ChromeDriver.

If you are using your own build agents, then it’s also a good idea to think about following the same pattern. It’s super handy to have the Build Agent look after its own versions rather than wrangling something in code to fudge it all.


Opening Excel files in code has been a painful experience long before .NET Core came along. In many cases, you actually needed the Excel application installed on the target/users machine to be able to open excel files via code. If you’ve ever had to use those “OLE DB Jet” queries before, you know it’s not a great experience. Luckily there are some pretty good open source solutions now that don’t require excel on the target machine. This is good for Windows users so that you don’t have to install excel on a target users machine or web server, but also for people hosting .NET Core applications on Linux (And even Mac/ARM) – where Excel is obviously nowhere to be seen!

My methodology for this article is pretty simple. Create a standardized excel workbook with a couple of sheets, couple of formulas, and a couple of special formatting cases. Read the same data out in every single library and see which one works the best for me. Simple! Let’s get going!

Note On CSV Formats

I should note that if you are reading a CSV, or more so a single excel sheet that doesn’t have formulas or anything “excel” specific on the sheet, you should instead just parse it using standard CSV technique. We have a great article here on parsing CSV in .NET Core that you should instead follow. CSV parsers are great for taking tabular data and deserializing it into objects and should be used where they can.

Example Data

I figure the best way to compare the different libraries on offer is to create a simple spreadsheet to compare the different ways we can read data out. The spreadsheet will have two “sheets”, where the second sheet references the first.

Sheet 1 is named “First Sheet” and looks like so :

Notice that cell A2 is simply the number “1”. Then in column B2, we have a reference to cell A2. This is because we want to check if the libraries allow us to not only get the “formula” from the cell, but also what the computed value should be.

We are also styling cell A2 with a font color of red, and B2 has a full border (Although hard to see as I’m trying to show the formula). We will try and extract these styling elements out later.

Sheet 2 is named “Second Sheet” and looks like so :

So we are doing a simple “SUM” formula and referencing the first sheet. Again, this is so we can test getting both the formula and the computed value, but this time across different sheets. It’s not complicated for a person used to working with Excel, but let’s see how a few libraries handle it.

In general, in my tests I’m looking for my output to always follow the same format of :

Sheet 1 Data
Cell A2 Value   : 
Cell A2 Color   :
Cell B2 Formula :
Cell B2 Value   :
Cell B2 Border  :

Sheet 2 Data
Cell A2 Formula :
Cell A2 Value   :

That way when I show the code, you can pick the library that makes the most sense to you.

EPPlus

When I first started hunting around for parsing excel in .NET Core, I remembered using EPPlus many moons ago for some very lightweight excel parsing. The nuget package can be found here : https://www.nuget.org/packages/EPPlus/. It’s also open source so you can read through the source code if that’s your thing here : https://github.com/JanKallman/EPPlus

The code to read our excel spreadsheet looks like so :

static void Main(string[] args)
{
    using(var package = new ExcelPackage(new FileInfo("Book.xlsx")))
    {
        var firstSheet = package.Workbook.Worksheets["First Sheet"];
        Console.WriteLine("Sheet 1 Data");
        Console.WriteLine($"Cell A2 Value   : {firstSheet.Cells["A2"].Text}");
        Console.WriteLine($"Cell A2 Color   : {firstSheet.Cells["A2"].Style.Font.Color.LookupColor()}");
        Console.WriteLine($"Cell B2 Formula : {firstSheet.Cells["B2"].Formula}");
        Console.WriteLine($"Cell B2 Value   : {firstSheet.Cells["B2"].Text}");
        Console.WriteLine($"Cell B2 Border  : {firstSheet.Cells["B2"].Style.Border.Top.Style}");
        Console.WriteLine("");

        var secondSheet = package.Workbook.Worksheets["Second Sheet"];
        Console.WriteLine($"Sheet 2 Data");
        Console.WriteLine($"Cell A2 Formula : {secondSheet.Cells["A2"].Formula}");
        Console.WriteLine($"Cell A2 Value   : {secondSheet.Cells["A2"].Text}");
    }
}

Honestly what can I say. This was *super* easy and worked right out of the box. It picks up formulas vs text perfectly! The styles on our first sheet were also pretty easy to get going. The border is slightly annoying because you have to check the “Style” of the border, and if it’s a style of “None”, then it means there is no border (As opposed to a boolean for “HasBorder” or similar). But I think I’m just nit picking, EPPlus just works!

NPOI

NPOI is another open source option with a Github here : https://github.com/tonyqus/npoi and Nuget here : https://www.nuget.org/packages/NPOI/. It hasn’t had a release in over a year which isn’t that bad because it’s not like Excel itself has tonnes of updates throughout the year, but the Issues list on Github is growing a bit with a fair few bugs so keep that in mind.

The code to read our data using NPOI looks like so :

…..

…Actually you know what. I blew a bunch of time on this trying to work out the best way to use NPOI and the documentation is awful. The wiki is here : https://github.com/tonyqus/npoi/wiki/Getting-Started-with-NPOI and it has a few samples, but most/all of them are about creating excel workbooks, not reading them. I saw they had a link to a tutorial on how to read an Excel file which looked promising, but it was literally reading the spreadsheet and then dumping the text out.

After using EPPlus, I just didn’t see any reason to continue with this one. Almost every google answer will lead you to StackOverflow with people using NPOI with such specific use cases that it never really all pieced together for me.

ExcelDataReader

ExcelDataReader appeared in a couple of stackoverflow answers on reading excel in .NET Core. Similar to others in this list, it’s open source here : https://github.com/ExcelDataReader/ExcelDataReader and on Nuget here : https://www.nuget.org/packages/ExcelDataReader/

I wanted to make this work but…. It just doesn’t seem intuitive at all. ExcelDataReader works on the premise that you are reading “rows” and “columns” sequentially in almost a CSV fashion. That sort of works but if you are looking for a particular cell, it’s rough as hell.

Some example code :

static void Main(string[] args)
{
    System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);
    using (var stream = File.Open("Book.xlsx", FileMode.Open, FileAccess.Read))
    {
        using (var reader = ExcelReaderFactory.CreateReader(stream))
        {
            do
            {
                while (reader.Read()) //Each ROW
                {
                    for (int column = 0; column < reader.FieldCount; column++)
                    {
                        //Console.WriteLine(reader.GetString(column));//Will blow up if the value is decimal etc. 
                        Console.WriteLine(reader.GetValue(column));//Get Value returns object
                    }
                }
            } while (reader.NextResult()); //Move to NEXT SHEET

        }
    }
}

The first line in particular is really annoying (Everything blows up without it). But you’ll notice that we are moving through row by row (And sheet by sheet) trying to get values. On top of that, calling things like “GetString” doesn’t work if the value is a decimal (Implicit casts would have been better IMO). I also couldn’t find any way to get the actual formula of the cell. The above only returns the computed results.

I was going to slog my way through and actually get the result we were looking for, but it’s just not a library I would use.

Syncfusion

Syncfusion is one of those annoying companies that create pay-to-use libraries, upload them to nuget, and then in small print say you need to purchase a license or else. Personally, I would like to see Microsoft not allow paid libraries into the public Nuget repo. I’m going to include them here but their licensing starts at $995 per year, per developer, so I don’t see much reason to use it for the majority of use cases. The nuget page can be found here https://www.nuget.org/packages/Syncfusion.XlsIO.Net.Core/

The code looks like :

static void Main(string[] args)
{
    ExcelEngine excelEngine = new ExcelEngine();
    using (var stream = File.Open("Book.xlsx", FileMode.Open, FileAccess.Read))
    {
        var workbook = excelEngine.Excel.Workbooks.Open(stream);

        var firstSheet = workbook.Worksheets["First Sheet"];
        Console.WriteLine("Sheet 1 Data");
        Console.WriteLine($"Cell A2 Value   : {firstSheet.Range["A2"].DisplayText}");
        Console.WriteLine($"Cell A2 Color   : {firstSheet.Range["A2"].CellStyle.Font.RGBColor.Name}");
        Console.WriteLine($"Cell B2 Formula : {firstSheet.Range["B2"].Formula}");
        Console.WriteLine($"Cell B2 Value   : {firstSheet.Range["B2"].DisplayText}");
        Console.WriteLine($"Cell B2 Border  : {firstSheet.Range["B2"].CellStyle.Borders.Value}");
        Console.WriteLine("");

        var secondSheet = workbook.Worksheets["Second Sheet"];
        Console.WriteLine($"Sheet 2 Data");
        Console.WriteLine($"Cell A2 Formula : {secondSheet.Range["A2"].Formula}");
        Console.WriteLine($"Cell A2 Value   : {secondSheet.Range["A2"].DisplayText}");
    }
}

So not bad. I have to admit, I fiddled around trying to understand how it worked out borders (As the above code doesn’t work), but gave up. The font color also took some fiddling where the library returns non standard objects as the color. Some of the properties for the actual data are also a bit confusing, where you have Value, Text, DisplayText etc. all returning slightly different things, so you sort of have to just spray and pray and see which one works.

If EPPlus didn’t exist, and Syncfusion wasn’t fantastically overpriced, this library would actually be pretty good.

TL;DR;

Use EPPlus. https://github.com/JanKallman/EPPlus


This article is part of a series on creating Windows Services in .NET Core.

Part 1 – The “Microsoft” Way
Part 2 – The “Topshelf” Way
Part 3 – The “.NET Core Worker” Way


This article has been a long time coming I know. I first wrote about creating Windows Services in .NET Core all the way back in September (3 months ago). And on that post a helpful reader (Shout out to Saeid!) immediately commented that in .NET Core 3, there is a brand new way to create Windows Services! Doh! It reminds me of the time I did my 5 part series on Azure Webjobs in .NET Core, and right as I was completing the final article, a new version of the SDK got released with a tonne of breaking changes making me have to rewrite a bunch.

Thankfully, this isn’t necessarily a “breaking change” to how you create Windows Services, the previous articles on doing it the “Microsoft Way” and the “Topshelf Way” are still valid, but this is just another way to get the same result (Maybe with a little less cursing to the programming gods).

The Setup

The first thing you need to know is that you need .NET Core 3.0 installed. At the time of writing, .NET Core 3.1 has just shipped and Visual Studio should be prompting you to update anyway. But if you are trying to do this in a .NET Core 2.X project, it’s not going to work.

If you like creating projects from the command line, you need to create a new project as the type “worker” :

dotnet new worker

If you are a Visual Studio person like me, then there is actually a template inside Visual Studio that does the exact same thing.

Doing this creates a project with essentially two files. You will have your program.cs which is basically the “bootstrapper” for your app. And then you have something called worker.cs which is where the logic for your service goes.

It should be fairly easy to spot, but to add extra background services to this program to run in parallel, you just need to create a new class that inherits from BackgroundService :

public class MyNewBackgroundWorker : BackgroundService
{
    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        //Do something, then return a completed task (ExecuteAsync must return a Task for this to compile). 
        return Task.CompletedTask;
    }
}

Then in our program.cs file, we just add the worker to our service collection :

.ConfigureServices((hostContext, services) =>
{
    services.AddHostedService<Worker>();
    services.AddHostedService<MyNewBackgroundWorker>();
});

AddHostedService has actually been in the framework for quite some time as a “background service” type task runner that typically runs underneath your web application. We’ve actually done an article on hosted services in ASP.NET Core before, but in this case, the hosted service is basically the entire app rather than it being something that runs behind the scenes of your web app.

Running/Debugging Our Application

Out of the box, the worker template has a background service that just pumps out the datetime to the console window. Let’s just press F5 on a brand new app and see what we get.

info: CoreWorkerService.Worker[0]
      Worker running at: 12/07/2019 08:20:30 +13:00
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.

We are up and running immediately! We can leave our console window open to debug the application, or close the window to exit. Compared to the hell we went through trying to debug our Windows Service when creating it the “Microsoft” way, this is like heaven.

Another thing to note is that really what we have in front of us is a platform for writing console applications. In the end we are only writing out the time to the console window, but we are also doing that via Dependency Injection creating a hosted worker. We can use this DI container to also inject in repositories, set environments, read configuration etc.
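
For example, a worker can take dependencies straight through its constructor just like a controller would. A quick sketch (IMyRepository and its SyncAsync method are hypothetical stand-ins for whatever you’ve registered) :

public class SyncWorker : BackgroundService
{
    //A hypothetical service registered in ConfigureServices. 
    private readonly IMyRepository _myRepository;

    public SyncWorker(IMyRepository myRepository)
    {
        _myRepository = myRepository;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            //Use the injected dependency to do some work, then wait a minute before going again. 
            await _myRepository.SyncAsync(stoppingToken);
            await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
        }
    }
}

Register it with services.AddHostedService<SyncWorker>() alongside the default Worker and both will run in parallel.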

The one thing it’s not yet is a windows service…

Turning Our App Into A Windows Service

We need to add the following package to our app :

Install-Package Microsoft.Extensions.Hosting.WindowsServices

Next, head to our program.cs file and modify it by adding a call to “UseWindowsService()”.

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) =>
    {
        services.AddHostedService<Worker>();
    }).UseWindowsService();

And that’s it!

Running our application normally is just the same and everything functions as it was. The big difference is that we can now install everything as a service.

To do that, first we need to publish our application. In the project directory we run :

dotnet publish -r win-x64 -c Release

Note in my case, I’m publishing for Windows X64 which generally is going to be the case when deploying a Windows service.

Then all we need to do is run the standard Windows Service installer. This isn’t .NET Core specific but is instead part of Windows :

sc create TestService binPath= C:\full\path\to\publish\dir\WindowsServiceExample.exe

As always, the other commands available to you (including starting your service) are :

sc start TestService
sc stop TestService
sc delete TestService

And checking our services panel :

It worked!

Installing On Linux

To be honest, I don’t have a hell of a lot of experience with Linux. But the general gist is…

Instead of installing Microsoft.Extensions.Hosting.WindowsServices, you need to install Microsoft.Extensions.Hosting.Systemd. And then instead of calling UseWindowsService(), you’ll instead call UseSystemd().
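
So the Linux flavour of our program.cs would look something like :

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) =>
    {
        services.AddHostedService<Worker>();
    }).UseSystemd();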

Obviously your dotnet publish and installation commands will vary, but more or less you can create a “Windows Service” that will also run on Linux!

Microsoft vs Topshelf vs .NET Core Workers

So we’ve now gone over 3 different ways to create Windows Services. You’re probably sitting there going “Well… Which one should I choose?”. Immediately, let’s bin the first Microsoft old school way of doing things. It’s hellacious to debug and really doesn’t have anything going for it.

That leaves us with Topshelf and .NET Core workers. In my opinion, I like .NET Core Workers for fitting effortlessly into the .NET Core ecosystem. If you’re already developing in ASP.NET Core, then everything just makes sense creating a worker. On top of that, when you create a BackgroundService, you can actually lift and shift that to run inside an ASP.NET Core website at any point which is super handy. The one downside is the installation. Having to use SC commands can be incredibly frustrating at times and Topshelf definitely has it beat there.

Topshelf in general is very user friendly and has the best installation process for Windows Services. But it’s also another library to add to your list and another “framework” to learn, which counts against it.

Topshelf or .NET Core Workers, take your pick really.


These days, people who are “fullstack” developers are considered to be unicorns. That is, it’s seemingly rare for new grad developers to be fullstack, and instead they are picking a development path and sticking with it. It wasn’t really like that 10-15 years ago. When I first started developing commercially I obviously got to learning ASP.NET (Around .NET 2-ish), but you better believe I had to learn HTML/CSS and the javascript “framework” of the day – jQuery. There was no “fullstack” or “front end” developers, you were just a “developer”.

Lately my javascript framework of choice has been Angular. I started with AngularJS 1.6 and took a bit of a break, but in the last couple of years I’ve been working with Angular all the way up to Angular 8. Even though Angular’s documentation is actually pretty good, there’s definitely been a few times where I feel I’ve cracked a chestnut of a problem and thought “I should really start an Angular blog to share this”. After all, that’s exactly how this blog started. In the dark days of .NET Core (before the first release even!), I was blogging here trying to help people running into the same issues as me.

And so, I’ve started Tutorials For Angular. I’m not going to profess to be a pro in Angular (Or even javascript), but I’ll be sharing various tips and tricks that I’ve run into in my front end development journey. Content is a bit light at the moment but I have a few long form articles in the pipeline on some really tricky stuff that stumped me when I got back into the Angular groove, so if that sounds like you, come on over and join in!


Rate limiting web services and APIs is nothing new, but I would say over the past couple of years the “leaky bucket” strategy of rate limiting has risen in popularity to the point where it’s almost the de facto standard these days. The leaky bucket strategy is actually quite simple – made even easier by the fact that its name comes from a very physical and real world scenario.

What Is A Leaky Bucket Rate Limit?

Imagine a 4 litre physical bucket that has a hole in it. The hole leaks out 1 litre of water every minute. Now you can dump 4 litres of water in the bucket all at once just fine, but it will still flow out at 1 litre per minute until it’s empty. If you did fill it immediately, after 1 minute you could make 1 more request, or you could wait 2 minutes and make 2 at once etc. The general idea behind using a leaky bucket scenario over something like “1 call per second” is that it allows you to “burst” through calls until the bucket is full, and then wait a period of time until the bucket drains. However even while the bucket is draining you can trickle in calls if required.

I generally see it applied on APIs where completing a full operation may take 2 or 3 calls, so limiting to 1 per second is pointless. But allowing a caller to burst through enough calls to complete their operation, then back off, is maybe a bit more realistic.

Implementing The Client

This article is going to talk about a leaky bucket “client”. So I’m the one calling a web service that has a leaky bucket rate limitation in place. In a subsequent article we will look at how we implement this on the server end.

Now I’m the first to admit, this is unlikely to win any awards and I typically shoot myself in the foot with thread-safe coding, but here’s my attempt at a leaky bucket client. For my purposes it worked a treat, but it was just for a small personal project, so your mileage may vary.

class LeakyBucket
{
    private readonly BucketConfiguration _bucketConfiguration;
    private readonly ConcurrentQueue<DateTime> currentItems;
    private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

    private Task leakTask;

    public LeakyBucket(BucketConfiguration bucketConfiguration)
    {
        _bucketConfiguration = bucketConfiguration;
        currentItems = new ConcurrentQueue<DateTime>();
    }

    public async Task GainAccess(TimeSpan? maxWait = null)
    {
        //Only allow one thread at a time in. If we time out waiting, bail out rather than pretending we acquired the semaphore. 
        if (!await semaphore.WaitAsync(maxWait ?? TimeSpan.FromHours(1)))
        {
            throw new TimeoutException("Timed out waiting to enter the bucket.");
        }
        try
        {
            //If this is the first time, kick off our long running thread to monitor the bucket. 
            if(leakTask == null)
            {
                leakTask = Task.Factory.StartNew(Leak, TaskCreationOptions.LongRunning);
            }

            while (true)
            {
                if (currentItems.Count >= _bucketConfiguration.MaxFill)
                {
                    await Task.Delay(1000);
                    continue;
                }

                currentItems.Enqueue(DateTime.UtcNow);
                return;
            }
        }finally
        {
            semaphore.Release();
        }
                
    }

    //Infinite loop to keep leaking. 
    private void Leak()
    {
        //Wait for our first queue item. 
        while(currentItems.Count == 0)
        {
            Thread.Sleep(1000);
        }

        while(true)
        {
            Thread.Sleep(_bucketConfiguration.LeakRateTimeSpan);
            for(int i=0; i < currentItems.Count && i < _bucketConfiguration.LeakRate; i++)
            {
                DateTime dequeueItem;
                currentItems.TryDequeue(out dequeueItem);
            }
        }
    }
}

class BucketConfiguration
{
    public int MaxFill { get; set; }
    public TimeSpan LeakRateTimeSpan { get; set; }
    public int LeakRate { get; set; }
}

There is a bit to unpack there but I’ll do my best.

  • We have a class called BucketConfiguration which specifies how full the bucket can get, and how much it leaks.
  • Our main method is called “GainAccess” and this will be called each time we want to send a request.
  • We use a SemaphoreSlim just incase this is used in a threaded scenario so that we queue up calls and not get tangled up in our own mess
  • On the first call to gain access, we kick off a thread that is used to “empty” the bucket as we go.
  • Then we enter a loop. If the number of items on the queue has hit max fill, we just wait a second and try again.
  • When there is room on the queue, we pop our time on and return, thus gaining access.

Now I’ve used a queue here but you really don’t need to, it’s just helpful for debugging which calls we are leaking etc. Really a BlockingCollection or something similar is just as fine. Notice that we also kick off a thread to do our leaking. Because it’s done at a constant rate, we need a dedicated thread to be “dripping” out requests.

And finally, everything is async including our semaphore (If you are wondering why I didn’t just use the *lock* keyword, it can’t be used with async code). This means that we hopefully don’t jam up threads waiting to send requests. It’s not foolproof of course, but it’s better than hogging threads when we are essentially spin-waiting.

The Client In Action

I wrote a quick console application to show things in action. So for example :

static async Task Main(string[] args)
{
    LeakyBucket leakyBucket = new LeakyBucket(new BucketConfiguration
    {
        LeakRate = 1, 
        LeakRateTimeSpan = TimeSpan.FromSeconds(5), 
        MaxFill = 4
    });

    while (true)
    {
        await leakyBucket.GainAccess();
        Console.WriteLine("Hello World! " + DateTime.Now);
    }
}

Running this we get :

Hello World! 24/11/2019 5:08:26 PM
Hello World! 24/11/2019 5:08:26 PM
Hello World! 24/11/2019 5:08:26 PM
Hello World! 24/11/2019 5:08:26 PM
Hello World! 24/11/2019 5:08:31 PM
Hello World! 24/11/2019 5:08:36 PM
Hello World! 24/11/2019 5:08:41 PM
Hello World! 24/11/2019 5:08:46 PM

Makes sense. We do our run of 4 calls immediately, but then we have to back off to doing just 1 call every 5 seconds.

That’s pretty simple, but we can also handle complex scenarios such as leaking 2 requests instead of 1 every 5 seconds. We change our leaky bucket to :

LeakyBucket leakyBucket = new LeakyBucket(new BucketConfiguration
{
    LeakRate = 2, 
    LeakRateTimeSpan = TimeSpan.FromSeconds(5), 
    MaxFill = 4
});

And what do you know! We see our burst of 4 calls, then every 5 seconds we see us drop in another 2 at once.

Hello World! 24/11/2019 5:10:09 PM
Hello World! 24/11/2019 5:10:09 PM
Hello World! 24/11/2019 5:10:09 PM
Hello World! 24/11/2019 5:10:09 PM
Hello World! 24/11/2019 5:10:14 PM
Hello World! 24/11/2019 5:10:14 PM
Hello World! 24/11/2019 5:10:19 PM
Hello World! 24/11/2019 5:10:19 PM
Hello World! 24/11/2019 5:10:24 PM
Hello World! 24/11/2019 5:10:24 PM


This article is part of a series on the SOLID design principles. You can start here or jump around using the links below!

S – Single Responsibility
O – Open/Closed Principle
L – Liskov Substitution Principle
I – Interface Segregation Principle
D – Dependency Inversion


OK, let’s just get the Wikipedia definitions out of the way to begin with :

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces).
  2. Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.

While this may sound complicated, it’s actually much easier when you are working in the code. In fact it’s probably something you already do (especially in .NET Core) without even thinking about it.

In simple terms, imagine we have a service that calls into a “repository”. Generally speaking, that repository would have an interface and (especially in .NET Core) we would use dependency injection to inject that repository into the service. The service is still working off the interface (an abstraction), instead of the concrete implementation. On top of this, the service itself doesn’t know anything about how the repository actually gets the data. The repository could be calling a SQL Server, Azure Table Storage, a file on disk etc – but to the service, it really doesn’t matter. It just knows that it can depend on the abstraction to get the data it needs and not worry about any dependencies the repository itself may have.

Dependency Inversion vs Dependency Injection

Very commonly, people mix up dependency injection and dependency inversion. I know many years ago I was asked in an interview to list out SOLID and I said Dependency Injection as the “D”.

Broadly speaking, Dependency Injection is a way to achieve Dependency Inversion. Like a tool to achieve the principle. While Dependency Inversion is simply stating that you should depend on abstractions, and that higher level modules should not worry about dependencies of the lower level modules, Dependency Injection is a way to achieve that by being able to inject dependencies.

Another way to think about it would be that you could achieve Dependency Inversion by simply making liberal use of the Factory Pattern to create lower level modules (Thus abstracting away any dependencies they have), and have them return interfaces (Thus the higher level module depends on abstractions and doesn’t even know what the concrete class behind the interface is).

Even using a Service Locator pattern – which arguably some might say is dependency injection, could be classed as dependency inversion because you aren’t worrying about how the lower level modules are created, you just call this service locator thingie and magically you get a nice abstraction.

Dependency Inversion In Practice

Let’s take a look at an example that doesn’t really exhibit good Dependency Inversion behaviour.

class PersonRepository
{
    private readonly string _connectionString;
    private readonly int _connectionTimeout;

    public PersonRepository(string connectionString, int connectionTimeout)
    {
        _connectionString = connectionString;
        _connectionTimeout = connectionTimeout;
    }

    public void ConnectToDatabase()
    {
        //Go away and make a connection to the database. 
    }

    public void GetAllPeople()
    {
        //Use the database connection and then return people. 
    }
}
class MyService
{
    private readonly PersonRepository _personRepository;

    public MyService()
    {
        _personRepository = new PersonRepository("myConnectionString", 123);
    }
}

The problem with this code is that MyService very heavily relies on concrete implementation details of the PersonRepository. For example it’s required to pass it a connection string (So we are leaking out that it’s a SQL Server), and a connection timeout. The PersonRepository itself allows a “ConnectToDatabase” method which by itself is not terribly bad, but if it’s required to call “ConnectToDatabase” before we can actually call the “GetAllPeople” method, then again we haven’t managed to abstract away the implementation detail of this specifically being a SQL Server repository.

So let’s do some simple things to clean it up :

interface IPersonRepository
{
    void GetAllPeople();
}

class PersonRepository : IPersonRepository
{
    private readonly string _connectionString;
    private readonly int _connectionTimeout;

    public PersonRepository(string connectionString, int connectionTimeout)
    {
        _connectionString = connectionString;
        _connectionTimeout = connectionTimeout;
    }

    private void connectToDatabase()
    {
        //Go away and make a connection to the database. 
    }

    public void GetAllPeople()
    {
        connectToDatabase();
        //Use the database connection and then return people. 
    }
}
class MyService
{
    private readonly IPersonRepository _personRepository;

    public MyService(IPersonRepository personRepository)
    {
        _personRepository = personRepository;
    }
}

Very simple. I’ve created an interface called IPersonRepository that shields away implementation details. I’ve taken that repository and (not shown) used dependency injection to inject it into my service. This way my service doesn’t need to worry about connection strings or other constructor requirements. I also removed the “ConnectToDatabase” method from being public, the reason being my service shouldn’t worry about pre-requisites to get data. All it needs to know is that it calls “GetAllPeople” and it gets people.
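
For completeness, the registration I’m glossing over is a one-liner in ConfigureServices, something along these lines (with the connection details pulled from configuration in real life) :

services.AddScoped<IPersonRepository>(provider => new PersonRepository("myConnectionString", 123));

Now only the registration knows about connection strings, and MyService just sees the abstraction.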

Switching To A Factory Pattern

While writing this post, I realized that saying “Yeah so I use this magic thing called Dependency Injection” and it works isn’t that helpful. So let’s quickly write up a factory instead.

class PersonRepositoryFactory
{
    private string connectionString = "";//Get from app settings or similar. 
    private int connectionTimeout = 123;

    public IPersonRepository Create()
    {
        return new PersonRepository(connectionString, connectionTimeout);
    }
}

class MyService
{
    private readonly IPersonRepository _personRepository;

    public MyService()
    {
         _personRepository = new PersonRepositoryFactory().Create();
    }
}

Obviously not as nice as using Dependency Injection, and there are a few different ways to cut this, but the main points of dependency inversion are still there. Notice that still, our service does not have to worry about implementation details like connection strings and it still depends on the interface abstraction.

What’s Next

That’s it! You’ve made it to the end of the series on SOLID principles in C#/.NET. Now go out and nail that job interview!


This article is part of a series on the SOLID design principles. You can start here or jump around using the links below!

S – Single Responsibility
O – Open/Closed Principle
L – Liskov Substitution Principle
I – Interface Segregation Principle
D – Dependency Inversion


The interface segregation principle can be a bit subjective at times, but the most common definition you will find out there is :

 No client should be forced to depend on methods it does not use

In simple terms, if you implement an interface in C# and have to throw NotImplementedExceptions you are probably doing something wrong.

While the above is the “definition”, what you will generally see this principle described as is “create multiple smaller interfaces instead of one large interface”. This is still correct, but that’s more of a means to achieve Interface Segregation rather than the principle itself.

Interface Segregation Principle In Practice

Let’s look at an example. Imagine that I have an interface called IRepository, and from this we are reading and writing to a database. For some data however, we are reading from a local XML file. This file is uneditable and so we can’t write to it (essentially readonly).

We still want to share an interface however because at some point we will move the data in the XML file to a database table, so it would be great if at that point we could just swap the XML repository for a database repository.

Our code is going to look a bit like this :

interface IRepository
{
    void WriteData(object data);
    object ReadData();
}

public class DatabaseRepository : IRepository
{
    public object ReadData()
    {
        //Go to the database and read data
        return new object();
    }

    public void WriteData(object data)
    {
        //Go to the database and write data
    }
}

public class XmlFileRepository : IRepository
{
    public object ReadData()
    {
        //Go to the XML file and read data
        return new object();
    }

    public void WriteData(object data)
    {
        //Don't allow a user to write data to the XML Repository.
        throw new NotImplementedException();
    }
}

Makes sense. But as we can see for our XmlFileRepository, because we don’t allow writing, we have to throw an exception (An alternative would be to just have an empty method, which is probably even worse). Clearly we are violating the Interface Segregation Principle because we are implementing an interface whose methods we don’t actually implement. How could we implement this more efficiently?

interface IReadOnlyRepository
{
    object ReadData();
}

interface IRepository : IReadOnlyRepository
{
    void WriteData(object data);
}

public class DatabaseRepository : IRepository
{
    public object ReadData()
    {
        //Go to the database and read data
        return new object();
    }

    public void WriteData(object data)
    {
        //Go to the database and write data
    }
}

public class XmlFileRepository : IReadOnlyRepository
{
    public object ReadData()
    {
        //Go to the database and read data
        return new object();
    }
}

In this example we have moved the reading portion of a repository to its own interface, and let the IRepository interface inherit from it. This means that any IRepository can also be cast and made into an IReadOnlyRepository. So if we swap out the XmlFileRepository for a DatabaseRepository, everything will run as per normal.

We can even inject in the DatabaseRepository into classes that are looking for IReadOnlyRepository. Very nice!
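
In .NET Core’s container, that can be wired up so both interfaces resolve to the same instance, something like :

services.AddScoped<DatabaseRepository>();
services.AddScoped<IRepository>(provider => provider.GetRequiredService<DatabaseRepository>());
services.AddScoped<IReadOnlyRepository>(provider => provider.GetRequiredService<DatabaseRepository>());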

Avoiding Inheritance Traps

Some people prefer to not use interface inheritance as this can get messy fast, and instead have your concrete class implement multiple smaller interfaces :

interface IReadRepository
{
    object ReadData();
}

interface IWriteRepository
{
    void WriteData(object data);
}

public class DatabaseRepository : IReadRepository, IWriteRepository
{
    public object ReadData()
    {
        //Go to the database and read data
        return new object();
    }

    public void WriteData(object data)
    {
        //Go to the database and write data
    }
}

This means you don’t get tangled up with a long line of inheritance, but it does have issues when you have a caller that wants to both read *and* write :

public class MyService
{
    //What if this service also writes? I have to inject the write repository as well?
    public MyService(IReadRepository readRepository, IWriteRepository writeRepository) 
    {

    }
}
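
One way to square that circle, without a long inheritance chain, is a small composite interface for the callers that genuinely need both :

interface IReadWriteRepository : IReadRepository, IWriteRepository
{
}

public class MyService
{
    private readonly IReadWriteRepository _repository;

    //One dependency for callers that need both halves. DatabaseRepository would add IReadWriteRepository to its interface list to satisfy this. 
    public MyService(IReadWriteRepository repository)
    {
        _repository = repository;
    }
}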

Ultimately it’s up to you, but it’s important to remember that these are all just means to an end. As long as a class is not forced to implement a method it doesn’t care about, then you are abiding by the Interface Segregation Principle.

What’s Next

Next is our final article in this series, the D in SOLID. Dependency Inversion.


This article is part of a series on the SOLID design principles. You can start here or jump around using the links below!

S – Single Responsibility
O – Open/Closed Principle
L – Liskov Substitution Principle
I – Interface Segregation Principle
D – Dependency Inversion


What Is The Liskov Principle?

The Liskov Principle has a simple definition, but a hard explanation. First, the definition :

Types can be replaced by their subtypes without altering the desirable properties of the program.

So basically if I have something like this :

class MyType
{
    //Some code here
}

class MySubType : MyType
{
    //Some code here
}

class MyService
{
    private readonly MyType _myType;

    public MyService(MyType myType)
    {
        _myType = myType;
    }
}

If in the future I decide that MyService should depend on MySubType instead of MyType, theoretically I shouldn’t alter “the desirable properties of the program”. I put that in quotes because what does that actually mean? A large part of inheritance is extending functionality and therefore by definition it will alter the behaviour of the program in some way.

That last part might be controversial because I’ve seen plenty of examples of Liskov that state that overriding a method in a subclass (which is pretty common) is a violation of the principle, but I feel like in reality, that’s just going to happen and isn’t a design flaw. For example :

class MyType
{
    public virtual void DoSomething()
    {

    }
}

class MySubType : MyType
{
    public override void DoSomething()
    {
        //Do some extended functionality. 

        //Maybe I leave this here. But should it go at the start or at the end?
        base.DoSomething();
    }
}

Is this a violation? It could be, because substituting the subtype may not “break” functionality, but it certainly changes how the program functions and could lead to unintended consequences. Some would argue that Liskov is all about behavioural traits, which we will look at later, but for now let’s look at some “hard” breaks, e.g. things that are black and white wrong.

Liskov On Method Signatures

The thing is, it’s actually very hard to violate Liskov in C#, and that’s why you might find that examples out there are very contrived. Indeed, for the black and white rules that we will talk about below, you’ll notice that they are things that you really need to go out of your way (e.g. ignore compiler warnings) to break. (And don’t worry, I’ll show you how!).

Our black and white rules, the ones that no one will argue with you about on Stackoverflow, are…

A method of a subclass can accept a parent type as a parameter (Contravariance)

The “idea” of this is that a subclass can override a method and accept a more general parameter, but not the opposite. So for example in theory Liskov allows for something like this :

class MyParamType
{

}
class MyType
{
    public virtual void DoSomething(MyParamType something)
    {

    }
}

class MySubType : MyType
{
    //This will not compile. CS0115: no suitable method found to override.
    public override void DoSomething(object something)
    {
    }
}

But actually, if you try this it will blow up, because in C# an overriding method’s signature must exactly match the method it overrides. In fact, if we remove the override keyword it compiles fine, but it just acts as a method overload, and both methods become available to someone calling the MySubType class.
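A minimal sketch of that, assuming we’ve removed the override keyword from MySubType.DoSomething(object) :

var sub = new MySubType();
sub.DoSomething(new object());                //Binds to the new MySubType overload
sub.DoSomething(new MyParamType());           //Also binds to it; C# prefers overloads declared on the more derived type
((MyType)sub).DoSomething(new MyParamType()); //Forces the inherited MyType method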

A method of a subclass can return a more derived type (Covariance)

So almost the opposite of the above, when we return a type from a method in a subclass, Liskov allows it to be “more” specific than the original return type. So again, theoretically this should be OK :

class MyReturnType
{

}
class MyType
{
    public object DoSomething()
    {
        return new object();
    }
}

class MySubType : MyType
{
    public MyReturnType DoSomething()
    {
        return new MyReturnType();
    }
}

This actually compiles but we do get a warning :

'MySubType.DoSomething()' hides inherited member 'MyType.DoSomething()'. Use the new keyword if hiding was intended.

Yikes. We can get rid of this warning by following its instruction and changing our SubType to the following :

class MySubType : MyType
{
    public new MyReturnType DoSomething()
    {
        return new MyReturnType();
    }
}

Pretty nasty. I can honestly say that in over a decade of using C#, I have never used the new keyword to hide a method like this. Never. And I feel like if you are having to do this, you should stop and think about whether there is another way to do it.

But the obvious issue now is that which method actually runs depends entirely on the compile-time type of the reference. Call DoSomething through a MyType reference and you still get the hidden base method, and an object back; call it through a MySubType reference and you get a MyReturnType. Code written against one of those types will often stop compiling (or quietly bind to the wrong method) the moment you swap in the other, which is why this one makes the list of hard rules.
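For example, a minimal sketch using the types above :

MyType viaBase = new MySubType();
object fromBase = viaBase.DoSomething();     //The hidden base method runs; 'new' only applies to subtype-typed references

MySubType viaSub = new MySubType();
MyReturnType fromSub = viaSub.DoSomething(); //Only here do we get the new return type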

Liskov On Method Behaviours

Let’s jump into the murky waters of how changing the behaviour of a method might violate Liskov. As mentioned earlier, I find this a hard sell, because the whole intention of overriding a method in C# (when the original is marked as virtual) is that you want to change its behaviour. That isn’t a flaw in the language’s design; it’s simply the trade-off that comes with inheritance.

Our behavioural rules are :

Exceptions that would not normally be thrown by the parent type can’t then be thrown by the sub type

To me this depends on your coding style but does make sense in some ways. Consider the following code :

class MyType
{
    public virtual void DoSomething()
    {
        throw new ArgumentException();
    }
}

class MySubType : MyType
{
    public override void DoSomething()
    {
        throw new NullReferenceException();
    }
}

If you have been using the original type for some time, and you are following the “Gotta catch ’em all” strategy of exceptions and trying to catch each and every exception, then you may be intending to catch the ArgumentException. But now when you switch to the subtype, it’s throwing a NullReferenceException which you weren’t expecting.
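A sketch of that unlucky caller (GetMyType here is a hypothetical factory, purely for illustration) :

MyType myType = GetMyType(); //Hypothetical factory; assume it now returns a MySubType
try
{
    myType.DoSomething();
}
catch (ArgumentException)
{
    //We prepared for what the parent threw, but the subtype's
    //NullReferenceException sails straight past this handler.
}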

Makes sense, but the reality is that if you are overriding a method to change the behaviour in some way it’s an almost certainty that new exceptions will occur. I’m not particularly big on this rule, but I can see why it exists.

Pre-Conditions cannot be strengthened and Post-Conditions cannot be weakened in the sub type

Let’s look at pre-conditions first. The idea behind this is that if you were previously able to call a method with a particular input parameter, new “rules” should not be in place that now rejects those parameters. A trivial example might be :

class MyType
{
    public virtual void DoSomething(object input)
    {
    }
}

class MySubType : MyType
{
    public override void DoSomething(object input)
    {
        //A new pre-condition the parent never had.
        if (input == null)
            throw new ArgumentNullException(nameof(input));
    }
}

Where previously there was no restriction on null values, there now is.
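Code written against the parent breaks the moment the subtype is swapped in :

MyType myType = new MySubType();
myType.DoSomething(null); //Throws : this exact call was perfectly legal against MyType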

Conversely, we shouldn’t weaken the guarantees around what we return (our post-conditions). For example :

class MyType
{
    public virtual object DoSomething()
    {
        object output = null;
        //Do something

        //If our output is still null, return a new object. 
        return output ?? new object();
    }
}

class MySubType : MyType
{
    public override object DoSomething()
    {
        object output = null;
        //Do something

        //Unlike the parent, we may now hand back null.
        return output;
    }
}

We could previously rely on the returned object never being null, but now we may get null back.
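And a sketch of the caller that trusted the parent’s guarantee :

MyType myType = new MySubType();
var result = myType.DoSomething();
var text = result.ToString(); //NullReferenceException : the parent promised never to return null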

Similar to the exceptions rule, it makes sense that when extending behaviour we might have no choice but to change how we handle inputs and outputs. But unlike the exceptions rule, I really like this one and try to abide by it.

Limiting General Behaviour Changes

I just want to quickly show another example that possibly violates the Liskov principle, but is up to interpretation. Take a look at the following code :

class Jar
{
    public virtual void Pour()
    {
        //Pour out the jar. 
    }
}

class JarWithLid : Jar
{
    public bool IsOpen { get; set; }

    public override void Pour()
    {
       if(IsOpen)
       {
            //Only pour if the jar is open. 
       }
    }
}

A somewhat trivial example, but if someone is depending on the Jar class and calling Pour, everything goes fine. If they then switch to using JarWithLid, their code no longer functions as intended, because they are now required to open the jar first.
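What makes this nastier than the exception examples is that nothing visibly fails :

Jar jar = new JarWithLid(); //IsOpen defaults to false
jar.Pour();                 //Silently pours nothing : no exception, no warning, just wrong behaviour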

In some ways, this is covered by the “pre-conditions cannot be strengthened” rule. It clearly adds a pre-condition : the jar must be opened before it can be poured. But on the other hand, in a more complex example a method may be hundreds of lines long and introduce various new behaviours that affect how the caller interacts with the object. We might classify some of them as “pre-conditions”, but we might also just call them behavioural changes that are in line with typical inheritance scenarios.

Overall, I feel like the Liskov principle should be about limiting “breaking” changes when swapping types for subtypes, whether that “breaking” change is an outright compiler error, a logic issue, or an unexpected behavioural change.

What’s Next?

Up next is the I in SOLID. That is, The Interface Segregation Principle.
