NOTE: This post was initially written in early 2017 (!!!) but has now been completely revamped in 2020 to be up to date with the .NET Core 3+ watch feature. The original article was written in pre-1.0 days and is no longer relevant. But this is still a killer feature and should be used far more often!

One of the most overlooked features of the .NET Core CLI is the “dotnet watch” command. It gives you a “live reload” of your running ASP.NET Core site without having to either rerun the “dotnet run” command or, worse, go through the “Stop the process in Visual Studio. Write your changes. Recompile and run again” routine. The latter can be very annoying when all you are trying to do is a simple one line fix.

If you are used to watches in other languages/tooling (Especially task runners like Gulp), then you will know how much of a boost watches are to productivity.

The Basics

I highly recommend creating a simple ASP.NET Core project to run through this tutorial with. The tooling can be a little finicky with large projects so it’s easier to get up and running with something small, then take what you’ve learned onto something a little larger.
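If you want to follow along, something like the following will scaffold a small API project to play with (the name just matches what I use later in this post) :

dotnet new webapi -n WatchExample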

For this demo, I have a simple controller that has a get method that returns a string value.

[Route("api/home")]
public class HomeController : Controller
{
	[HttpGet]
	public string Get()
	{
		return "Old Value";
	} 
}

Open a command prompt in your project directory, a terminal in VS Code, or even the Package Manager Console in Visual Studio and run the following command :

dotnet watch run

Note that if you have previously had to run the “dotnet run” command with other flags (e.g. “dotnet run -f net451” to specify the framework), this will still work using the watch command. Essentially it’s saying “Do a watch, and when something changes, do ‘this’ thing”, which in our case is the run command. You should see something similar to the following :

dotnet watch run
watch : Started
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development

This means you are up and running. If I go to http://localhost:5000/api/home, I should see my controller return “Old Value”.

Now I head back to my controller and I change the Get action to instead return “New Value”. As soon as I hit save on this file I see the following in my console window :

watch : Exited
watch : File changed: C:\Projects\WatchExample\Controllers\HomeController.cs
watch : Started
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://localhost:5000

What we see is that the watch immediately exits, and it tells us that a file has been changed in HomeController.cs and so it starts recompiling immediately. When we browse to http://localhost:5000/api/home we see our “New Value” shown and we didn’t have to do anything else in terms of manually recompiling or running a new dotnet run command.

On larger projects, the recompile process can actually be a little slow so it’s not always an instant “change code and immediately refresh the browser” moment. Especially if you have a large dependency tree that means several projects need to be rebuilt before the site is back up and running. But it’s still going to be faster than whatever manual process you are used to.
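One small tip for multi-project solutions : you don’t have to run the watch from the web project’s own directory. The watch tool should accept a project path, something along these lines (the path here is just an example) :

dotnet watch --project .\src\WatchExample\WatchExample.csproj run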

Debugging With dotnet watch

So previously, debugging while using dotnet watch was almost a fruitless exercise. Each time your watch started and ran, a new “dotnet.exe” process would spin up. But the issue was that it was impossible to find the “right” process to attach your debugger to. You might have a half dozen or so dotnet.exe processes running, and you sort of had to wing it, pick the one that had been created most recently, and hope for the best. Then each time you made a change, a *new* dotnet.exe would be spun up and your attached debugger was useless, with you having to start the attach to debugger process all over again. Ugh!

As of .NET Core 3+, this is now much much easier.

Suppose I have my project up and running on a watch. In Visual Studio I simply go Debug -> Attach to Process. I then filter all processes by the actual name of my project. In my case I called my project “WatchExample”, so I just start typing that into the filter box.

I click the Attach button and I’m away! Unfortunately each time you make a change the debugger is stopped while your project recompiles, but a helpful hint is to learn the “Reattach To Process” hotkey, which in default Visual Studio key bindings is “Shift + Alt + P”. This immediately attaches the debugger to the last process it was on (Which in our case is our project’s exe), and we are away debugging again!


In a previous post we talked about setting up IIS to host a .NET Core web app, and since then, I’ve had plenty of questions regarding why ASP.NET Core even has the Kestrel web server (The inbuilt web server of the .NET Core platform), if you just host it in IIS anyway? Especially now that .NET Core can run InProcess with IIS, it’s almost like having Kestrel is redundant. In some ways that may be true, and for *most* use cases you will want to use IIS. But here’s my quick rundown of why that might be (or might not be) the case.

Kestrel Features (Or Lack Thereof)

Now Kestrel is pretty fully featured, but it’s actually lacking a few fundamentals that may make it a bit of a blocker. Some of those (but not limited to) are :

  • HTTP access logs aren’t collected in Kestrel. For some, this doesn’t matter. But for others that use these logs as a debugging tool (Or pump them to something like Azure Diagnostics etc), this could be an issue.
  • Multiple apps on the same port is not supported in Kestrel, simply by design. When hosting on IIS, IIS itself listens on port 80 and “binds” websites to specific URLs. Kestrel on the other hand binds directly to an IP and port (still with a website URL if you like), and you can’t then start another .NET Core Kestrel instance on the same port. It’s a one-to-one mapping (there’s a small configuration sketch below this list).
  • Windows Authentication does not exist on Kestrel as it’s cross platform (more on that later).
  • IIS has a direct FTP integration setup if you deploy via this method (Kestrel does not)
  • Request Filtering (e.g. Blocking access to certain file extensions, folders, verbs etc) is much more fully featured in IIS (Although some of this can be “coded” in Kestrel in some cases).
  • Mime Type Mapping (e.g. A particular file extension being mapped to a particular mime type on response) is much better in IIS.

I would note that many feature comparisons floating around out there are very, very dated. Things like request limits (e.g. max file upload size/POST body size) and SSL certificates have been implemented in Kestrel for a few years now. Even the above list may slowly shrink over time, but for now, just know that there are certainly things missing from Kestrel that you may be used to having inside IIS.
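To make that concrete, here’s a minimal sketch of binding Kestrel to a specific port and setting a couple of request limits in code. The port and sizes are illustrative only; the Limits properties come from KestrelServerOptions :

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.ConfigureKestrel(options =>
            {
                //Kestrel owns this port outright, there is no IIS-style sharing. 
                options.ListenAnyIP(5000);

                //Request limits that used to be IIS-only territory. 
                options.Limits.MaxRequestBodySize = 10 * 1024 * 1024; //10MB
                options.Limits.MaxConcurrentConnections = 100;
            });
            webBuilder.UseStartup<Startup>();
        });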

Cross Platform

Obviously .NET Core, and by extension Kestrel, is cross platform. That means it runs on Windows, Linux and Mac. IIS on the other hand is Windows only and probably forever will be. For those not on Windows systems, the choice of even using IIS is a non-existent one. In these cases, you are comparing Kestrel to things like NGINX, which can act as a reverse proxy and forward requests to Kestrel.

In some of these cases, especially if you don’t have experience using NGINX or Apache, then Kestrel is super easy to run on Linux (e.g. Double click your published app).

Server Management

Just quickly and anecdotally, the management of IIS is much easier than managing a range of Kestrel web services. Kestrel itself can be run from the command line, but therein lies the issue : you often have to set up your application as a Windows Service (More info here : Hosting An ASP.NET Core Web App As A Windows Service In .NET Core 3) so that it will automatically start on server restarts etc. You end up with all these individual Windows Services that you have to install/manage separately. On top of that, everything being code based can be both a boon for source control and IaC (infrastructure as code), but it can also be a headache to configure features across a range of applications. If you are used to setting things up in IIS and think that it’s a good experience, then you are better off sticking with it.

Existing Setup Matters

Your existing setup may affect your decision to use Kestrel vs IIS in two very distinct ways.

If you are already using IIS, and you want to keep it that way and manage everything through one portal, then you are going to pick IIS no matter what. And it makes sense, keeping things simple is almost always the right route. And IIS has great support for hosting .NET Core web apps (Just check our guide out here : Hosting An ASP.NET Core Web Application In IIS).

But conversely, you may have IIS installed already, but it isn’t set up to host .NET Core apps and you don’t want to (or can’t) install anything on the machine. In these cases, Kestrel is great for sitting side by side with your existing web server while running completely in isolation. It can listen on a non-standard port, leaving your entire IIS setup intact. This also goes for hosting on a machine that doesn’t currently have IIS installed : Kestrel can be stood up immediately from code with little to no configuration, without modifying the machine’s features or running an installer of any kind. It’s basically completely portable.

Microsoft’s Recommendation

You’ll often find articles mentioning that Microsoft recommends using a reverse proxy, whether that be IIS, NGINX or Apache, to sit in front of Kestrel and be your public facing web server/proxy. Anecdotally, I’ve found this to be mentioned less and less as Kestrel gains more features. You will actually struggle to find any mention of *not* using Kestrel on public facing websites in the official documentation nowadays (For .NET Core 3+). That’s not to say that it’s a good idea to use Kestrel for everything, just that it’s less of a security risk these days.

My Final Take

Personally, I use both.

I use IIS as my de facto choice because, 9 times out of 10, when working in a Windows environment there is already a web server set up hosting .NET Framework apps. It just makes things easier to set up and manage, especially for developers who are new to .NET Core, as they don’t have another paradigm to learn.

I use Kestrel generally when hosting smaller APIs that either don’t have IIS set up, don’t need IIS (e.g. a web app that just gets run on the local machine for local use), or have IIS set up but I don’t want to install the hosting bundle. Personally, I generally don’t end up using many IIS features like Windows Auth or FTP, so I don’t miss them.


For the past few years I’ve been almost exclusively using Azure’s PaaS websites to host my .NET Core applications. I set up my Azure Devops instance to point to my Azure website, and at the click of a button my application is deployed without me really having to think too hard about “how” it’s being hosted.

Well, recently I had to set up a .NET Core application to run on a fresh server behind IIS, and while relatively straightforward, there were a few things I wish I knew beforehand. Nothing’s too hard, but some guides out there are waaayyy overkill and take hours to read, let alone implement what they are saying. So hopefully this is a bit more of a straightforward guide.

You Need The ASP.NET Core Hosting Bundle

One thing that I got stuck on early on was that for .NET Core to work inside IIS, you actually need to do an install of a “Hosting Module” so that IIS knows how to run your app.

This actually frustrated me a bit at first because I wanted to do “Self Contained” deploys where everything the app needed to run was published to the server. So… If I’m publishing what essentially amounts to the full runtime with my app, why the hell do I still need to install stuff on the server!? But, it makes sense. IIS can’t just magically know how to forward requests to your app, it needs just a tiny bit of help. Just in case someone is skimming this post, I’m going to bold it :

Self contained .NET Core applications on IIS still need the ASP.NET Core hosting bundle

So where do you get this “bundle”? Annoyingly it’s not on the main .NET Core homepage; you need to go to the download page for the specific version to get the latest. For example here : https://dotnet.microsoft.com/download/dotnet-core/3.1.

It can be maddening trying to find this particular download link. It will be on the right hand side buried in the runtime for Windows details.

Note that the “bundle” is the module packaged with the .NET Core runtime. So once you’ve installed this, for now at least, self contained deployments aren’t so great because you’ve just installed the runtime anyway. Although for minor version bumps it’s handy to keep doing self contained deploys because you won’t have to always keep pace with the runtime versions on the server.

After installing the .NET Core hosting bundle you must restart the server OR run an IISReset. Do not forget to do this!

In Process vs Out Of Process

So you’ve probably heard of the term “In Process” being bandied about in relation to .NET Core hosting for a while now. I know when it first came out in .NET Core 2.2, I read a bit about it but it wasn’t the “default” so didn’t take much notice. Well now the tables have turned so to speak, so let me explain.

From .NET Core 1.X to 2.2, the default way IIS hosted a .NET Core application was by running an instance of Kestrel (The .NET Core inbuilt web server), and forwarding the requests from IIS to Kestrel. Basically IIS acted as a proxy. This works but it’s slow since you’re essentially doing a double hop from IIS to Kestrel to serve the request. This method of hosting was dubbed “Out Of Process”.

In .NET Core 2.2, a new hosting model was introduced called “In Process”. Instead of IIS forwarding the requests on to Kestrel, it serves the requests from within IIS. This is much faster at processing requests because it doesn’t have to forward on the request to Kestrel. This was an optional feature you could turn on by using your csproj file.

Then in .NET Core 3.X, nothing changed per se in terms of how things were hosted. But the defaults were reversed, so now In Process was the default and you could use the csproj flag to run everything as Out Of Process again.

Or in tabular form :

Version         | Supports Out Of Process | Supports In Process | Default
.NET Core <2.2  | Yes                     | No                  | N/A
.NET Core 2.2   | Yes                     | Yes                 | Out Of Process
.NET Core 3.X   | Yes                     | Yes                 | In Process

Now to override the defaults, you can add the following to your csproj file (Picking the correct hosting model you want).

<PropertyGroup>
  <AspNetCoreHostingModel>InProcess/OutOfProcess</AspNetCoreHostingModel>
</PropertyGroup>

As to which one you should use? Typically, unless there is a specific reason you don’t want to use it, InProcess will give you much better performance and is the default in .NET Core 3+ anyway.

After reading this section you are probably sitting there thinking… Well.. So I’m just going to use the default anyway so I don’t need to do anything? Which is true. But many guides spend a lot of time explaining the hosting models and so you’ll definitely be asked questions about it from a co-worker, boss, tech lead etc. So now you know!

UseIIS vs UseIISIntegration

There is one final piece to cover before we actually get to setting up our website. Now *before* we got the “CreateDefaultBuilder” method as the default template in .NET Core, you had to build your processing pipeline yourself. So in your program.cs file you would have something like :

var host = new WebHostBuilder()
	.UseKestrel()
	.UseContentRoot(Directory.GetCurrentDirectory())
	.UseIISIntegration()
	.UseStartup<Startup>()
	.Build();

So here we can actually see that there is a call to UseIISIntegration. There is actually another call you may see out in the wild called UseIIS, without the integration. What’s the difference? It’s actually quite simple. UseIISIntegration sets up the out of process hosting model, and UseIIS sets up the InProcess model. So in theory you pick one or the other, but in practice CreateDefaultBuilder actually calls them both, and later on the SDK works out which one you are going to use based on the default or your csproj flag described above (More on that in the section below).
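As a rough illustration (this isn’t the exact internal code of CreateDefaultBuilder, just the spirit of it), registering both explicitly would look something like :

var host = new WebHostBuilder()
	.UseKestrel()
	.UseContentRoot(Directory.GetCurrentDirectory())
	.UseIIS()            //In process hosting module
	.UseIISIntegration() //Out of process reverse proxy support
	.UseStartup<Startup>()
	.Build();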

So again, something that will be handled for you by default, but you may be asked a question about.

Web.Config Shenanigans

One issue we have is that for IIS to understand how to talk to .NET Core, it needs a web.config file. Now if you’re using IIS to simply host your application but not using any additional IIS features, your application probably doesn’t have a web.config to begin with. So here’s what the .NET Core SDK does.

If you do not have a web.config in your application, when you publish your application, .NET Core will add one for you. It will contain details for IIS on how to start your application and look a bit like this :

<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <handlers>
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <aspNetCore processPath="dotnet" arguments=".\MyTestApplication.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
    </system.webServer>
  </location>
</configuration>

So all it’s doing is adding a handler for IIS to be able to run your application (Also notice it sets the hosting model to InProcess – which is the default as I’m running .NET Core 3.X).

If you do have a web.config, it will then append/modify your web.config to add in the handler on publish. So for example, if you are using web.config to configure… I don’t know, mime types. Or maybe using some basic Windows authorization. Then it’s basically going to append the handler to the bottom of your own web.config.

There’s also one more piece to the puzzle. If for some reason you decide that you want to add in the handler yourself (e.g. You want to manage the arguments passed to the dotnet command), then you can actually copy and paste the above into your own web.config.

But. There is a problem. 

The .NET Core SDK will also always try and modify this web.config on publish to be what it *thinks* the handler should look like. So for example I copied the above and fudged the name of the DLL it was passing in as an argument. I published and ended up with this :

arguments=".\MyTestApplication.dll .\MyTestApplicationasd.dll"

Notice how it’s gone “OK, you are running this weird dll called MyTestApplicationasd.dll, but I think you should run MyTestApplication.dll instead so I’m just gonna add that for you”. Bleh! But there is a way to disable this!

Inside your csproj you can add a special flag like so :

<PropertyGroup>
  <TargetFramework>netcoreapp3.0</TargetFramework>
  <IsTransformWebConfigDisabled>true</IsTransformWebConfigDisabled>
</PropertyGroup>

This tells the SDK don’t worry, I got this. And it won’t try and add in what it thinks your app needs to run under IIS.

Again, another section on “You may need to know this in the future”. If you don’t use web.config at all in your application then it’s unlikely you would even realize that the SDK generates it for you when publishing. It’s another piece of the puzzle that happens in the background that may just help you in the future understand what’s going on under the hood when things break down.

An earlier version of this section talked about adding your own web.config to your project so you could point IIS to your debug folder. On reflection, this was bad advice. I always had issues with projects locking and the “dotnet build” command not being quite the same as the “dotnet publish”. So for that reason, for debugging, I recommend sticking with IIS Express (F5), or Kestrel by using the dotnet run command. 

IIS Setup Process

Now you’ve read all of the above and you are ready to actually set up your website. Well that’s the easy bit!

First create your website in IIS as you would a standard .NET Framework site :

You’ll notice that I am pointing to the *publish* folder. As described in the section above about web.config, this is because my particular application does not have a web.config of its own, and therefore I cannot just point to my regular build folder, even if I’m just testing things out. I need to point to the publish folder where the SDK has generated a web.config for me.

You’ll also notice that in my case, I’m creating a new Application Pool. This is semi-important and I’ll show you why in a second.

Once you’ve created your website, go to your Application Pool list, select your newly created App Pool, and hit “Basic Settings”. From there, you need to ensure that .NET CLR Version is set to “No Managed Code”. This tells IIS not to kick off the .NET Framework pipeline for your .NET Core app.

Obviously if you want to use shared application pools, then you should create a .NET Core app pool that sets up No Managed Code.
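If you’d rather script that app pool setup than click through IIS Manager, something like the following appcmd call should achieve the same thing from an elevated prompt (“MyAppPool” being whatever you named your pool) :

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /managedRuntimeVersion:""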

And that’s it! That’s actually all you need to know to get up and running using IIS to host .NET Core! In a future post I’ll actually go through some troubleshooting steps, most notably the dreaded HTTP Error 403.14 which can mean an absolute multitude of things.


Note, this tutorial is about hosting an ASP.NET Core web app as a windows service, specifically in .NET Core 3.

If you are looking to host a web app as a service in .NET Core 2, check out this other tutorial : Hosting An ASP.NET Core Web Application As A Windows Service In .NET Core 2

If you are looking to run a Windows Service as a “worker” or for background tasks, then you’ll want this tutorial : Creating Windows Services In .NET Core – Part 3 – The “.NET Core Worker” Way


This is actually somewhat of a duplicate of a previous post I did here. But that was using .NET Core 2+, and since then, things have changed quite a bit. Well… Enough that when I tried to follow my own tutorial recently I was wondering what the hell I was going on about when nothing worked for me this time around.

Why A Web App As A Windows Service

So this tutorial is about running a Web App as a Windows Service. Why would that ever be the case? Why would you not have a web app running under something like IIS? Or why a Windows Service specifically?

Well the answer to why not under IIS is that in some cases you may not have IIS on the machine. Or you may have IIS but it’s not set up to host .NET Core apps anyway. In these cases you can do what’s called a self contained deploy (Which we’ll talk about soon), where the web app runs basically as an exe that you can double click and suddenly you have a fully fledged web server up and running – and portable too.

For the latter, why a windows service? Well if we follow the above logic and we have an exe that we can just click to run, then a windows service just gives us the ability to run on startup, run in the “background” etc. I mean, that’s basically all windows services are right? Just the OS running apps on startup and in the background.

Running Our Web App As A Service

The first thing we need to do is make our app compile down to an EXE. Well.. We don’t have to but it makes things a heck of a lot easier. To do that, we just need to edit our csproj and add the OutputType of exe. It might end up looking like so :

<PropertyGroup>
  <TargetFramework>netcoreapp3.0</TargetFramework>
  <OutputType>Exe</OutputType>
</PropertyGroup>

In previous versions of .NET Core you had to install the package Microsoft.AspNetCore.Hosting.WindowsServices, however as of right now with .NET Core 3+, you instead need to use Microsoft.Extensions.Hosting.WindowsServices. I tried searching around for when the change happened, and why, and maybe information about the differences, but other than opening up the source code I couldn’t find much out there. For now, take my word on it. We need to install the following package into our Web App :

Install-Package Microsoft.Extensions.Hosting.WindowsServices

Now there is just a single line we need to edit. Inside program.cs, you should have a “CreateHostBuilder” method. You might already have some custom configuration going on, but you just need to tack “UseWindowsService()” onto the end.

return Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        }).UseWindowsService();

And that’s all the code changes required!

Deploying Our Service

… But we are obviously not done yet. We need to deploy our service right!

Open a command prompt as an Administrator, and run the following command in your project folder to publish your project :

dotnet publish -c Release

Next we can use standard Windows Service commands to install our EXE as a service. So move your command prompt to your output folder (Probably along the lines of C:\myproject\bin\Release\netcoreapp3.0\publish). And run something like so to install as a service :

sc create MyApplicationWindowsService binPath= myapplication.exe

Doing the full install is usually pretty annoying to do time and time again, so what I normally do is create an install.bat and uninstall.bat in the root of my project to run a set of commands to install/uninstall. A quick note when creating these files : create them in something like Notepad++ to ensure that the file type is UTF8 *without BOM*, otherwise you get all sorts of weird errors.

The contents of my install.bat file looks like :

sc create MyService binPath= %~dp0MyService.exe
sc failure MyService actions= restart/60000/restart/60000/""/60000 reset= 86400
sc start MyService
sc config MyService start=auto

Keep the weird %~dp0 token there, as it expands to the directory the batch file itself lives in, meaning the service’s exe path stays correct no matter where you run the script from (Weird I know!).

And the uninstall.bat :

sc stop MyService
timeout /t 5 /nobreak > NUL
sc delete MyService

Ensure these files are set to copy if newer in Visual Studio, and now when you publish your project, you only need to run the .bat files from an administrator command prompt and you are good to go!
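For reference, marking the bat files to always be copied to the output folder is just a csproj tweak along these lines (adjust the file names to your own) :

<ItemGroup>
  <None Update="install.bat" CopyToOutputDirectory="PreserveNewest" />
  <None Update="uninstall.bat" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>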

Doing A Self Contained Deploy

We talked about it earlier that the entire reason for running the Web App as a Windows Service is so that we don’t have to install additional tools on the machine. But that only works if we are doing what’s called a “self contained” deploy. That means we deploy everything that the app requires to run right there in the publish folder rather than having to install the .NET Core runtime on the target machine.

All we need to do is run our dotnet release command with a few extra flags :

dotnet publish -c Release -r win-x64 --self-contained

This tells the .NET Core SDK that we want to release as self contained, and it’s for Windows.

Your output path will change from bin\Release\netcoreapp3.0\publish to bin\Release\netcoreapp3.0\win-x64\publish

You’ll also note the huge number of files in this new output directory and the size of the folder in general. But when you think about it, yeah, we are deploying the entire runtime so it should be this large.

Content Root

The fact that .NET Core is open source literally saves hours of debugging every single time I work on a greenfield project, and this time around is no different. I took a quick look at the actual source code of what the call to UseWindowsService does here. What I noticed is that it sets the content root specifically for when it’s running under a Windows Service. I wondered how this would work if I was reading a local file from disk inside my app while running as a Windows Service. Normally I would just write something like :

File.ReadAllText("myfile.json");

But… Obviously there is something special when running under a Windows Service context. So I tried it out and my API bombed. I had to check the Event Viewer on my machine and I found :

Exception Info: System.IO.FileNotFoundException: Could not find file 'C:\WINDOWS\system32\myfile.json'.

OK. So it looks like when running as a Windows Service, the “root” of my app thinks it’s inside System32. Oof. But, again, looking at the source code from Microsoft gave me the solution. I can simply use the same way they set the content root to load my file from the correct location :

File.ReadAllText(Path.Combine(AppContext.BaseDirectory, "myfile.json"));

And we are back up and running!
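As an aside, if you’d rather not sprinkle AppContext.BaseDirectory everywhere, another option is to lean on the content root that UseWindowsService() has already set, via the injected host environment. A quick sketch (MyFileReader is just a made-up example class, and myfile.json is the same hypothetical file as above) :

public class MyFileReader
{
    private readonly IHostEnvironment _environment;

    public MyFileReader(IHostEnvironment environment)
    {
        _environment = environment;
    }

    public string ReadFile()
    {
        //ContentRootPath points at the exe's folder when running as a service. 
        return File.ReadAllText(Path.Combine(_environment.ContentRootPath, "myfile.json"));
    }
}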

 


Recently I’ve been working a lot in .NET Core 3.0 and 3.1 projects, both upgrading existing 2.2 projects and on a couple of new greenfield projects. The thing that I’ve had to do in each and every one is switch from using the new System.Text.Json package back to using Newtonsoft.Json.

In almost all of them I’ve actually tried to keep going with System.Text.Json, but in the existing projects I haven’t had time to switch out things like custom JsonConverters or Newtonsoft.Json specific attributes on my models.

In new projects, I always get to the point where I just know how to do it in Newtonsoft. And as much as I want to try this shiny new thing, I have my own deadlines which don’t quite allow me to fiddle about with new toys.

So if you’re in the same boat as me and just need to get something out the door. The first thing you need is to install the following Nuget package :

Install-Package Microsoft.AspNetCore.Mvc.NewtonsoftJson

Then you need to add a specific call to your IMVCBuilder. This will differ depending on how you have set up your project. If you are migrating from an existing project you’ll have a call to “AddMvc()”, onto which you can tack the new call like so :

services.AddMvc().AddNewtonsoftJson();

However in new .NET Core 3+ projects, you have a different set of calls that replace MVC. So you’ll probably have one of the following :

services.AddControllers().AddNewtonsoftJson();
services.AddControllersWithViews().AddNewtonsoftJson();
services.AddRazorPages().AddNewtonsoftJson();

If this is an API you will likely have AddControllers, but depending on your project setup you could have the others also. Tacking AddNewtonsoftJson() onto the end means it will “revert” back to using Newtonsoft over System.Text.Json.
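As a bonus, AddNewtonsoftJson() also accepts an options callback if you need to tweak the serializer, using the same Newtonsoft settings object you’re probably used to. For example (the settings here are purely illustrative) :

services.AddControllers().AddNewtonsoftJson(options =>
{
    //Standard Newtonsoft.Json settings live on options.SerializerSettings
    options.SerializerSettings.NullValueHandling = NullValueHandling.Ignore;
    options.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
});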

 


I’ve recently had the opportunity to start a Specflow/Selenium end to end testing project from scratch and my gosh it’s been fun. I’m one of those people that absolutely love unit tests and trying to “trick” the code with complicated scenarios. End to end testing with Selenium is like that but on steroids. Seeing the browser flash in front of you and motor through tests is an amazing feeling.

But in saying that, a key part of using Selenium is the “ChromeWebDriver”. It’s the tool that actually allows you to manipulate the Google Chrome browser through Selenium. And let me tell you, there are a few headaches getting this set up that I really didn’t expect. Version errors, not finding the right exe, Nuget packages that actually include the exe but it’s the wrong version or can’t be found. Ugh.

If you are not that big into automation testing, you can probably skip this whole post. But if you use Specflow/Selenium even semi-regularly, I highly recommend bookmarking this post because I’m 99% sure you will hit at least one of these bugs when setting up a new testing project.

Chrome, Gecko and IE Drivers

While the below is mostly about using ChromeDriver, some of this is also applicable for Gecko (Firefox), and IE drivers. Obviously the error messages will be slightly different, but it’s also highly likely you will run into very similar issues.

Adding ChromeDriver.exe To Your Project

The first thing to note is that you’ve probably added the “Selenium.WebDriver” and maybe “Specflow” nuget packages. These however *do not* contain the actual ChromeDriver executable. They only contain the C# code required to interact with the driver, but *not* the driver itself. It is incredibly confusing at first but kinda makes sense because you may want to only use Chrome or only Firefox or a combination etc. So it’s left up to you to actually add the required driver EXEs.

If you try and run your Selenium tests without it, it will actually compile fine and look like it’s starting, only to bomb out with :

The chromedriver.exe file does not exist in the current directory or in a directory on the PATH environment variable.

Depending on your setup, it can also bomb out with :

The file C:\path\to\my\project\chromedriver.exe does not exist. 
The driver can be downloaded at http://chromedriver.storage.googleapis.com/index.html

So there are two ways to add ChromeDriver to your project. The first is that you can install a nuget package that will write it to your bin folder when building. The most common nuget package that does this is here : https://www.nuget.org/packages/Selenium.WebDriver.ChromeDriver/

But a quick note, as we will see below, this only works if everywhere you run the tests has the correct version of chrome that matches the driver. What?! You didn’t know that? That’s right. The version of ChromeDriver.exe will have a version like 79.0.1.1 that will typically only be able to run on machines that have chrome version 79 installed. The nuget package itself is typically marked with the version of Chrome you need, so it’s easy to figure out, but can still be a big pain in the butt to get going.

So with that in mind, the other option is to actually download the driver yourself from the chromium downloads page : https://chromedriver.chromium.org/downloads

You need to then drop the exe into your project. And make sure it’s set to copy if newer for your build. Then when building, it should show up in your bin folder. Personally, I found the manual download of the chromium driver to be handy when working in an enterprise environment where the version of chrome might be locked down by some group policy, or you are working with others who may have wildly different versions of chrome and you can do funky things like have different versions for different developers.

Passing The ChromeDriver Location

So you’ve downloaded ChromeDriver and when you build, you can see it in your bin folder, but everything is still blowing up with the same error, what gives?!

One of the more irritating things I found is that in so many tutorials, they new’d up a chromedriver instance like so :

ChromeDriver = new ChromeDriver();

Now this may have worked in .NET Framework (I haven’t tried), but at least for me in .NET Core, this never works. I think there must be something inside the constructor of ChromeDriver that looks up where its current executable is running (e.g. where the bin folder is), and in .NET Core this must be different from Full Framework.

In any case, you can change the constructor to instead take the folder location where it can find the driver. In my case I want that to be the bin folder :

ChromeDriver = new ChromeDriver(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location));

Obviously you can go ahead and change the path to anything which is super handy for differing dev setups. For example you could ask each dev to maintain their own version of chromedriver.exe somewhere on their C:\ drive, and then just pass that location into the constructor. Meaning that each developer can have a completely different version of chrome, and things will still run perfectly fine.

Versions Matter

We kinda touched on it above, but versions of ChromeDriver have to match the actual version of Chrome on the machine. If you are getting errors like so :

session not created: This version of ChromeDriver only supports Chrome version XX

Then you have a mismatch between versions.

The easiest way to rectify the issue is to manually download the correct version of ChromeDriver from here : https://chromedriver.chromium.org/downloads and force your code to use it. If you are using a nuget package for the driver, then it’s highly likely you would need to switch away from it to a manual setup to give you better control over versioning.

Azure Devops (And Others) Have ChromeDriver Environment Variables

This is one that I really wish I knew about sooner. When I tried to run my Selenium tests on Azure Devops, I was getting version issues where the version of Chrome on my hosted build agent was just slightly different from the one on my machine. I tried all sorts of crazy things like swapping out the exe version, but then I found, buried in a help doc, that there is actually an environment variable named ChromeWebDriver that holds the full path to a ChromeDriver guaranteed to match the Chrome browser on the agent. So I wrote some quick code so that if I was running inside Azure Devops, I would grab that environment variable and pass it into my ChromeDriver constructor.
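In code, that check ends up looking roughly like so (a sketch, with the bin folder approach from earlier as the fallback) :

var driverPath = Environment.GetEnvironmentVariable("ChromeWebDriver");
if (string.IsNullOrEmpty(driverPath))
{
    //Not on a hosted agent, fall back to the chromedriver.exe in our bin folder. 
    driverPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
}
ChromeDriver = new ChromeDriver(driverPath);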

Again, this is only for Azure Devops. But if you are using Gitlab, Bamboo, TeamCity, whatever! Check to see if there is an environment variable on hosted agents that carries the location of ChromeDriver.

If you are using your own build agents, then it’s also a good idea to think about following the same pattern. It’s super handy to have the build agent look after its own versions rather than wrangling something in code to fudge it all.


Opening Excel files in code was a painful experience long before .NET Core came along. In many cases, you actually needed the Excel application installed on the target/user’s machine to be able to open Excel files via code. If you’ve ever had to use those “OLE DB Jet” queries before, you know it’s not a great experience. Luckily there are some pretty good open source solutions now that don’t require Excel on the target machine. This is good for Windows users, so that you don’t have to install Excel on a target user’s machine or web server, but also for people hosting .NET Core applications on Linux (And even Mac/ARM) – where Excel is obviously nowhere to be seen!

My methodology for this article is pretty simple. Create a standardized excel workbook with a couple of sheets, couple of formulas, and a couple of special formatting cases. Read the same data out in every single library and see which one works the best for me. Simple! Let’s get going!

Note On CSV Formats

I should note that if you are reading a CSV, or more so a single excel sheet that doesn’t have formulas or anything “excel” specific on the sheet, you should instead just parse it using standard CSV technique. We have a great article here on parsing CSV in .NET Core that you should instead follow. CSV parsers are great for taking tabular data and deserializing it into objects and should be used where they can.

Example Data

I figure the best way to compare the different libraries on offer is to create a simple spreadsheet to compare the different ways we can read data out. The spreadsheet will have two “sheets”, where the second sheet references the first.

Sheet 1 is named “First Sheet” and looks like so :

Notice that cell A2 is simply the number “1”. Then in cell B2, we have a reference to cell A2. This is because we want to check if the libraries allow us to not only get the “formula” from the cell, but also what the computed value should be.

We are also styling cell A2 with a font color of red, and B2 has a full border (Although hard to see as I’m trying to show the formula). We will try and extract these styling elements out later.

Sheet 2 is named “Second Sheet” and looks like so :

So we are doing a simple “SUM” formula and referencing the first sheet. Again, this is so we can test getting both the formula and the computed value, but this time across different sheets. It’s not complicated for a person used to working with Excel, but let’s see how a few libraries handle it.

In general, in my tests I’m looking for my output to always follow the same format of :

Sheet 1 Data
Cell A2 Value   : 
Cell A2 Color   :
Cell B2 Formula :
Cell B2 Value   :
Cell B2 Border  :

Sheet 2 Data
Cell A2 Formula :
Cell A2 Value   :

That way when I show the code, you can pick the library that makes the most sense to you.

EPPlus

When I first started hunting around for parsing excel in .NET Core, I remembered using EPPlus many moons ago for some very lightweight excel parsing. The nuget package can be found here : https://www.nuget.org/packages/EPPlus/. It’s also open source so you can read through the source code if that’s your thing here : https://github.com/JanKallman/EPPlus

The code to read our excel spreadsheet looks like so :

static void Main(string[] args)
{
    using(var package = new ExcelPackage(new FileInfo("Book.xlsx")))
    {
        var firstSheet = package.Workbook.Worksheets["First Sheet"];
        Console.WriteLine("Sheet 1 Data");
        Console.WriteLine($"Cell A2 Value   : {firstSheet.Cells["A2"].Text}");
        Console.WriteLine($"Cell A2 Color   : {firstSheet.Cells["A2"].Style.Font.Color.LookupColor()}");
        Console.WriteLine($"Cell B2 Formula : {firstSheet.Cells["B2"].Formula}");
        Console.WriteLine($"Cell B2 Value   : {firstSheet.Cells["B2"].Text}");
        Console.WriteLine($"Cell B2 Border  : {firstSheet.Cells["B2"].Style.Border.Top.Style}");
        Console.WriteLine("");

        var secondSheet = package.Workbook.Worksheets["Second Sheet"];
        Console.WriteLine($"Sheet 2 Data");
        Console.WriteLine($"Cell A2 Formula : {secondSheet.Cells["A2"].Formula}");
        Console.WriteLine($"Cell A2 Value   : {secondSheet.Cells["A2"].Text}");
    }
}

Honestly what can I say. This was *super* easy and worked right out of the box. It picks up formulas vs text perfectly! The styles on our first sheet was also pretty easy to get going. The border is slightly annoying because you have to check the “Style” of the border, and if it’s a style of “None”, then it means there is no border (As opposed to a boolean for “HasBorder” or similar). But I think I’m just nit picking, EPPlus just works!

NPOI

NPOI is another open source option with a Github here : https://github.com/tonyqus/npoi and Nuget here : https://www.nuget.org/packages/NPOI/. It hasn’t had a release in over a year which isn’t that bad because it’s not like Excel itself has tonnes of updates throughout the year, but the Issues list on Github is growing a bit with a fair few bugs so keep that in mind.

The code to read our data using NPOI looks like so :

…..

…Actually you know what. I blew a bunch of time on this to try and work out the best way to use NPOI and the documentation is awful. The wiki is here : https://github.com/tonyqus/npoi/wiki/Getting-Started-with-NPOI but it has a few samples but most/all of them are about creating excel workbooks not reading them. I saw they had a link to a tutorial on how to read an Excel file which looked promising, but it was literally reading the spreadsheet and then dumping the text out.

After using EPPlus, I just didn’t see any reason to continue with this one. Almost every google answer will lead you to StackOverflow with people using NPOI with such specific use cases that it never really all pieced together for me.

ExcelDataReader

ExcelDataReader appeared in a couple of stackoverflow answers on reading excel in .NET Core. Similar to others in this list, it’s open source here : https://github.com/ExcelDataReader/ExcelDataReader and on Nuget here : https://www.nuget.org/packages/ExcelDataReader/

I wanted to make this work but…. It just doesn’t seem intuitive at all. ExcelDataReader works on the premise that you are reading “rows” and “columns” sequentially in almost a CSV fashion. That sort of works but if you are looking for a particular cell, it’s rough as hell.

Some example code :

static void Main(string[] args)
{
    System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);
    using (var stream = File.Open("Book.xlsx", FileMode.Open, FileAccess.Read))
    {
        using (var reader = ExcelReaderFactory.CreateReader(stream))
        {
            do
            {
                while (reader.Read()) //Each ROW
                {
                    for (int column = 0; column < reader.FieldCount; column++)
                    {
                        //Console.WriteLine(reader.GetString(column));//Will blow up if the value is decimal etc. 
                        Console.WriteLine(reader.GetValue(column));//Get Value returns object
                    }
                }
            } while (reader.NextResult()); //Move to NEXT SHEET

        }
    }
}

The first line in particular is really annoying (Everything blows up without it). But you’ll notice that we are moving through row by row (And sheet by sheet) trying to get values. On top of that, calling things like “GetString” doesn’t work if the value is a decimal (Implicit casts would have been better IMO). I also couldn’t find any way to get the actual formula of the cell. The above only returns the computed results.

I was going to slog my way through and actually get the result we were looking for, but it’s just not a library I would use.

Syncfusion

Syncfusion is one of those annoying companies that create pay-to-use libraries, upload them to nuget, and then in small print say you need to purchase a license or else. Personally, I would like to see Microsoft not allow paid libraries into the public Nuget repo. I’m going to include them here, but their licensing starts at $995 per year, per developer, so I don’t see much reason to use it for the majority of use cases. The nuget page can be found here https://www.nuget.org/packages/Syncfusion.XlsIO.Net.Core/

The code looks like :

static void Main(string[] args)
{
    ExcelEngine excelEngine = new ExcelEngine();
    using (var stream = File.Open("Book.xlsx", FileMode.Open, FileAccess.Read))
    {
        var workbook = excelEngine.Excel.Workbooks.Open(stream);

        var firstSheet = workbook.Worksheets["First Sheet"];
        Console.WriteLine("Sheet 1 Data");
        Console.WriteLine($"Cell A2 Value   : {firstSheet.Range["A2"].DisplayText}");
        Console.WriteLine($"Cell A2 Color   : {firstSheet.Range["A2"].CellStyle.Font.RGBColor.Name}");
        Console.WriteLine($"Cell B2 Formula : {firstSheet.Range["B2"].Formula}");
        Console.WriteLine($"Cell B2 Value   : {firstSheet.Range["B2"].DisplayText}");
        Console.WriteLine($"Cell B2 Border  : {firstSheet.Range["B2"].CellStyle.Borders.Value}");
        Console.WriteLine("");

        var secondSheet = workbook.Worksheets["Second Sheet"];
        Console.WriteLine($"Sheet 2 Data");
        Console.WriteLine($"Cell A2 Formula : {secondSheet.Range["A2"].Formula}");
        Console.WriteLine($"Cell A2 Value   : {secondSheet.Range["A2"].DisplayText}");
    }
}

So not bad. I have to admit, I fiddled around trying to understand how it worked out borders (As the above code doesn’t work), but gave up. The font color also took some fiddling, where the library returns non-standard objects as the color. Some of the properties for the actual data are also a bit confusing, where you have Value, Text, DisplayText etc. all returning slightly different things, so you sort of have to spray and pray and see which one works.

If EPPlus didn’t exist, and Syncfusion wasn’t fantastically overpriced, this library would actually be pretty good.

TL;DR;

Use EPPlus. https://github.com/JanKallman/EPPlus

 


This article is part of a series on creating Windows Services in .NET Core.

Part 1 – The “Microsoft” Way
Part 2 – The “Topshelf” Way
Part 3 – The “.NET Core Worker” Way


This article has been a long time coming I know. I first wrote about creating Windows Services in .NET Core all the way back in September (3 months ago). And on that post a helpful reader (Shout out to Saeid!) immediately commented that in .NET Core 3, there is a brand new way to create Windows Services! Doh! It reminds me of the time I did my 5 part series on Azure Webjobs in .NET Core, and right as I was completing the final article, a new version of the SDK got released with a tonne of breaking changes making me have to rewrite a bunch.

Thankfully, this isn’t necessarily a “breaking change” to how you create Windows Services, the previous articles on doing it the “Microsoft Way” and the “Topshelf Way” are still valid, but this is just another way to get the same result (Maybe with a little less cursing to the programming gods).

Looking to monitor your service?

Just a quick note from our sponsor elmah.io. Once your Windows Service is actually up and running, how are you intending to monitor it? elmah.io provides an awesome feature called “Heartbeats” to monitor how your background tasks are getting on, and if they are running as they should be. elmah.io comes with a 21 day, no credit card required trial so there’s no reason not to jump in and have a play.

The Setup

The first thing you need to know is that you need .NET Core 3.0 installed. At the time of writing, .NET Core 3.1 has just shipped and Visual Studio should be prompting you to update anyway. But if you are trying to do this in a .NET Core 2.X project, it’s not going to work.

If you like creating projects from the command line, you need to create a new project as the type “worker” :

dotnet new worker

If you are a Visual Studio person like me, then there is actually a template inside Visual Studio that does the exact same thing.

Doing this creates a project with essentially two files. You will have your program.cs which is basically the “bootstrapper” for your app. And then you have something called worker.cs which is where the logic for your service goes.

It should be fairly easy to spot, but to add extra background services to this program to run in parallel, you just need to create a new class that inherits from BackgroundService :

public class MyNewBackgroundWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            //Do something, then wait a bit before checking again. 
            await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
        }
    }
}

Then in our program.cs file, we just add the worker to our service collection :

.ConfigureServices((hostContext, services) =>
{
    services.AddHostedService<Worker>();
    services.AddHostedService<MyNewBackgroundWorker>();
});

AddHostedService has actually been in the framework for quite some time as a “background service” type task runner that typically runs underneath your web application. We’ve actually done an article on hosted services in ASP.NET Core before, but in this case, the hosted service is basically the entire app rather than it being something that runs behind the scenes of your web app.

Running/Debugging Our Application

Out of the box, the worker template has a background service that just pumps out the datetime to the console window. Let’s just press F5 on a brand new app and see what we get.

info: CoreWorkerService.Worker[0]
      Worker running at: 12/07/2019 08:20:30 +13:00
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.

We are up and running immediately! We can leave our console window open to debug the application, or close the window to exit. Compared to the hell we went through trying to debug our Windows Service when creating it the “Microsoft” way, this is like heaven.

Another thing to note is that really what we have in front of us is a platform for writing console applications. In the end we are only writing out the time to the console window, but we are also doing that via Dependency Injection, creating a hosted worker. We can use this DI container to also inject in repositories, set environments, read configuration etc.

The one thing it’s not yet is a windows service…

Turning Our App Into A Windows Service

We need to add the following package to our app :

Install-Package Microsoft.Extensions.Hosting.WindowsServices

Next, head to our program.cs file and modify it by adding a call to “UseWindowsService()”.

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) =>
    {
        services.AddHostedService<Worker>();
    }).UseWindowsService();

And that’s it!

Running our application normally is just the same and everything functions as it was. The big difference is that we can now install everything as a service.

To do that, first we need to publish our application. In the project directory we run :

dotnet publish -r win-x64 -c Release

Note in my case, I’m publishing for Windows X64 which generally is going to be the case when deploying a Windows service.

Then all we need to do is run the standard Windows Service installer. This isn’t .NET Core specific but is instead part of Windows :

sc create TestService binPath= C:\full\path\to\publish\dir\WindowsServiceExample.exe

As always, the other commands available to you (including starting your service) are :

sc start TestService
sc stop TestService
sc delete TestService

And checking our services panel :

It worked!

Installing On Linux

To be honest, I don’t have a hell of a lot of experience with Linux. But the general gist is…

Instead of installing Microsoft.Extensions.Hosting.WindowsServices, you need to install Microsoft.Extensions.Hosting.Systemd. And then instead of calling UseWindowsService(), you call UseSystemd().

Obviously your dotnet publish and installation commands will vary, but more or less you can create a “Windows Service” that will also run on Linux!
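In code, the program.cs change is near identical (a sketch assuming the Systemd package above is installed) :

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) =>
    {
        services.AddHostedService<Worker>();
    }).UseSystemd();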

Microsoft vs Topshelf vs .NET Core Workers

So we’ve now gone over 3 different ways to create Windows Services. You’re probably sitting there going “Well… Which one should I choose?”. Immediately, let’s bin the first, old school Microsoft way of doing things. It’s hellacious to debug and really doesn’t have anything going for it.

That leaves us with Topshelf and .NET Core workers. In my opinion, I like .NET Core Workers for fitting effortlessly into the .NET Core ecosystem. If you’re already developing in ASP.NET Core, then everything just makes sense creating a worker. On top of that, when you create a BackgroundService, you can actually lift and shift that to run inside an ASP.NET Core website at any point which is super handy. The one downside is the installation. Having to use SC commands can be incredibly frustrating at times and Topshelf definitely has it beat there.

Topshelf in general is very user friendly and has the best installation process for Windows Services. But it’s also another library to add to your list and another “framework” to learn, which counts against it.

Topshelf or .NET Core Workers, take your pick really.


These days, people who are “fullstack” developers are considered to be unicorns. That is, it’s seemingly rare for new grad developers to be fullstack; instead they are picking a development path and sticking with it. It wasn’t really like that 10-15 years ago. When I first started developing commercially I obviously got to learning ASP.NET (Around .NET 2-ish), but you better believe I had to learn HTML/CSS and the javascript “framework” of the day – jQuery. There was no “fullstack” or “front end” developers, you were just a “developer”.

Lately my javascript framework of choice has been Angular. I started with AngularJS 1.6 and took a bit of a break, but in the last couple of years I’ve been working with Angular all the way up to Angular 8. Even though Angular’s documentation is actually pretty good, there’s definitely been a few times where I feel I’ve cracked a chestnut of a problem and thought “I should really start an Angular blog to share this”. After all, that’s exactly how this blog started. In the dark days of .NET Core (before the first release even!), I was blogging here trying to help people running into the same issues as me.

And so, I’ve started Tutorials For Angular. I’m not going to profess to be a pro in Angular (Or even javascript), but I’ll be sharing various tips and tricks that I’ve run into in my front end development journey. Content is a bit light at the moment but I have a few long form articles in the pipeline on some really tricky stuff that stumped me when I got back into the Angular groove, so if that sounds like you, come on over and join in!


Rate limiting web services and APIs is nothing new, but I would say over the past couple of years the “leaky bucket” strategy of rate limiting has risen in popularity, to the point where it’s almost the de facto standard these days. The leaky bucket strategy is actually quite simple – made even easier by the fact that its name comes from a very physical, real world scenario.

What Is A Leaky Bucket Rate Limit?

Imagine a physical 4 litre bucket that has a hole in it. The hole leaks out 1 litre of water every minute. You can dump 4 litres of water into the bucket all at once just fine, but it will still only flow out at 1 litre per minute until it’s empty. Translating that to API calls : if you filled the bucket immediately, after 1 minute you could make 1 more request, but then you’d have to wait another minute. Or you could wait 2 minutes and make 2 at once, etc. The general idea behind using a leaky bucket over something like “1 call per second” is that it allows you to “burst” through calls until the bucket is full, then wait a period of time until the bucket drains. However, even while the bucket is draining you can trickle in calls if required.

I generally see it applied on APIs where completing a full operation may take 2 or 3 API calls, so limiting to 1 per second is pointless. Allowing a caller to burst through enough calls to complete their operation, then back off, is a bit more realistic.
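
To make the analogy concrete, that 4 litre bucket maps directly onto the configuration class we’ll build shortly (the property names come from the code below) :

//The 4 litre bucket from the analogy, expressed with the BucketConfiguration class below.
var fourLitreBucket = new BucketConfiguration
{
    MaxFill = 4,                                 //The bucket holds at most 4 requests.
    LeakRate = 1,                                //1 request leaks out...
    LeakRateTimeSpan = TimeSpan.FromMinutes(1)   //...every minute.
};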

Implementing The Client

This article is going to talk about a leaky bucket “client”. So I’m the one calling a web service that has a leaky bucket rate limitation in place. In a subsequent article we will look at how we implement this on the server end.

Now I’m the first to admit this is unlikely to win any awards, and I typically shoot myself in the foot with threadsafe coding. But here’s my attempt at a leaky bucket client that, for my purposes, worked a treat. It was just for a small personal project though, so your mileage may vary.

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class LeakyBucket
{
    private readonly BucketConfiguration _bucketConfiguration;
    private readonly ConcurrentQueue<DateTime> currentItems;
    private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

    private Task leakTask;

    public LeakyBucket(BucketConfiguration bucketConfiguration)
    {
        _bucketConfiguration = bucketConfiguration;
        currentItems = new ConcurrentQueue<DateTime>();
    }

    public async Task GainAccess(TimeSpan? maxWait = null)
    {
        //Only allow one thread at a time in. If we time out waiting, bail out 
        //rather than releasing a semaphore we never acquired.
        if (!await semaphore.WaitAsync(maxWait ?? TimeSpan.FromHours(1)))
        {
            throw new TimeoutException("Timed out waiting to enter the bucket.");
        }

        try
        {
            //If this is the first call, kick off our long running task to leak the bucket. 
            if (leakTask == null)
            {
                leakTask = Task.Factory.StartNew(Leak, TaskCreationOptions.LongRunning);
            }

            while (true)
            {
                //The bucket is full, so wait a second and check again. 
                if (currentItems.Count >= _bucketConfiguration.MaxFill)
                {
                    await Task.Delay(1000);
                    continue;
                }

                //There's room in the bucket, so take a slot and return. 
                currentItems.Enqueue(DateTime.UtcNow);
                return;
            }
        }
        finally
        {
            semaphore.Release();
        }
    }

    //Infinite loop to keep leaking. 
    private void Leak()
    {
        //Wait for our first queue item. 
        while (currentItems.IsEmpty)
        {
            Thread.Sleep(1000);
        }

        while (true)
        {
            Thread.Sleep(_bucketConfiguration.LeakRateTimeSpan);

            //Snapshot how many items to leak up front, otherwise dequeuing 
            //shrinks Count mid-loop and we leak fewer than LeakRate.
            var itemsToLeak = Math.Min(currentItems.Count, _bucketConfiguration.LeakRate);
            for (int i = 0; i < itemsToLeak; i++)
            {
                currentItems.TryDequeue(out _);
            }
        }
    }
}

class BucketConfiguration
{
    public int MaxFill { get; set; }
    public TimeSpan LeakRateTimeSpan { get; set; }
    public int LeakRate { get; set; }
}

There is a bit to unpack there but I’ll do my best.

  • We have a class called BucketConfiguration which specifies how full the bucket can get, and how much it leaks.
  • Our main method is called “GainAccess” and this will be called each time we want to send a request.
  • We use a SemaphoreSlim just in case this is used in a threaded scenario, so that calls queue up and we don’t get tangled up in our own mess.
  • On the first call to gain access, we kick off a thread that is used to “empty” the bucket as we go.
  • Then we enter a loop. If the number of items on the queue has reached the max fill, we just wait a second and try again.
  • When there is room on the queue, we push our timestamp on and return, thus gaining access.

Now I’ve used a queue here, but you really don’t need to; it’s just helpful for debugging which calls we are leaking. A BlockingCollection or something similar would work just as well. Notice that we also kick off a dedicated thread to do our leaking. Because the bucket empties at a constant rate, we need that thread to be “dripping” out requests in the background.

And finally, everything is async, including our semaphore (if you are wondering why I didn’t just use the lock keyword, it’s because you can’t await inside a lock). This means we hopefully don’t jam up threads waiting to send requests. It’s not foolproof of course, but it’s better than hogging threads when we are essentially spin-waiting.
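
As a quick illustration of that lock limitation (a hypothetical snippet; syncRoot is just an illustrative field, and this won’t compile) :

lock (syncRoot)
{
    //Compiler error CS1996 : cannot await in the body of a lock statement.
    await Task.Delay(1000);
}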

The Client In Action

I wrote a quick console application to show things in action. So for example :

static async Task Main(string[] args)
{
    //A bucket that holds 4 requests and leaks 1 request every 5 seconds.
    LeakyBucket leakyBucket = new LeakyBucket(new BucketConfiguration
    {
        LeakRate = 1, 
        LeakRateTimeSpan = TimeSpan.FromSeconds(5), 
        MaxFill = 4
    });

    while (true)
    {
        //Wait for a free slot in the bucket before doing our "work".
        await leakyBucket.GainAccess();
        Console.WriteLine("Hello World! " + DateTime.Now);
    }
}

Running this we get :

Hello World! 24/11/2019 5:08:26 PM
Hello World! 24/11/2019 5:08:26 PM
Hello World! 24/11/2019 5:08:26 PM
Hello World! 24/11/2019 5:08:26 PM
Hello World! 24/11/2019 5:08:31 PM
Hello World! 24/11/2019 5:08:36 PM
Hello World! 24/11/2019 5:08:41 PM
Hello World! 24/11/2019 5:08:46 PM

Makes sense. We do our run of 4 calls immediately, but then we have to back off to doing just 1 call every 5 seconds.

That’s pretty simple, but we can also handle more complex scenarios, such as leaking 2 requests every 5 seconds instead of 1. We change our leaky bucket to :

LeakyBucket leakyBucket = new LeakyBucket(new BucketConfiguration
{
    LeakRate = 2, 
    LeakRateTimeSpan = TimeSpan.FromSeconds(5), 
    MaxFill = 4
});

And what do you know! We see our burst of 4 calls, then every 5 seconds another 2 drop in at once.

Hello World! 24/11/2019 5:10:09 PM
Hello World! 24/11/2019 5:10:09 PM
Hello World! 24/11/2019 5:10:09 PM
Hello World! 24/11/2019 5:10:09 PM
Hello World! 24/11/2019 5:10:14 PM
Hello World! 24/11/2019 5:10:14 PM
Hello World! 24/11/2019 5:10:19 PM
Hello World! 24/11/2019 5:10:19 PM
Hello World! 24/11/2019 5:10:24 PM
Hello World! 24/11/2019 5:10:24 PM

 

ENJOY THIS POST?
Join over 3.000 subscribers who are receiving our weekly post digest, a roundup of this weeks blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.