If you’re up to date on your dotnet tooling, then you are probably using the latest dotnet CLI command for adding packages (dotnet add package). And on top of that, you are probably wondering where the hell all your packages went, and where your packages.config is. I know at least for me it was extremely confusing adding a nuget package and not seeing a packages folder fill up. So let’s run through some of the changes.

What Is The Dotnet CLI Command “dotnet add package” Actually Doing?

So when you run the command “dotnet add package”, what is it doing under the hood? Well, when it boils down to it, it’s actually just forwarding your command on to “nuget.exe” anyway. If you dig into the source code, you eventually end up at a class called “NuGetForwardingApp” (Source) that takes your dotnet command and transforms it into the original nuget command. This makes sense, as nuget will still be used with the full .net framework, so it would be pointless for the .net core team to completely rebuild a package manager from scratch, right?

As a side note, you’ll often find this with dotnet CLI commands. Often they are just a wrapper or a skin over things like nuget or msbuild to make your life a little easier (Because who hasn’t wasted a day trying to get a finicky msbuild command to work!).

Where Is My packages.config?

Previously when you added a nuget package, you would end up with a packages.config file in the root of your project directory that told nuget what dependencies you had. You’ll quickly notice that this is not the case anymore, so where are your dependencies defined? In the .csproj of course. Let’s say I create a new project with a reference to EntityFramework, my csproj would look a bit like the following :
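A rough sketch of what that csproj might contain (the target framework and package version here are illustrative and will depend on when you run the command) :

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="EntityFramework" Version="6.1.3" />
  </ItemGroup>

</Project>
```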

The csproj format used for .net core projects is now very clean, so it makes viewing your dependencies as easy as it ever was with project.json. The other benefit is that you have all your dependencies in a single file. When you add project references (or hardcoded DLL references), it’s all contained within the csproj.

Where Is My Packages Folder?

You are probably used to having a packages folder in your solution that holds the nuget downloads. This is now gone, replaced by a global packages folder. By default this is located at “%userprofile%\.nuget\packages”. Opening this folder for the first time might give you a heart attack with how much is in there, and I suspect that “cleaning” this folder will become the fix for various bugs in the future.

It certainly makes sense to have a global folder on some level though, as you will often be using the exact same version of the same library across many projects. With ASP.net core itself being delivered via nuget, it also makes sense to not have to download it over and over for each project.

It’s also important to note that the location of “%userprofile%\.nuget\packages” is just the default; you are able to edit your nuget.config at either the global level or the per project level to specify another location. But be aware that if you do this at the project level and give each project its own packages folder, you will download all of ASP.net core into that packages folder all over again.

What Happens When I Publish?

When it comes to publishing, there is really nothing to worry about. When you publish, .net core works out what packages you need and moves them into your publish destination folder for a standalone solution. You don’t, for example, need this packages folder on your server; it’s only for development purposes.
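For example, a publish is just a single command, and the output folder is a self contained deployment (the output path here is just an example) :

```
dotnet publish -c Release -o ./publish
```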

Editing Nuget.config

As discussed earlier, you can edit your nuget.config file to change your local global cache location, add nuget feeds, and configure other settings such as HTTP proxies, default package versions etc. The Microsoft documentation for editing your nuget.config can be found here. A handy tip for generating a nuget.config file at the project level is the following command from a command line/bash/terminal.
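Presumably that command is the built-in template (this assumes an SDK recent enough to ship with the nugetconfig template) :

```
dotnet new nugetconfig
```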

Be aware that adding a nuget.config file at the project level (even a completely empty one) can have bad side effects. I’ve found that settings don’t exactly “fall through”: if a setting is missing from the project level nuget.config, it doesn’t then fall back to the global config, it just treats the setting as false/missing.

While .net core ships with its service collection dependency injection framework as a first class citizen, some developers may still prefer to stick with the third party DI framework they used in full framework .net. And for good reason; the .net core DI is rather basic when it comes to options for injecting and auto binding. Things like auto binding all interfaces, or being able to inject different services based on parameter names, class names etc, are all lost.

Luckily Autofac, LightInject, DryIOC and StructureMap do have the ability to be used in .net core (with others slowly making the move), but there are a couple of gotchas to work through. Namely, you have to remember that things like IOptions, ILogger, HttpContext and other framework classes need to be available in the DI framework by default. On top of that, libraries that already use the ServiceCollection of .net core need to be able to be dropped in and used as per normal.

Almost all third party DI frameworks have adopted the same pattern for .net core. The general steps for any third party DI library are :

  • ConfigureServices in your startup.cs file should return IServiceProvider not void
  • You should register framework services as you normally would within the .net core service collection
  • Build your IOC container and register services as normal
  • Push the .net core services collection into the IOC container
  • Build your container and return the service provider

Sound complicated? Don’t worry, sample code will be provided for each DI framework, as well as any other gotchas you need to know about.

Autofac In ASP.net Core

First install the following Nuget package :
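(Assuming the standard Autofac integration package of the time :)

```
Install-Package Autofac.Extensions.DependencyInjection
```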

Here is the code for your startup.cs :
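Something along these lines (a sketch of the documented Autofac pattern; IMyService/MyService are placeholder registrations) :

```csharp
using Autofac;
using Autofac.Extensions.DependencyInjection;

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var builder = new ContainerBuilder();
    builder.RegisterType<MyService>().As<IMyService>();

    // Push the framework services into the Autofac container.
    builder.Populate(services);

    var container = builder.Build();
    return new AutofacServiceProvider(container);
}
```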

DryIOC In ASP.net Core

First install the following Nuget package :
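(Assuming the standard DryIoc adapter package :)

```
Install-Package DryIoc.Microsoft.DependencyInjection
```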

DryIOC prefers to keep its registrations out of the startup.cs file (which is probably a good idea). So create a class called “CompositionRoot” that has a constructor taking an IRegistrator, and register your services in there.
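A sketch of what that might look like (IMyService/MyService are placeholders) :

```csharp
using DryIoc;

public class CompositionRoot
{
    public CompositionRoot(IRegistrator registrator)
    {
        registrator.Register<IMyService, MyService>(Reuse.Transient);
    }
}
```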

Then your startup.cs should look like the following :
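Something like the following, sketched from the DryIoc adapter docs of the time :

```csharp
using DryIoc;
using DryIoc.Microsoft.DependencyInjection;

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    return new Container()
        .WithDependencyInjectionAdapter(services)
        .ConfigureServiceProvider<CompositionRoot>();
}
```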

LightInject In ASP.net Core

First install the following Nuget package :
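(Assuming the standard LightInject adapter package :)

```
Install-Package LightInject.Microsoft.DependencyInjection
```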

Here is the code for your startup.cs:
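A sketch of the LightInject pattern (IMyService/MyService are placeholders) :

```csharp
using LightInject;
using LightInject.Microsoft.DependencyInjection;

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var container = new ServiceContainer();
    container.Register<IMyService, MyService>();

    // Push the framework services into the LightInject container and build the provider.
    return container.CreateServiceProvider(services);
}
```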

StructureMap In ASP.net Core

First install the following Nuget package :
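(Assuming the standard StructureMap adapter package :)

```
Install-Package StructureMap.Microsoft.DependencyInjection
```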

Note I also had an issue where I had to install StructureMap itself manually for some reason. If things aren’t working correctly (e.g. Intellisense is going wild), install the following Nuget package as well.
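```
Install-Package StructureMap
```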

StructureMap is another IOC container that prefers a separate class (or classes) to handle your registrations. Let’s just create a simple ServicesRegistry class.
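A minimal sketch (IMyService/MyService are placeholders) :

```csharp
using StructureMap;

public class ServicesRegistry : Registry
{
    public ServicesRegistry()
    {
        For<IMyService>().Use<MyService>();
    }
}
```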

In your startup.cs add the following :
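Roughly the following, per the StructureMap adapter pattern :

```csharp
using StructureMap;

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var container = new Container();
    container.Configure(config =>
    {
        config.AddRegistry(new ServicesRegistry());

        // Push the framework services into the StructureMap container.
        config.Populate(services);
    });

    return container.GetInstance<IServiceProvider>();
}
```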

Unity In ASP.net Core

Unity is close to being dead in the water. It would appear, according to this Github issue, that .net core support won’t be coming any time soon.

Windsor In ASP.net Core

At the time of writing, Windsor does not have support for ASP.net core. You can follow the progress in the Github Issue here.

Ninject In ASP.net Core

At the time of writing, Ninject does not have support for ASP.net core.

If you’ve never used a dependency injection framework before, then the new Services DI built into .net core could be a bit daunting. Most of all, understanding the differences between transient, singleton and scoped service registrations can be easy to begin with, but tough to master. It seems simple on the surface (“register this interface as this service”), but there are a couple of gotchas along the way. Hopefully after reading this, you will have a better grasp on the different types of lifetimes you can use within your application, and when to use each one.

Transient Lifetime

If in doubt, make it transient. That’s really what it comes down to. Adding a transient service means that each time the service is requested, a new instance is created.

In the example below, I have created a simple service named “MyService” and added an interface. I register the service as transient and ask for the instance twice. In this case I am asking for it manually, but in most cases you will be asking for the service in the constructor of a controller/class.
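A sketch of that test (IMyService/MyService are placeholder types, and the Assert is just to illustrate the outcome) :

```csharp
public interface IMyService { }
public class MyService : IMyService { }

var services = new ServiceCollection();
services.AddTransient<IMyService, MyService>();
var provider = services.BuildServiceProvider();

var instance1 = provider.GetService<IMyService>();
var instance2 = provider.GetService<IMyService>();

Debug.Assert(instance1 != instance2); // Passes - a new instance is created each time.
```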

This passes with flying colors. The instances are not the same and the .net core DI framework creates a new instance each time. If you were creating instances of services manually in your code without a DI framework, then transient lifetime is going to be pretty close to a drop in.

One thing that I should add is that there was a time when it was all the rage to stop using transient lifetimes and move towards singletons (explained below). The thinking was that instantiating a new instance each time a service was requested was a performance hit. In my experience this only happened on huge monoliths with massive/complex dependency trees. The majority of cases I saw of trying to avoid transient lifetimes ended up breaking functionality, because the singletons didn’t function how people thought they would. I would say if you are having performance issues, look elsewhere.

Singleton Lifetime

A singleton is an instance that will last the entire lifetime of the application. In web terms, it means that after the initial request of the service, every subsequent request will use the same instance. This also means it spans across web requests (So if two different users hit your website, the code still uses the same instance). The easiest way to think of a singleton is if you had a static variable in a class, it is a single value across multiple instances.

Using our example from above :
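The same sketch, with only the registration changed :

```csharp
var services = new ServiceCollection();
services.AddSingleton<IMyService, MyService>();
var provider = services.BuildServiceProvider();

var instance1 = provider.GetService<IMyService>();
var instance2 = provider.GetService<IMyService>();

Debug.Assert(instance1 != instance2); // Blows up - both variables point at the same instance.
```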

We are now adding our service as a singleton and our Assert statement from before now blows up because the two instances are actually the same!

Now why would you ever want this? For the most part, it’s great to use when you need to “share” data inside a class across multiple requests because a singleton holds “state” for the lifetime of the application. The best example was when I needed to “route” requests in a round robin type fashion. Using a singleton, I can easily manage this because every request is using the same instance.

Transient Inside Singletons

Now the topic of “transient inside singletons” probably deserves its own entire article. It’s the number one bug when people start introducing singletons to their applications. Consider the following two classes.
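Something along these lines (class names are placeholders) :

```csharp
public class MyTransientService { }

public class MySingletonService
{
    public MyTransientService TransientService { get; }

    public MySingletonService(MyTransientService transientService)
    {
        TransientService = transientService;
    }
}
```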

So we have two services, one named “MySingletonService” and one named “MyTransientService” inside it.

Then our services registration looks like the following.
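A sketch, with asserts to illustrate what happens :

```csharp
services.AddSingleton<MySingletonService>();
services.AddTransient<MyTransientService>();
var provider = services.BuildServiceProvider();

var first = provider.GetService<MySingletonService>();
var second = provider.GetService<MySingletonService>();

Debug.Assert(first == second);                                   // Passes - it's a singleton.
Debug.Assert(first.TransientService != second.TransientService); // Blows up - same instance!
```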

If we ran this, what could we expect? It actually blows up on the second Assert. But why? In our very first example at the top of the page, when we registered a service as Transient they weren’t the same but now they are? What gives?!

The reason lies in the wording of how DI works. Transient creates a new instance of the service each time the service is requested. When we first request an instance of the singleton parent class, it creates that instance and all its dependencies (in this case our transient class). The second time we request that singleton class, it’s already been created for us, so it doesn’t go down the tree creating dependencies; it simply hands us the first instance back. Doh! The same is true of other lifetimes, like scoped inside singletons.

Where does this really bite you? In my worst experience with Singletons, someone had decided the smart thing to do in their application was make the entire service layer singletons. But what that meant was that all the database repository code, entity framework contexts, and many other classes that should really be transient in nature then became a single instance being passed around between requests.

Again, use Singleton if you have a real use for it. Don’t make things singleton because you think it’s going to save on performance.

Scoped Lifetime

Scoped lifetime objects often get simplified down to “one instance per web request”, but it’s actually a lot more nuanced than that. Admittedly in most cases, you can think of scoped objects being per web request. So common things you might see is a DBContext being created once per web request, or NHibernate contexts being created once so that you can have the entire request wrapped in a transaction. Another extremely common use for scoped lifetime objects is when you want to create a per request cache.

Scoped lifetime actually means that within a created “scope” objects will be the same instance. It just so happens that within .net core, it wraps a request within a “scope”, but you can actually create scopes manually. For example :
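A sketch of manually created scopes (IMyService/MyService are placeholders) :

```csharp
var services = new ServiceCollection();
services.AddScoped<IMyService, MyService>();
var provider = services.BuildServiceProvider();

IMyService instance1;
IMyService instance2;

using (var scope = provider.CreateScope())
{
    instance1 = scope.ServiceProvider.GetService<IMyService>();
}

using (var scope = provider.CreateScope())
{
    instance2 = scope.ServiceProvider.GetService<IMyService>();
}

Debug.Assert(instance1 != instance2); // Different scopes give different instances.
```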

In this example, the two scoped objects aren’t the same because we created each object within its own “scope”. Typically in a simple .net core CRUD API you aren’t going to be manually creating scopes like this. But it can come to the rescue in large batch jobs where, for example, you want to “ditch” the scope on each loop iteration.

Instance Lifetime

In early versions of .net core (and other DI frameworks), there was an “Instance” lifetime. This allowed you to create the instance of a class yourself instead of letting the DI framework build it. But what this actually meant was that it essentially became a “singleton” anyway, because it was only “created” once. Because of this, Microsoft removed the Instance lifetime and recommended you just use AddSingleton like the following.
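```csharp
var myService = new MyService();
services.AddSingleton<IMyService>(myService); // A pre-built instance, registered as a singleton.
```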


Logging in any application is always a contentious issue with many developers rolling their own framework and tacking on third party libraries such as Nlog or Log4net. While these approaches are fine, Microsoft have come in and made logging a first class citizen in ASP.net core. What that means is a standardized way of logging that is both simple and easy to get up and running, but also extensible when you have more complicated logging needs.

The Basics

This guide assumes you have a basic project up and running (Even just a hello world app), just to test how logging works. For getting up and running, we are going to use the debug logger. For that, you need to install the following Nuget package :
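```
Install-Package Microsoft.Extensions.Logging.Debug
```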

In your startup.cs file, in the Configure method, you need to add a call to AddDebug on your logger factory. This may already be done for you depending on which code template you are using, but in the end it should end up looking a bit like the following :
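A sketch of the era’s template (the rest of the Configure method is omitted) :

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddDebug();

    app.UseMvc();
}
```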

Let’s try this out first. Run your web project and view any URL. Back in Visual Studio you should view the “Output” window. If you can’t see it, go Debug -> Windows -> Output. After viewing any URL you should see something pretty similar to :

The content isn’t too important at this stage, but the important thing is that you are seeing logging in action! Of course it is a bit verbose and over the top, but we will be able to filter these messages out later.

That sort of automatic logging is great, but what we really want is to be able to log things ourselves. Things like logging an exception or fatal error that someone should look into immediately. Let’s go into our controller. First we need to add a dependency on ILogger in our constructor and then we need to actually log something. In the end it looks a bit like this :
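Something like the following (the log messages themselves are just examples) :

```csharp
public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        _logger.LogInformation("This is an information log!");
        _logger.LogError("This is an error log!");
        return View();
    }
}
```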

Great, now when we go into this action what do we see in the debug window?

You will notice that our log messages are shown with the level they are logged at, in this case Information and Error. At this point it doesn’t make too much of a difference, but it will in our next section.

See how easy that was? No more fiddling around with nuget packages and building abstractions, the framework has done it all for you.

Logging Levels

The thing is, if we rolled this into production (And used a logger that pushes to the database), we are going to get inundated with logging about every single request that’s coming through. It creates noise that means you can’t find the logs you need, and worse, it may even cause a performance issue with that many writes.

An interesting thing about even the Microsoft packaged loggers is that there isn’t necessarily a pattern for defining their configuration. Some prefer full configuration classes where you can edit each and every detail, others prefer a simple LogLevel enum passed in and that’s it.

Since we are using the DebugLogger, we need to change our loggerFactory.AddDebug call a bit.
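```csharp
loggerFactory.AddDebug(LogLevel.Error);
```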

What this says is: please only log errors to the debug window. In this case we have hardcoded the level, but in a real application we would likely read an appSetting that was easily changeable between development and production environments.

Using the same HomeController that we used in the first section of this article, let’s run our app again. What do we see in the debug window?

So we see it logging our error message but not our information message or the system messages about pages being loaded. Great!

Again, it’s important to note that each logger may have its own configuration object that it uses to describe logging levels. This can be a huge pain when swapping between loggers, but it’s all usually contained within your startup.cs.

Logging Filters

Most (if not all) loggerFactory extensions accept a lambda that allows you to filter out logs you don’t want, or to add extra restrictions on your logging.

I’ll change the AddDebug call to the following :
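```csharp
loggerFactory.AddDebug((category, logLevel) => category.Contains("HomeController"));
```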

In this case, “category” stands for our logging category which in real terms is the full name (Namespace + Class) of where the logger is being used. In this case we have locked it down to HomeController, but we could just as easily lock it to “LoggingExample”, the name of this test app. Or “Controllers”, the name of a folder (And in the namespace) in our example app.

Under this example and running our code again, we see that we log everything from the controller (Information & Error), but we log nothing from the system itself (Page requests etc).

What you will quickly find is that this Lambda will get crazy big and hard to manage. Luckily the LoggerFactory also has an extra method to deal with this situation.
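That likely refers to the WithFilter extension (assuming the Microsoft.Extensions.Logging.Filter package of the time) :

```csharp
loggerFactory
    .WithFilter(new FilterLoggerSettings
    {
        { "Microsoft", LogLevel.None },
        { "System", LogLevel.None },
        { "LoggingExample", LogLevel.Debug }
    })
    .AddDebug();
```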

Using this, you can specify what level of logging you want for each category. LogLevel.None specifies that you shouldn’t log anything.

Framework Loggers

ASP.net Core has a set of inbuilt loggers that you can make use of.

Console

Writes output to the console. It has serious performance issues in production, so it should only be used in development.

Debug

What we have used above! Writes output to the debugger. Again, a dev only tool.

EventSource

Only available on Windows, this writes events out to ETW (Event Tracing for Windows). You generally need to use a third party tool such as PerfView to be able to read the logs.

EventLog

Writes logging out to the Windows EventLog (Windows only). Pretty nifty for logging errors, especially when you have a devops team looking after the boxes; they like seeing issues in the Event Log!

Azure App Service

If you are on Azure, this writes logs out to Azure diagnostics. If you are in Azure it’s a must do, as other types of logging (such as to a database) may fail because of the very network issues that are causing the errors in the first place.

Third Party Loggers

https://github.com/NLog/NLog.Extensions.Logging – NLog is usually my go to for the simple fact that it in turn has many extensions into things like Logentries. Obviously there are other more simple options such as logging to a file or sending an email.

https://github.com/elmahio/Elmah.Io.Extensions.Logging – Elmah is mostly popular because of the dashboard that it comes with. If you are already using Elmah, then this is an easy slot in.

https://github.com/serilog/serilog-extensions-logging – Serilog is gaining in popularity. Again, another easy extension to get going that has a lot of versatility.

Anyone that has used the full .net MVC framework has spent many an hour trying to rejig the web.config and custom MVC filters to get custom error pages going. Often it would lead you on a wild goose chase around Stack Overflow finding answers that went something along the lines of “just do this one super easy thing and it will work”… It never worked.

.net Core has completely re-invented how custom errors work. Partly because with no web.config, any XML configuration is out the window, and partly because the new “middleware pipeline” mostly does away with the plethora of MVC filters you had to use in the past.

Developer Exception Page

The developer exception page is more or less the error page you used to see in full .net framework if you had custom errors off. That is, you could see the stack trace of the error and other important info to help you debug the issue.

By default, new ASP.net core templates come with this turned on when creating a new project. You can check this by looking at the Configure method of your startup.cs file. It should look pretty close to the following.
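A sketch of the era’s template :

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseMvc();
}
```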

Note that checking if the environment is development is so important! Just like the CustomErrors tag in the full framework, you only want to leak out your stacktrace and other sensitive info if you are debugging this locally or you specifically need to see what’s going on for testing purposes. Under no circumstances should you just turn on the “UseDeveloperExceptionPage” middleware without first making sure you are working locally (Or some other specific condition).

Another important thing to note is that, as always, the ordering of your middleware matters. Ensure that you are adding the Developer Exception Page middleware before you go into MVC. Otherwise MVC (or any other middleware in your pipeline) can short circuit the process without the request ever reaching the Developer Exception Page code.

If your code encounters an exception now, you should see something similar to the following :

As we can see, we get the full stack as well as being able to see any query we sent, cookies and other headers. Again, not things we want to start leaking out all over the place.

Exception Handler Page

ASP.net core comes with a catch all middleware that handles all exceptions and will redirect the user to a particular error page. This is pretty similar to the default redirect in the CustomErrors attribute in web.config or the HandleError attribute in full framework MVC. An important note is that this is an “exception” handler. Other status code errors (404 for example) do not get caught and redirected using this middleware.

The pattern is usually to show the developer error page when in the development environment, otherwise to redirect to the error page. So it might look a bit like this :
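Something like this (/home/error is a placeholder for wherever your handler lives) :

```csharp
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/home/error");
}
```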

You would then need to create an action to handle the error. One very important thing to note is that the ExceptionHandler will be called with the same HTTP Verb as the original request. e.g. If the exception happened on a Post request, then your handler at /home/error should be able to accept Posts.

What this means in practice is that you should not decorate your action with any particular HTTP verb, just allow it to accept them all.
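A minimal sketch of such an action :

```csharp
public IActionResult Error()
{
    // Deliberately no [HttpGet]/[HttpPost] attribute - this must accept whatever
    // verb the original failing request used.
    return View();
}
```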

Statuscode Pages

ASP.net core comes with an inbuilt middleware that allows you to capture other types of HTTP status codes (Other than say 500), and be able to show them certain content based on the status code.

There are actually two different ways to get this going. The first is :
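(The /home/statuscode route is a placeholder; the {0} token is replaced with the status code.)

```csharp
app.UseStatusCodePagesWithRedirects("/home/statuscode/{0}");
```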

Using “StatusCodePagesWithRedirects” you redirect the user to the status page. The issue with this is that the client is returned a 302 and not the original status code (For example a 404). Another issue is that if the exception is somewhere in your pipeline you are essentially restarting the pipeline again (And it could throw the same issue up).

The second option is :
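```csharp
app.UseStatusCodePagesWithReExecute("/home/statuscode/{0}");
```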

Using ReExecute, your original response code is returned but the content of the response is from your specified handler. This means that 404’s will be treated as such by spiders and browsers, but it still allows you to display some custom content (Such as a nice 404 page for a user).

Custom Middleware

Remember that you can always create custom middleware to handle any exception/status code in your pipeline. Here is an example of a very simple middleware to handle a 404 status code.
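A minimal inline sketch (the message text is just an example) :

```csharp
app.Use(async (context, next) =>
{
    await next();

    if (context.Response.StatusCode == 404)
    {
        await context.Response.WriteAsync("Sorry, we couldn't find that page!");
    }
});
```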

On our way back out (Remember, the code after the “next” is code to be run on the way out of the pipeline), we check if the status code was 404, if it is then we return a nice little message letting people know we 404’d.

If you want more info on how to create a custom middleware to handle exceptions (Including how to write a nice class to wrap it), check out our tutorial on writing custom middleware in asp.net core.

NancyFX (Or Nancy, FX stands for framework), is a super lightweight web framework that can be used to spin up minimalist API’s in no time at all. It’s mostly seen as an alternative to Web API when you don’t need all the plumbing and boilerplate work of a full Web API project. It also has a lot of popularity with NodeJS developers that are used to very simple DSL’s to build API’s.

Getting Started

When creating a new project, you want to create a .net core web project but make sure it’s created as an “empty” application. If you select Web API (Or MVC) it brings along with it a ton of references and boilerplate code that you won’t need when using Nancy.

You will need the Nancy Nuget package, so run the following from your package manager console. At this point in time the .net core compatible Nancy is in pre-release, so you will need the pre flag when installing from Nuget.
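```
Install-Package Nancy -Pre
```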

In your startup.cs you need to add a call to UseNancy in the Configure method. Note that any other handler in here (like AddMvc) may screw up your pipeline, especially when it comes to routing. If you are intending to use Nancy, it’s probably better to go all in and not try to mix and match frameworks.
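A sketch of the Owin-based hookup Nancy used at the time :

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseOwin(x => x.UseNancy());
}
```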

If you have issues with the line “UseOwin” then you will need to install the Owin package.
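```
Install-Package Microsoft.AspNetCore.Owin
```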

Instead of creating a class that inherits from “Controller” as you normally would in Web API, create a class that inherits from “NancyModule”. All your work is then actually done in the constructor, which may seem a bit weird at first, but when you are building a tiny API it actually becomes nice to keep things to a minimum. Our NancyModule that we will use for the tutorial looks a bit like this :
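A sketch, using the Nancy 2.x route syntax (module and route names are placeholders matching the list below) :

```csharp
public class HomeModule : NancyModule
{
    public HomeModule()
    {
        Get("/", args => "Hello World");

        Get("/SayHello", args => $"Hello {Request.Query["name"]}");

        Get("/SayHello2/{name}", args => $"Hello {args.name}");
    }
}
```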

It should be pretty self-explanatory. But let’s go through each one.

  1. When I go to “/” I will see the text “Hello World”.
  2. When I go to “/SayHello?name=wade” I will see “Hello wade”
  3. When I go to “/SayHello2/wade” I will see “Hello wade”

As you can probably see, Nancy is extremely good for simple API’s such as lookup type queries.

Model Binding

While you can stick with using dynamic objects, model binding is always a nice addition. Below is an example of how to model bind on a POST request.
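A sketch (Bind comes from the Nancy.ModelBinding namespace; LoginModel is a placeholder) :

```csharp
using Nancy;
using Nancy.ModelBinding;

public class LoginModule : NancyModule
{
    public LoginModule()
    {
        Post("/login", args =>
        {
            var model = this.Bind<LoginModel>();
            return $"Hello {model.Username}";
        });
    }
}

public class LoginModel
{
    public string Username { get; set; }
    public string Password { get; set; }
}
```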

Using Postman I can send a request to /login as a form encoded request, and I can see I get the correct response back :

Nancy also recognizes different body types. If you send the request as JSON (with the appropriate header of “Content-Type: application/json”), Nancy will still be able to bind the model.

Before and After Hooks

Before and after hooks are sort of like MVC filters whereby you can intercept the request before and after your Nancy Module has run. Note that this is still part of the .net core middleware pipeline so you can still add custom .net core middleware to your application along side Nancy hooks.

An important thing with hooks is that they are per module. They live within the module and are only run on requests routed to that module; any requests that land on other modules don’t run the before hook. The hook itself tends to point to a method elsewhere, so in the end it works out a bit like a controller attribute.

There are two ways a hook can handle a request. Here’s one way :
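A sketch of a pass-through hook :

```csharp
public class HomeModule : NancyModule
{
    public HomeModule()
    {
        Before += ctx =>
        {
            // Returning null means "carry on" down to the matched route.
            return null;
        };

        Get("/", args => "Hello World");
    }
}
```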

If a hook returns null as above, code execution is passed down to the matching route. For example, if you are checking the existence of a particular header (API key or the like), and it’s there, then returning null simply allows execution to continue. Compare this to the following :
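A sketch of a short-circuiting hook (the “ApiKey” header name is a placeholder; Contains needs System.Linq) :

```csharp
Before += ctx =>
{
    if (!ctx.Request.Headers.Keys.Contains("ApiKey"))
    {
        // Nancy lets a string be cast straight to a Response - execution ends here.
        return (Response)"Request Denied";
    }

    return null;
};
```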

In this example, if our header collection doesn’t contain our API key (Weak security I know!), we return a response of “Request Denied” and execution ends right there. If the header is there, we continue execution.

View Engine

While Nancy is predominantly used for lightweight API’s, there is an inbuilt view engine, and you can even get an addon that allows for Razor-like syntax (in fact it’s pretty much identical to Razor). The actual package can be found here. Unfortunately, at this time the view engine has a dependency on the full .net framework, so it cannot be used in a web application targeting .net core.

Static Content

Nancy by default won’t serve static content. It does have its own way of dealing with static content and various conventions, but I actually prefer to stick with the standard ASP.net core middleware for static files. This way it short-circuits Nancy completely and avoids going down the Nancy pipeline. For that you can install the following nuget package :
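```
Install-Package Microsoft.AspNetCore.StaticFiles
```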

In the Startup.cs class in the Configure method, add a call to UseStaticFiles like so :
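```csharp
public void Configure(IApplicationBuilder app)
{
    // Static files are handled first, short-circuiting the Nancy pipeline entirely.
    app.UseStaticFiles();

    app.UseOwin(x => x.UseNancy());
}
```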

Place all your static content in the wwwroot (CSS, Images, Javascript etc), and you will now be able to have these files served outside the Nancy pipeline.

Everything Else

Nancy has its own documentation here that you can take a read through. Some of it is pretty verbose, and you also have to remember that not all of it is applicable to .net core. Like the static content section above, there may be an existing solution in .net core that works better out of the box or is easier to configure. As always, drop a comment below to show off your new project you built in Nancy!

Uploading files in ASP.net core is largely the same as standard full framework MVC, with the main exception being how you can now stream large files. We will go over both methods of uploading a file in ASP.net core.

Model Binding IFormFile (Small Files)

When uploading a file via this method, the important thing to note is that your files are uploaded in their entirety before execution hits your controller action. What this means is that the disk on your server holds a temporary file while you decide where to push it. With small files this is fine; with larger files you run into issues of scale. If you have many users all uploading large files, you are liable to run out of ram (where the file is buffered before moving it to disk), or disk space itself.

For your HTML, it should look something like this :
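A sketch (the action URL is a placeholder; the input name must match the controller parameter) :

```html
<form method="post" enctype="multipart/form-data" action="/upload">
    <input type="file" name="files" multiple />
    <input type="submit" value="Upload" />
</form>
```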

The biggest thing to note is that the encoding type is set to “multipart/form-data”. If this is not set, then you will go crazy trying to hunt down why your file isn’t showing up in your controller.

Your controller action is actually very simple. It will look something like :
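Something like the following (writing to the temp folder here just as an example destination) :

```csharp
[HttpPost]
public async Task<IActionResult> Upload(List<IFormFile> files)
{
    foreach (var file in files)
    {
        var path = Path.Combine(Path.GetTempPath(), file.FileName);
        using (var stream = new FileStream(path, FileMode.Create))
        {
            await file.CopyToAsync(stream);
        }
    }

    return Ok();
}
```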

Note that the name of the parameter “files” should match the name on the input in HTML.

Other than that, you are done. There is nothing more you need to do.

Streaming Files (Large Files)

Instead of buffering the file in its entirety, you can stream the file upload. This does introduce challenges as you can no longer use the built in model binding of ASP.NET core. Various tutorials out there show you how to get things working with massive pieces of code, but I’ll give you a helper class that should alleviate most of the work. Most of this work is taken from Microsoft’s tutorial on file uploads here. Unfortunately it’s a bit all over the place with helper classes that you need to dig around the web for.

First, take this helper class and stick it in your project. This code is taken from a Microsoft project here.
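A condensed take on Microsoft’s MultipartRequestHelper sample (simplified; exact member types vary slightly between framework versions) :

```csharp
using System;
using System.IO;
using Microsoft.Net.Http.Headers;

public static class MultipartRequestHelper
{
    public static string GetBoundary(MediaTypeHeaderValue contentType, int lengthLimit)
    {
        var boundary = contentType.Boundary.ToString().Trim('"');
        if (string.IsNullOrWhiteSpace(boundary))
            throw new InvalidDataException("Missing content-type boundary.");
        if (boundary.Length > lengthLimit)
            throw new InvalidDataException($"Multipart boundary length limit {lengthLimit} exceeded.");
        return boundary;
    }

    public static bool IsMultipartContentType(string contentType)
    {
        return !string.IsNullOrEmpty(contentType)
            && contentType.IndexOf("multipart/", StringComparison.OrdinalIgnoreCase) >= 0;
    }

    // Content-Disposition: form-data; name="key" (a plain form value)
    public static bool HasFormDataContentDisposition(ContentDispositionHeaderValue contentDisposition)
    {
        return contentDisposition != null
            && contentDisposition.DispositionType.Equals("form-data")
            && string.IsNullOrEmpty(contentDisposition.FileName.ToString())
            && string.IsNullOrEmpty(contentDisposition.FileNameStar.ToString());
    }

    // Content-Disposition: form-data; name="myfile"; filename="photo.jpg" (a file)
    public static bool HasFileContentDisposition(ContentDispositionHeaderValue contentDisposition)
    {
        return contentDisposition != null
            && contentDisposition.DispositionType.Equals("form-data")
            && (!string.IsNullOrEmpty(contentDisposition.FileName.ToString())
                || !string.IsNullOrEmpty(contentDisposition.FileNameStar.ToString()));
    }
}
```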

Next, you need this extension class, which again is taken from Microsoft code but moved into a helper so that it can be reused and your controllers can be slightly cleaner. It takes an input stream which is where your file will be written to. This is kept as a basic stream because the stream can really come from anywhere. It could be a file on the local server, or a stream to AWS/Azure etc.
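A condensed sketch of that helper (adapted from Microsoft’s sample) :

```csharp
using System.Globalization;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Features;
using Microsoft.AspNetCore.Mvc.ModelBinding;
using Microsoft.AspNetCore.WebUtilities;
using Microsoft.Net.Http.Headers;

public static class FileStreamingHelper
{
    private static readonly FormOptions _defaultFormOptions = new FormOptions();

    // Streams the file portion of a multipart request into targetStream, and
    // accumulates any ordinary form fields so they can be model bound afterwards.
    public static async Task<FormValueProvider> StreamFile(this HttpRequest request, Stream targetStream)
    {
        if (!MultipartRequestHelper.IsMultipartContentType(request.ContentType))
            throw new InvalidDataException($"Expected a multipart request, but got {request.ContentType}");

        var formAccumulator = new KeyValueAccumulator();
        var boundary = MultipartRequestHelper.GetBoundary(
            MediaTypeHeaderValue.Parse(request.ContentType),
            _defaultFormOptions.MultipartBoundaryLengthLimit);
        var reader = new MultipartReader(boundary, request.Body);

        var section = await reader.ReadNextSectionAsync();
        while (section != null)
        {
            if (ContentDispositionHeaderValue.TryParse(section.ContentDisposition, out var contentDisposition))
            {
                if (MultipartRequestHelper.HasFileContentDisposition(contentDisposition))
                {
                    // The file itself - stream it straight into the target.
                    await section.Body.CopyToAsync(targetStream);
                }
                else if (MultipartRequestHelper.HasFormDataContentDisposition(contentDisposition))
                {
                    // An ordinary form field - read it out and stash it for binding later.
                    var key = contentDisposition.Name.ToString().Trim('"');
                    using (var streamReader = new StreamReader(section.Body))
                    {
                        formAccumulator.Append(key, await streamReader.ReadToEndAsync());
                    }
                }
            }

            section = await reader.ReadNextSectionAsync();
        }

        return new FormValueProvider(
            BindingSource.Form,
            new FormCollection(formAccumulator.GetResults()),
            CultureInfo.CurrentCulture);
    }
}
```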

Now, you actually need to create a custom action attribute that completely disables form binding. This is important, otherwise MVC will still try to read the contents of the request regardless. The attribute looks like :
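```csharp
using System;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.AspNetCore.Mvc.ModelBinding;

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class DisableFormValueModelBindingAttribute : Attribute, IResourceFilter
{
    public void OnResourceExecuting(ResourceExecutingContext context)
    {
        // Strip out the value provider factories that would otherwise read the form body.
        context.ValueProviderFactories.RemoveType<FormValueProviderFactory>();
        context.ValueProviderFactories.RemoveType<JQueryFormValueProviderFactory>();
    }

    public void OnResourceExecuted(ResourceExecutedContext context) { }
}
```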

Next, your controller should look similar to the following. We create a stream and pass it into the StreamFile extension method. The output is our FormValueProvider which we use to bind out model manually after the file streaming. Remember to put your custom attribute on the action to force the request to not bind.
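A sketch (the temp file target and MyViewModel are placeholders) :

```csharp
[HttpPost]
[DisableFormValueModelBinding]
public async Task<IActionResult> Upload()
{
    FormValueProvider formModel;

    // Stream the upload straight into a temp file - this could be any stream (AWS/Azure etc).
    using (var stream = System.IO.File.Create(Path.GetTempFileName()))
    {
        formModel = await Request.StreamFile(stream);
    }

    // Now that the file has been streamed, bind the rest of the form manually.
    var viewModel = new MyViewModel();
    var bindingSuccessful = await TryUpdateModelAsync(viewModel, prefix: "", valueProvider: formModel);
    if (!bindingSuccessful)
    {
        return BadRequest(ModelState);
    }

    return Ok(viewModel);
}
```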

In my particular example I have created a stream that writes to a temp file, but obviously you can do anything you want with it. The model I am using is the following :
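(Username here is just a placeholder field.)

```csharp
public class MyViewModel
{
    public string Username { get; set; }
}
```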

Again, nothing special. It’s just showing you how to bind a view model even when you are streaming files.

My HTML form is pretty standard :
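```html
<form method="post" enctype="multipart/form-data" action="/upload">
    <input type="file" name="file" />
    <input type="text" name="username" />
    <input type="submit" value="Upload" />
</form>
```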

And that’s all there is to it. Now when I upload a file, it is streamed straight to my target stream, which could be an upload stream to AWS/Azure or any other cloud provider, and I still managed to get my view model out too. The biggest downside is of course that the viewmodel details are not available until the file has been streamed. What this means is that if there is something in the viewmodel that would normally determine where the file goes, or the file name etc, this is not available to you without a bit of rejigging (but it’s definitely doable).


Middleware is the new “pipeline” for requests in asp.net core. Each piece of middleware can process part or all of the request, and then either choose to return the result or pass on down to the next piece of middleware. In the full ASP.net Framework you were able to specify “HTTP Modules” that acted somewhat like a pipeline, but it was hard to really see how the pieces fit together at times.

Anywhere you would normally write an HTTP Module in the full ASP.net Framework is where you should probably now be using middleware. In fact, for most places you would normally use MVC filters, you will likely find it easier or more convenient to use middleware.

This diagram from Microsoft does a better job than I could of showing how the pipeline works. Do notice however that you can do work both before and after the pass down to the next middleware. This is important if you wish to affect results as they are going out rather than as they are coming in.

Basic Configuration

The easiest way to get started with middleware is in the Configure method of your startup.cs file. In here is where you “chain” your pipeline together. It may end up looking something like the following :
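A minimal sketch of such a chain :

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles();

    app.UseMvc();
}
```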

Now an important thing to remember is that ordering is important. In this pipeline the static files middleware runs first, this middleware can choose to pass on the request (e.g. not process the response in its entirety) or it can choose to push a response to the client and not call the next middleware in the chain. That last part is important because it will often trip you up if your ordering is not correct (e.g. if you want to authenticate/authorize someone before hitting your MVC action).


App.Use

App.Use is going to be the most common pipeline building block you will come across. It allows you to add something to the response and then pass the request on to the next middleware in the pipeline, or to short circuit and return a result without passing to the next handler.

To illustrate, consider the following :
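```csharp
app.Use(async (context, next) =>
{
    // Work on the way in, then hand off to the next middleware.
    context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
    await next();
});

app.Use(async (context, next) =>
{
    await context.Response.WriteAsync("Hello World!");
    // No call to next() - the pipeline ends here.
});
```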

In this example, the first handler adds a response header of X-Content-Type-Options and then passes the request to the next handler. The next handler adds a response text of “Hello World!”, and then the pipeline ends. Because that second handler never calls Next(), we don’t pass to another handler at all and execution finishes. In this way we can short circuit the pipeline if something hasn’t met criteria. For example, if some authorization key is not present.


App.Run

You may see App.Run appear around the place; it’s like App.Use’s little brother. App.Run is an “end of the line” middleware. It can handle generating a response, but it doesn’t have the ability to pass the request down the chain. For this reason you will see App.Run middleware at the end of the pipeline and nowhere else.
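```csharp
app.Run(async context =>
{
    await context.Response.WriteAsync("Hello World!");
});
```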


App.Map

App.Map is used when you want to build a mini pipeline only for a certain URL. Note that the URL given is used as a “starts with”. So commonly you might see this in middleware for locking down an admin area. The usage is pretty simple :
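```csharp
app.Map("/helloworld", mapped =>
{
    mapped.Run(async context =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
});
```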

So in this example when someone goes to /helloworld, they will be given the Hello World! message back.


App.MapWhen

This is similar to Map, but allows for much more complex conditionals. This is great if you are checking for a cookie or a particular query string, or need more powerful Regex matching on URLs. In the example below, we are checking for a query string of “helloworld”; if it’s found we return the response of Hello World!
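```csharp
app.MapWhen(context => context.Request.Query.ContainsKey("helloworld"), mapped =>
{
    mapped.Run(async context =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
});
```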

Building A Middleware Class

And after all that, we are going to toss it all in the bin and create a middleware class! Middleware classes use quite a different syntax to all of the above, and really just group it all together in a simple pattern.

Let’s say I want to build a class that will add common security headers to my site. If I right click inside my .net core web project and select “Add Item”, there is actually an option to add a middleware class template.

If you can’t find this, don’t fear, because there is nothing special about the template. It just provides the basic structure of how your class should look, but you can copy and paste it from here into a plain old .cs file and it will work fine. Our security middleware looks something like the following :
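A sketch (the particular headers added are just examples) :

```csharp
public class SecurityHeadersMiddleware
{
    private readonly RequestDelegate _next;

    public SecurityHeadersMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        // Do our work on the way in, then hand off to the next middleware.
        httpContext.Response.Headers.Add("X-Content-Type-Options", "nosniff");
        httpContext.Response.Headers.Add("X-Frame-Options", "DENY");

        await _next(httpContext);
    }
}
```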

Very simple. In our Invoke method we do all the work we want to do and then pass the request on to the next handler. Again, we can short circuit this process by returning early instead of calling next.

Another thing to note is that this is instantiated using the service container. That means if you have a need to access a database or call in settings from an external source, you can simply add the dependency in the constructor.

Important : App middleware is constructed at startup. If you require scoped dependencies or anything other than singletons, do not inject your dependencies into the constructor. Instead, add them as parameters of the Invoke method and .net core will work it out.

The Extensions part is optional, but it does allow you to write code like this :
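A sketch of the extension, plus the one liner it gives you :

```csharp
public static class SecurityHeadersMiddlewareExtensions
{
    public static IApplicationBuilder UseSecurityHeaders(this IApplicationBuilder builder)
    {
        return builder.UseMiddleware<SecurityHeadersMiddleware>();
    }
}

// Which turns your pipeline registration into :
app.UseSecurityHeaders();
```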


In a previous post, we talked about how to use a Redis Cache in .net Core. In most large scale scenarios, Redis is going to be your goto. But for tiny sites that have a single web instance, or for sites that really only need a local cache, InMemory caching is much easier to get setup with and obviously does away with wrangling a Redis server.

Interestingly, .net Core currently offers two ways to implement a local in memory cache. We’ll take a look at both.


IMemoryCache

The first option is to use what is simply known in .net core as IMemoryCache. It’s similar to what you may have used in standard ASP.net in terms of storing an object in memory by a key.

First open up your startup.cs. In your ConfigureServices method you need to add a call to “AddMemoryCache” like so :
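```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache();
    services.AddMvc();
}
```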

In your controller or class you wish to use the memory cache in, add a dependency on IMemoryCache into the constructor. The two main methods you will likely be interested in are “TryGetValue” and “Set”. Both should be rather self-explanatory in the following code :
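A sketch (the cache key, value and expiry are just examples) :

```csharp
public class HomeController : Controller
{
    private readonly IMemoryCache _memoryCache;

    public HomeController(IMemoryCache memoryCache)
    {
        _memoryCache = memoryCache;
    }

    public IActionResult Index()
    {
        // Try and fetch the value from the cache. If it's missing, build it and cache it.
        if (!_memoryCache.TryGetValue("MyCacheKey", out string cachedValue))
        {
            cachedValue = DateTime.Now.ToString();
            _memoryCache.Set("MyCacheKey", cachedValue, TimeSpan.FromMinutes(5));
        }

        return Content(cachedValue);
    }
}
```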

That’s the basics. Now for a couple of nice things that this implementation of a memory cache has that won’t be available in the next implementation I will show you.

An interesting feature is the PostEvictionCallback delegate. This allows you to register an action to be called every time something “expires”. To use it, it will look something like the following :
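```csharp
var options = new MemoryCacheEntryOptions()
    .SetAbsoluteExpiration(TimeSpan.FromMinutes(5));

options.RegisterPostEvictionCallback((key, value, reason, state) =>
{
    // Called whenever this entry leaves the cache.
    Debug.WriteLine($"Cache entry {key} was evicted. Reason : {reason}");
});

_memoryCache.Set("MyCacheKey", "MyValue", options);
```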

Now every time a cache entry expires, you will be notified about it. Usually there are very limited reasons why you would want to do this, but the option is there should you want it!

CancellationTokens are also supported. Again, this is a feature that probably won’t be used too often, but it can be used to invalidate a whole set of cache entries in one go. CancellationTokens are notoriously difficult to debug and get going, but if you have the need, it’s there!

The code would look something similar to this :
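```csharp
var cts = new CancellationTokenSource();

var options = new MemoryCacheEntryOptions();
options.AddExpirationToken(new CancellationChangeToken(cts.Token));

_memoryCache.Set("MyCacheKey", "MyValue", options);

// Later on, cancelling the token invalidates every entry that was tied to it.
cts.Cancel();
```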

Distributed Memory Cache

A “distributed” memory cache is probably a bit of an oxymoron. It’s obviously not distributed if it’s sitting local to a machine. But the big advantage to going down this road is that should you intend to switch to using Redis in the future, the interfaces between the RedisDistributedCache and the in memory one are exactly the same. It’s just a single line of difference in your startup. This may also be helpful if locally you just want to use your machine’s cache and not have Redis set up.

To get going, in your startup.cs ConfigureServices method, add a call to AddDistributedMemoryCache like so :
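```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddDistributedMemoryCache();
    services.AddMvc();
}
```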

Your controller or class where you inject it should look a bit like the following. Again, this code is actually taken directly from our Redis Cache tutorial; the implementation is exactly the same for the in memory version, it’s only the call in ConfigureServices in your startup.cs that changes.
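A sketch of that usage (key and value are just examples) :

```csharp
public class HomeController : Controller
{
    private readonly IDistributedCache _distributedCache;

    public HomeController(IDistributedCache distributedCache)
    {
        _distributedCache = distributedCache;
    }

    public async Task<IActionResult> Index()
    {
        var cachedValue = await _distributedCache.GetStringAsync("MyCacheKey");
        if (cachedValue == null)
        {
            cachedValue = DateTime.Now.ToString();
            await _distributedCache.SetStringAsync("MyCacheKey", cachedValue);
        }

        return Content(cachedValue);
    }
}
```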

Now, looking at what we said about IMemoryCache above, PostEvictionCallback and CancellationTokens cannot be used here. This makes sense because this interface for the most part is supposed to be used with distributed environments, any machine in the environment (Or the cache itself) could expire/remove a cache entry.

Another very important difference is that while IMemoryCache accepts C# “objects” into the cache, a distributed cache does not. A distributed cache can only accept byte arrays or strings. For the most part this isn’t going to be a big deal. If you are trying to store objects, just run them through a JSON serializer first and store them as a string; when you pull them out, deserialize them back into your object.

Which Should You Pick?

You are likely going to write an abstraction layer on top of either caching interface, meaning that your controllers/services aren’t going to see much of it. For that abstraction, I tend to go with the distributed cache, for no other reason than that should I ever want to move to using Redis, the option is already there.

There are two types of people when it comes to database migrations. Those that want a completely automated process in their CI pipeline and have very strong opinions on what tool to use, and those who couldn’t care less and just want to run things by hand (seriously, those still exist).

This article will focus on the former, and on what tools best fit into the .net core ecosystem. Most of it isn’t specific to .net core, but there are a few extra considerations to take into account.

First let’s take a look at what I think are the 4 main principles of a database migration suite (Or any step in a CI pipeline).


Repeatable

You should be able to stand up a database from scratch and have it function exactly as the existing production database does. This may include pre-seeding data (such as admin users) right from the get go. Repeatable also means that if for some reason the migration runs again, it should be “aware” of what it’s already done and not blow up trying to run the same scripts over and over.


Automated

As much as possible, a good database migration strategy should be automated. When trying to reduce the element of human error, you should just remove the humans altogether.


Scalable

There are actually two parts to this. Scalable means that as the database gets bigger, your migrations don’t start falling over. It also means that the tooling you might be using to generate diffs or actually run the migrations doesn’t start dying either. But scalable has a second meaning as well: scalable with your team. As your team grows on a single project (say, close to a dozen developers), does managing migrations and potential code conflicts stay under control?


Flexible

For database migrations, flexible is all about whether the tooling is good enough to handle all sorts of database changes, not just schema changes. If you ever split a field, you will have to migrate that data somehow. Other times you may want to mass update data to fix a previous bug, and you want this automated along with your database rollout, not run manually and forgotten about.

Database Project/SQL Project in Visual Studio

I remember the first few times I used a database project to update a database schema; it felt like magic. Now it feels like it has its place for showing a database state, but its migration ability is severely lacking. In terms of how a dbproj actually processes an update, it compares the existing database schema to the desired state, and then generates scripts to get it there and runs them. But let’s think about that for a second: there may be many “steps” we want to run to get to the desired schema, not one huge leap forward. It’s less of a step by step migration and more of a blunt tool to get you to the end result the fastest.

Because of the way a dbproj processes migrations, it also makes updating actual data very hard to do. You may have a multi step process in which you want to split a column slowly by creating a temporary field, filling the data into there, and then doing a slow migration through code. That doesn’t really work, because a dbproj is more of a “desired state” migration.

That being said, database projects are very good at seeing how a database has evolved over time. Because your database objects become “code” in your source control, you can see how columns/views/stored procedures have been added/removed over time, with the associated check in comments too.

It should also be noted that DB projects are not multi platform (both in OS and database). If you are building your .net core project on Linux, then database projects are out. And if you are running anything but Microsoft SQL Server, database projects are also out.

Repeatable : No (Migration from A -> B -> C will always work the same, but going straight from A -> C may give you weird side effects in data management).
Automated : Yes
Scalable : Yes
Flexible : No – Data migrations are very hard to pull off in a repeatable way

Entity Framework Migrations

EF Migrations are usually the go to when you are using Entity Framework as your data layer. They are a true migration tool that can be started from any “state” and run in order to bring you to the desired state. Unlike a dbproj, they always run in order so you are never “skipping” data migrations by going from state A -> C. It will still run A -> B -> C in order.

EF Migrations are created using their own fluent API in C# code. For some this feels natural; others feel limited in what they can achieve when trying to control a complex database with a subset of SQL commands that have been converted to the DSL. If you think about complex covering indexes, where you have multiple columns along with include columns etc, there is probably some very complex way to do it via C# code, but for most people it becomes a hassle. Add to that the fact that ORM’s in general “hide” what queries they are actually running, so you are also hiding how your database is actually tuned and created, and it does make some people squirm.

But the biggest issue with EF Migrations is the actual migration running itself. Entity Framework is “aware” of the state of the database and really throws its toys out of the cot if things aren’t up to date. It knows which migrations “should” have run, and if there are pending migrations it simply refuses to play along. When you are running blue/green deployments, or you have a rolling set of web servers that need to be compatible with both your old and new database schema, EF Migrations simply do not work.

Repeatable : Yes
Automated : Yes
Scalable : No
Flexible : Sort Of – You can migrate data and even write custom SQL in a EF Migration, but for the most part you are limited to the EF fluent API


DB Up

Full disclosure, I love DB Up. And I love it even more now that it has announced .net core support is coming (or is already here, depending on when you read this).

DB Up is a migration framework that uses a collection of SQL files, and runs them in order from start to finish. From any existing state to any desired state. Because they are just plain old SQL files, every SQL command is available to you, making DB Up easily the most powerful migration tool in your arsenal for dealing with extremely complex databases. Data migrations also become less of a pain because you simply have every single SQL tool available.

In a CI scenario, DB Up is able to build a database from scratch to any point in time, meaning that testing your “new” C# code on your “old” database is now a breeze. I can’t tell you how many times this has saved my ass in a large team environment. People are forever removing columns in a SQL database before removing them from code, causing the old code to error out and crash. In large web deployment scenarios where there are cross over periods with a new database schema being in play, but old web code running on top of it, it’s a god send.

The other great thing about DB Up is that it’s database agnostic (to a degree). It supports Postgres, MySQL, Firebird, SQL Azure and of course SQL Server. This is great news if you are running your .net core code on top of something like Postgres.

But there is a big flaw when using something like DB Up: it becomes hard to see how a particular table has changed over time. With DB Up all you have is a collection of scripts that, when run in order, give you the desired result; in source control it becomes difficult to see how something has evolved. In terms of migrations this isn’t a big deal, but in terms of being able to see an overall picture of your database, not so great.

Repeatable : Yes
Automated : Yes
Scalable : Yes
Flexible : Yes – The scripts are plain SQL, so anything SQL can do, your migrations can do

Hybrid Approach

I tend to take a hybrid approach for most projects. DB Up is far and away the best database migration tool, but it is limited in showing the overall state of the database. That’s where a database project comes in. It’s great for seeing the overall picture, and you can check back through source control to see how things have changed over time. But its migration facilities are severely limited. With that, it becomes a great fit to use both.

And You?

How do you do your migrations? Do you use one of the above? Or maybe something like Fluent Migrator? Drop a comment below!