NancyFX (or just Nancy; the FX stands for framework) is a super lightweight web framework that can be used to spin up minimalist APIs in no time at all. It’s mostly seen as an alternative to Web API when you don’t need all the plumbing and boilerplate work of a full Web API project. It’s also popular with NodeJS developers who are used to very simple DSLs for building APIs.

Getting Started

When creating a new project, you want to create a .net core web project but make sure it’s created as an “empty” application. If you select Web API (Or MVC) it brings along with it a ton of references and boilerplate code that you won’t need when using Nancy.

You will need the Nancy NuGet package, so run the following from your Package Manager Console. At the time of writing, the .net core compatible version of Nancy is still in pre-release, so you will need the pre flag when installing from NuGet.
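
Something along these lines should do it (drop the pre flag if Nancy has had a full release by the time you read this):

```
Install-Package Nancy -Pre
```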

In your startup.cs you need to add a call to UseNancy in the Configure method. Note that any other handler in here (like UseMvc) may screw up your pipeline, especially when it comes to routing. If you are intending to use Nancy it’s probably better to go all in and not try to mix and match frameworks.
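
Nancy hangs off the OWIN bridge, so the Configure method ends up looking roughly like this (this assumes the UseNancy extension lives in the Nancy.Owin namespace of the pre-release package):

```csharp
using Microsoft.AspNetCore.Builder;
using Nancy.Owin;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Hand the request pipeline over to Nancy via the OWIN bridge.
        app.UseOwin(x => x.UseNancy());
    }
}
```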

If you have issues with the line “UseOwin” then you will need to install the Owin package.
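
That package is Microsoft.AspNetCore.Owin :

```
Install-Package Microsoft.AspNetCore.Owin
```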

Instead of creating a class that inherits from “Controller” as you normally would in Web API, create a class that inherits from “NancyModule”. All your work is then actually done in the constructor, which may seem a bit weird at first but when you are building a tiny API it actually becomes nice to keep things to a minimum. Our NancyModule that we will use for the tutorial looks a bit like this
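
Something along these lines covers the three routes below (the module name is just for illustration, and this assumes the Nancy 2.x method-style route syntax of the pre-release):

```csharp
using Nancy;

public class HomeModule : NancyModule
{
    public HomeModule()
    {
        // 1. A plain route.
        Get("/", args => "Hello World");

        // 2. Read a value from the query string.
        Get("/SayHello", args =>
        {
            var name = this.Request.Query["name"];
            return $"Hello {name}";
        });

        // 3. Read a value from the route itself.
        Get("/SayHello2/{name}", args => $"Hello {args.name}");
    }
}
```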

It should be pretty self-explanatory. But let’s go through each one.

  1. When I go to “/” I will see the text “Hello World”.
  2. When I go to “/SayHello?name=wade” I will see “Hello wade”
  3. When I go to “/SayHello2/wade” I will see “Hello wade”

As you can probably see, Nancy is extremely good for simple APIs such as lookup type queries.

Model Binding

While you can stick with using dynamic objects, model binding is always a nice addition. Below is an example of how to model bind on a POST request.
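
Something like the following should do the trick (the model and route are purely illustrative):

```csharp
using Nancy;
using Nancy.ModelBinding;

public class LoginModel
{
    public string Username { get; set; }
    public string Password { get; set; }
}

public class LoginModule : NancyModule
{
    public LoginModule()
    {
        Post("/login", args =>
        {
            // Bind the incoming body (form encoded or JSON) to our model.
            var model = this.Bind<LoginModel>();
            return $"Hello {model.Username}";
        });
    }
}
```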

Using Postman I can send a request to /login as a form encoded request, and I can see I get the correct response back :

Nancy also recognizes different body types. If you send the request as JSON (with the appropriate “Content-Type: application/json” header), Nancy will still be able to bind the model.

Before and After Hooks

Before and after hooks are sort of like MVC filters whereby you can intercept the request before and after your Nancy Module has run. Note that this is still part of the .net core middleware pipeline so you can still add custom .net core middleware to your application alongside Nancy hooks.

An important thing with hooks is that they are per module. They live within the module and are only run on requests routed to that module; requests that land on other modules don’t run the before hook. The hook itself tends to point to a method elsewhere, so in the end it works out a bit like a controller attribute.

There are two ways a hook can handle a request. Here’s one way :
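
A hook that lets everything through might look something like this (it goes inside your module’s constructor):

```csharp
Before += ctx =>
{
    // Do whatever checking you need here. Returning null means
    // "carry on" and the matched route runs as normal.
    return null;
};
```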

If a hook returns null as above, code execution is passed down to the matching route. For example, if you are checking the existence of a particular header (API key or the like), and it’s there, then returning null simply allows execution to continue. Compare this to the following :
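
And the short-circuiting version, again inside the module constructor (you’ll need System.Linq for Any(), and “x-api-key” is just an illustrative header name):

```csharp
Before += ctx =>
{
    if (!ctx.Request.Headers["x-api-key"].Any())
    {
        // Returning an actual response here ends the pipeline immediately.
        return (Response)"Request Denied";
    }

    return null;
};
```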

In this example, if our header collection doesn’t contain our API key (Weak security I know!), we return a response of “Request Denied” and execution ends right there. If the header is there, we continue execution.

View Engine

While Nancy is predominantly used for lightweight APIs, there is an inbuilt view engine and you can even get an addon that allows for Razor-like syntax – in fact it’s pretty much identical to Razor. The actual package can be found here. Unfortunately, at this time the view engine has a dependency on the full .net framework so cannot be used in a web application targeting .net core.

Static Content

Nancy by default won’t serve static content. It does have its own way of dealing with static content and various conventions, but I actually prefer to stick with the standard ASP.net core middleware for static files. This way it short-circuits Nancy completely and avoids going down the Nancy pipeline. For that you can install the following nuget package :
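
That’s the standard static files package :

```
Install-Package Microsoft.AspNetCore.StaticFiles
```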

In the Startup.cs class in the Configure method, add a call to UseStaticFiles like so :
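
Make sure it sits before Nancy so static files never reach the Nancy pipeline; something like :

```csharp
public void Configure(IApplicationBuilder app)
{
    // Serve anything under wwwroot before Nancy ever sees the request.
    app.UseStaticFiles();

    app.UseOwin(x => x.UseNancy());
}
```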

Place all your static content in the wwwroot (CSS, Images, Javascript etc), and you will now be able to have these files served outside the Nancy pipeline.

Everything Else

Nancy has its own documentation here that you can take a read through. Some of it is pretty verbose, and you also have to remember that not all of it is applicable to .net core. Like the static content section above, there may be an existing solution with .net core that works better out of the box or is easier to configure. As always, drop a comment below to show off your new project you built in Nancy!

Uploading files in ASP.net core is largely the same as standard full framework MVC, with the large exception being how you can now stream large files. We will go over both methods of uploading a file in ASP.net core.

Model Binding IFormFile (Small Files)

When uploading a file via this method, the important thing to note is that your files are uploaded in their entirety before execution hits your controller action. What this means is that the disk on your server holds a temporary file while you decide where to push it. With small files this is fine; with larger files you run into issues of scale. If you have many users all uploading large files you are liable to run out of RAM (where the file is stored before moving it to disk), or disk space itself.

For your HTML, it should look something like this :
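
Something like this (the action URL is just an example, but keep the input name in mind for the controller below):

```html
<form method="post" enctype="multipart/form-data" action="/upload">
    <input type="file" name="files" multiple />
    <button type="submit">Upload</button>
</form>
```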

The biggest thing to note is that the encoding type is set to “multipart/form-data”; if this is not set then you will go crazy trying to hunt down why your file isn’t showing up in your controller.

Your controller action is actually very simple. It will look something like :
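
A sketch of that action (the route and the temp-file destination are just placeholders):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class UploadController : Controller
{
    [HttpPost("upload")]
    public async Task<IActionResult> Upload(List<IFormFile> files)
    {
        foreach (var file in files)
        {
            if (file.Length > 0)
            {
                // Purely illustrative destination - push the file wherever you need it.
                var filePath = Path.Combine(Path.GetTempPath(), file.FileName);
                using (var stream = new FileStream(filePath, FileMode.Create))
                {
                    await file.CopyToAsync(stream);
                }
            }
        }

        return Ok();
    }
}
```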

Note that the name of the parameter “files” should match the name on the input in HTML.

Other than that, you are done. There is nothing more you need to do.

Streaming Files (Large Files)

Instead of buffering the file in its entirety, you can stream the file upload. This does introduce challenges as you can no longer use the built in model binding of ASP.net core. Various tutorials out there show you how to get things working with massive pieces of code, but I’ll give you a helper class that should alleviate most of the work. Most of this work is taken from Microsoft’s tutorial on file uploads here. Unfortunately it’s a bit all over the place with helper classes that you need to dig around the web for.

First, take this helper class and stick it in your project. This code is taken from a Microsoft project here.
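
I won’t reproduce the Microsoft code verbatim here, but a reconstruction of what that helper does looks roughly like the following (the StringSegment handling assumes a reasonably recent Microsoft.Net.Http.Headers):

```csharp
using System;
using System.IO;
using Microsoft.Net.Http.Headers;

public static class MultipartRequestHelper
{
    // Content-Type: multipart/form-data; boundary="----WebKitFormBoundaryABC123"
    public static string GetBoundary(MediaTypeHeaderValue contentType, int lengthLimit)
    {
        var boundary = HeaderUtilities.RemoveQuotes(contentType.Boundary).Value;

        if (string.IsNullOrWhiteSpace(boundary))
            throw new InvalidDataException("Missing content-type boundary.");

        if (boundary.Length > lengthLimit)
            throw new InvalidDataException($"Multipart boundary length limit {lengthLimit} exceeded.");

        return boundary;
    }

    public static bool IsMultipartContentType(string contentType)
    {
        return !string.IsNullOrEmpty(contentType)
               && contentType.IndexOf("multipart/", StringComparison.OrdinalIgnoreCase) >= 0;
    }

    // Content-Disposition: form-data; name="key"
    public static bool HasFormDataContentDisposition(ContentDispositionHeaderValue contentDisposition)
    {
        return contentDisposition != null
               && contentDisposition.DispositionType.Equals("form-data")
               && string.IsNullOrEmpty(contentDisposition.FileName.Value)
               && string.IsNullOrEmpty(contentDisposition.FileNameStar.Value);
    }

    // Content-Disposition: form-data; name="myFile"; filename="photo.jpg"
    public static bool HasFileContentDisposition(ContentDispositionHeaderValue contentDisposition)
    {
        return contentDisposition != null
               && contentDisposition.DispositionType.Equals("form-data")
               && (!string.IsNullOrEmpty(contentDisposition.FileName.Value)
                   || !string.IsNullOrEmpty(contentDisposition.FileNameStar.Value));
    }
}
```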

Next, you will need this extension class, which again is taken from Microsoft code but moved into a helper so that it can be reused and your controllers stay slightly cleaner. It takes an input stream which is where your file will be written to. This is kept as a basic stream because the stream can really come from anywhere. It could be a file on the local server, or a stream to AWS/Azure etc.
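
Again this is a reconstruction rather than the verbatim Microsoft code, but the shape is the following (the class name is whatever you like):

```csharp
using System.Globalization;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Features;
using Microsoft.AspNetCore.Mvc.ModelBinding;
using Microsoft.AspNetCore.WebUtilities;
using Microsoft.Net.Http.Headers;

public static class FileStreamingHelper
{
    private static readonly FormOptions _defaultFormOptions = new FormOptions();

    public static async Task<FormValueProvider> StreamFile(this HttpRequest request, Stream targetStream)
    {
        if (!MultipartRequestHelper.IsMultipartContentType(request.ContentType))
            throw new InvalidDataException($"Expected a multipart request, but got {request.ContentType}");

        // Accumulates any ordinary form fields that arrive alongside the file.
        var formAccumulator = new KeyValueAccumulator();

        var boundary = MultipartRequestHelper.GetBoundary(
            MediaTypeHeaderValue.Parse(request.ContentType),
            _defaultFormOptions.MultipartBoundaryLengthLimit);

        var reader = new MultipartReader(boundary, request.Body);
        var section = await reader.ReadNextSectionAsync();

        while (section != null)
        {
            if (ContentDispositionHeaderValue.TryParse(section.ContentDisposition, out var contentDisposition))
            {
                if (MultipartRequestHelper.HasFileContentDisposition(contentDisposition))
                {
                    // Stream the file section straight into the caller's target stream.
                    await section.Body.CopyToAsync(targetStream);
                }
                else if (MultipartRequestHelper.HasFormDataContentDisposition(contentDisposition))
                {
                    var key = HeaderUtilities.RemoveQuotes(contentDisposition.Name).Value;

                    using (var streamReader = new StreamReader(section.Body, Encoding.UTF8,
                        detectEncodingFromByteOrderMarks: true, bufferSize: 1024, leaveOpen: true))
                    {
                        var value = await streamReader.ReadToEndAsync();
                        formAccumulator.Append(key, value);
                    }
                }
            }

            section = await reader.ReadNextSectionAsync();
        }

        // Hand back the non-file form values so the caller can bind a view model.
        return new FormValueProvider(
            BindingSource.Form,
            new FormCollection(formAccumulator.GetResults()),
            CultureInfo.CurrentCulture);
    }
}
```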

Now, you actually need to create a custom action attribute that completely disables form binding. This is important, otherwise MVC will still try and buffer the contents of the request regardless. The attribute looks like :
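
It boils down to a resource filter that strips out the form value provider factories; something like :

```csharp
using System;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.AspNetCore.Mvc.ModelBinding;

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class DisableFormValueModelBindingAttribute : Attribute, IResourceFilter
{
    public void OnResourceExecuting(ResourceExecutingContext context)
    {
        // Stop MVC buffering the form; we want to read the multipart body ourselves.
        context.ValueProviderFactories.RemoveType<FormValueProviderFactory>();
        context.ValueProviderFactories.RemoveType<JQueryFormValueProviderFactory>();
    }

    public void OnResourceExecuted(ResourceExecutedContext context)
    {
    }
}
```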

Next, your controller should look similar to the following. We create a stream and pass it into the StreamFile extension method. The output is our FormValueProvider which we use to bind our model manually after the file streaming. Remember to put your custom attribute on the action to force the request to not bind.
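
A sketch of that action (route, file destination and view model name are all just placeholders):

```csharp
public class StreamingUploadController : Controller
{
    [HttpPost("upload-stream")]
    [DisableFormValueModelBinding]
    public async Task<IActionResult> UploadLargeFile()
    {
        // Illustrative target; this could just as easily be a stream to Azure/AWS.
        var tempFile = Path.GetTempFileName();

        FormValueProvider formModel;
        using (var stream = System.IO.File.Create(tempFile))
        {
            formModel = await Request.StreamFile(stream);
        }

        // Bind the rest of the form to our view model manually.
        var viewModel = new UploadViewModel();
        var bindingSuccessful = await TryUpdateModelAsync(viewModel, prefix: "", valueProvider: formModel);

        if (!bindingSuccessful && !ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        return Ok(viewModel);
    }
}
```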

In my particular example I have created a stream that writes to a temp file, but obviously you can do anything you want with it. The model I am using is the following :
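
Nothing fancy, just a plain class whose property names (illustrative here) match the form inputs :

```csharp
public class UploadViewModel
{
    public string Title { get; set; }
    public string Description { get; set; }
}
```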

Again, nothing special. It’s just showing you how to bind a view model even when you are streaming files.

My HTML form is pretty standard :
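
Just a multipart form whose text inputs line up with the view model’s property names :

```html
<form method="post" enctype="multipart/form-data" action="/upload-stream">
    <input type="text" name="Title" />
    <input type="text" name="Description" />
    <input type="file" name="file" />
    <button type="submit">Upload</button>
</form>
```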

And that’s all there is to it. Now when I upload a file, it is streamed straight to my target stream, which could be an upload stream to AWS/Azure or any other cloud provider, and I still managed to get my view model out too. The biggest downside is of course that the view model details are not available until the file has been streamed. What this means is that if there is something in the view model that would normally determine where the file goes, or the name etc, this is not available to you without a bit of rejigging (but it’s definitely doable).

 

Middleware is the new “pipeline” for requests in asp.net core. Each piece of middleware can process part or all of the request, and then either choose to return the result or pass on down to the next piece of middleware. In the full ASP.net Framework you were able to specify “HTTP Modules” that acted somewhat like a pipeline, but it was hard to really see how the pieces fit together at times.

Anywhere you would normally write an HTTP Module in the full ASP.net Framework is where you should probably now be using middleware. In fact, for most places you would normally use MVC filters, you will likely find it easier or more convenient to use middleware.

This diagram from Microsoft does a better job than I could of showing how the pipeline works. Do notice however that you can do work both before and after the pass down to the next middleware. This is important if you wish to affect results as they are going out rather than as they are coming in.

Basic Configuration

The easiest way to get started with middleware is in the Configure method of your startup.cs file. In here is where you “chain” your pipeline together. It may end up looking something like the following :
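
Something along these lines (exactly which middleware you chain in will depend on your app):

```csharp
public void Configure(IApplicationBuilder app)
{
    // Ordering matters - static files get first crack at the request,
    // and anything not handled there falls through to MVC.
    app.UseStaticFiles();
    app.UseMvc();
}
```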

Now an important thing to remember is that ordering is important. In this pipeline the static files middleware runs first; this middleware can choose to pass on the request (e.g. not process the response in its entirety) or it can choose to push a response to the client and not call the next middleware in the chain. That last part is important because it will often trip you up if your ordering is not correct (e.g. if you wish to authenticate/authorize someone before hitting your MVC action).

App.Use

App.Use is going to be the most common pipeline building block you will come across. It allows you to add on something to the response and then pass to the next middleware in the pipeline, or you can force a short circuit and force a return result without passing to the next handler.

To illustrate, consider the following
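
Something like these two handlers (WriteAsync needs a using for Microsoft.AspNetCore.Http):

```csharp
app.Use(async (context, next) =>
{
    // Do something on the way in, then hand off to the next middleware.
    context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
    await next();
});

app.Use(async (context, next) =>
{
    // No call to next() here, so this is where the pipeline stops.
    await context.Response.WriteAsync("Hello World!");
});
```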

In this example, the first handler adds a response header of X-Content-Type-Options and then passes the request to the next handler. The next handler writes the response text “Hello World!” and, because it never calls next(), the pipeline ends there and execution finishes. In this way we can short circuit the pipeline if some criteria hasn’t been met, for example if an authorization key is not present.

App.Run

You may see App.Run appear around the place; it’s like App.Use’s little brother. App.Run is an “end of the line” middleware. It can handle generating a response, but it doesn’t have the ability to pass the request down the chain. For this reason you will see App.Run middleware at the end of the pipeline and nowhere else.
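
For example :

```csharp
app.Run(async context =>
{
    // Note there is no "next" here - App.Run is always the end of the pipeline.
    await context.Response.WriteAsync("Hello World!");
});
```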

App.Map

App.Map is used when you want to build a mini pipeline only for a certain URL. Note that the URL given is used as a “starts with”. So commonly you might see this in middleware for locking down an admin area. The usage is pretty simple :
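
Something like :

```csharp
app.Map("/helloworld", mappedApp =>
{
    mappedApp.Run(async context =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
});
```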

So in this example when someone goes to /helloworld, they will be given the Hello World! message back.

App.MapWhen

This is similar to Map, but allows for much more complex conditionals. This is great if you are checking for a cookie or a particular query string, or need more powerful Regex matching on URLs. In the example below, we are checking for a query string of “helloworld”, and if it’s found we return the response of Hello World!
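
A sketch of that check :

```csharp
app.MapWhen(
    context => context.Request.Query.ContainsKey("helloworld"),
    mappedApp =>
    {
        mappedApp.Run(async context =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    });
```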

Building A Middleware Class

And after all that, we’re going to toss it in the bin and create a middleware class! Middleware classes use quite a different syntax to all of the above, and really it just groups them all together in a simple pattern.

Let’s say I want to build a class that will add common security headers to my site. If I right click inside my .net core web project and select “Add Item”, there is actually an option to add a middleware class template.

If you can’t find this, don’t fear because there is absolutely nothing special about using the template. It just provides the basic structure of how your class should look, but you can copy and paste it from here into a plain old .cs file and it will work fine. Our security middleware looks something like the following :
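
The class name and the exact headers are just for illustration :

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class SecurityHeadersMiddleware
{
    private readonly RequestDelegate _next;

    public SecurityHeadersMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Add the headers we care about, then pass the request on.
        context.Response.Headers.Add("X-Frame-Options", "DENY");
        context.Response.Headers.Add("X-XSS-Protection", "1; mode=block");
        context.Response.Headers.Add("X-Content-Type-Options", "nosniff");

        await _next(context);
    }
}

// The "Extensions part" the template generates alongside the class.
public static class SecurityHeadersMiddlewareExtensions
{
    public static IApplicationBuilder UseSecurityHeaders(this IApplicationBuilder builder)
    {
        return builder.UseMiddleware<SecurityHeadersMiddleware>();
    }
}
```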

Very simple. In our invoke method we do all the work we want to do and pass it onto the next handler. Again, we can short circuit this process by returning an empty Task instead of calling next.

Another thing to note is that this is instantiated using the service container. That means if you have a need to access a database or call in settings from an external source, you can simply add the dependency in the constructor.

Important : App middleware is constructed at startup. If you require scoped dependencies or anything other than singletons, do not inject your dependencies into the constructor. Instead, you can add dependencies as parameters of the Invoke method and .net core will work it out. Again, if you need scoped dependencies add these to your Invoke method, not your constructor.

The Extensions part is optional, but it does allow you to write code like this :
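
That is, registering the middleware becomes a single tidy call :

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseSecurityHeaders();
    app.UseMvc();
}
```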

 

In a previous post, we talked about how to use a Redis Cache in .net Core. In most large scale scenarios, Redis is going to be your go-to. But for tiny sites that have a single web instance, or for sites that really only need a local cache, InMemory caching is much easier to get set up with and obviously does away with wrangling a Redis server.

Interestingly, .net Core currently offers two ways to implement a local in memory cache. We’ll take a look at both.

IMemoryCache

The first option is to use what is simply known in .net core as IMemoryCache. It’s similar to what you may have used in standard ASP.net in terms of storing an object in memory by a key.

First open up your startup.cs. In your ConfigureServices method you need to add a call to “AddMemoryCache” like so :
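
Like so :

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache();
    services.AddMvc();
}
```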

In your controller or class where you wish to use the memory cache, add a dependency into the constructor. The two main methods you will likely be interested in are “TryGetValue” and “Set”. Both should be rather self-explanatory in the following code :
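
A sketch along these lines (the cache key and cached value are just placeholders):

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

public class HomeController : Controller
{
    private readonly IMemoryCache _memoryCache;

    public HomeController(IMemoryCache memoryCache)
    {
        _memoryCache = memoryCache;
    }

    [HttpGet]
    public string Get()
    {
        // Try and pull the value out of the cache first.
        if (_memoryCache.TryGetValue("MyCacheKey", out string cachedValue))
        {
            return cachedValue;
        }

        // Not there, so build it and cache it for next time.
        cachedValue = DateTime.Now.ToString();
        _memoryCache.Set("MyCacheKey", cachedValue, TimeSpan.FromMinutes(5));

        return cachedValue;
    }
}
```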

That’s the basics. Now for a couple of nice things that this implementation of a memory cache has that won’t be available in the next implementation I will show you.

PostEvictionCallback
An interesting feature is the PostEvictionCallback delegate. This allows you to register an action to be called every time something “expires”. To use it, it will look something like the following :
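
Something like this, using the injected IMemoryCache from above :

```csharp
var options = new MemoryCacheEntryOptions()
    .SetAbsoluteExpiration(TimeSpan.FromSeconds(30))
    .RegisterPostEvictionCallback((key, value, reason, state) =>
    {
        // Runs whenever this entry is evicted from the cache.
        Console.WriteLine($"Cache entry {key} was evicted. Reason: {reason}");
    });

_memoryCache.Set("MyCacheKey", "My Value", options);
```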

Now every time a cache entry expires, you will be notified about it. Usually there are very limited reasons why you would want to do this, but the option is there should you want it!

CancellationToken
CancellationTokens are also supported. Again, a feature that probably won’t be used too often, but it can be used to invalidate a whole set of cache entries in one go. CancellationTokens are notoriously difficult to debug and get going, but if you have the need, it’s there!

The code would look something similar to this :
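
Roughly (CancellationChangeToken lives in Microsoft.Extensions.Primitives):

```csharp
var cancellationTokenSource = new CancellationTokenSource();

var options = new MemoryCacheEntryOptions()
    .AddExpirationToken(new CancellationChangeToken(cancellationTokenSource.Token));

_memoryCache.Set("Key1", "Value1", options);
_memoryCache.Set("Key2", "Value2", options);

// Later on, cancelling the token invalidates every entry that used it.
cancellationTokenSource.Cancel();
```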

Distributed Memory Cache

A “distributed” memory cache is probably a bit of an oxymoron. It’s obviously not distributed if it’s sitting local to a machine. But the big advantage to going down this road is that should you intend to switch to using Redis in the future, the interfaces between the Redis distributed cache and the in memory one are exactly the same. It’s just a single line of difference in your startup. This may also be helpful if locally you just want to use your machine’s cache and not have Redis set up.

To get going, in your startup.cs ConfigureServices method, add a call to AddDistributedMemoryCache like so :
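
Like so :

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddDistributedMemoryCache();
    services.AddMvc();
}
```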

Your controller or class where you inject it should look a bit like the following. Again, this code is actually taken directly from our Redis Cache tutorial; the implementation is exactly the same for InMemory, it’s only the call in ConfigureServices in your startup.cs that changes.
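
A sketch (cache key and value again just placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Distributed;

public class HomeController : Controller
{
    private readonly IDistributedCache _cache;

    public HomeController(IDistributedCache cache)
    {
        _cache = cache;
    }

    [HttpGet]
    public async Task<string> Get()
    {
        var cachedValue = await _cache.GetStringAsync("MyCacheKey");
        if (cachedValue != null)
        {
            return cachedValue;
        }

        cachedValue = DateTime.Now.ToString();
        await _cache.SetStringAsync("MyCacheKey", cachedValue);

        return cachedValue;
    }
}
```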

Now, looking at what we said about IMemoryCache above, PostEvictionCallback and CancellationTokens cannot be used here. This makes sense because this interface is, for the most part, supposed to be used with distributed environments where any machine in the environment (or the cache itself) could expire/remove a cache entry.

Another very important difference is that while IMemoryCache accepts C# “objects” into the cache, a distributed cache does not. A distributed cache can only accept byte arrays or strings. For the most part this isn’t going to be a big deal. If you are trying to store large objects just run them through a JSON serializer first and store them as a string, when you pull them out deserialize them into your object.

Which Should You Pick?

You are likely going to write an abstraction layer on top of either caching interface, meaning that your controllers/services aren’t going to see much of it. For what you should use inside that abstraction, I tend to go with the Distributed Cache for no other reason than that should I ever want to move to Redis, the option is there.

There are two types of people when it comes to Database Migrations. Those that want a completely automated process in their CI pipeline and have very strong opinions on what tool to use, and then there are those who couldn’t care less and just want to run things by hand (Seriously, those still exist).

This article will try and focus on the former, on what tools best fit into the .net core ecosystem. I think most of it isn’t specific to .net core, but there are a few extra considerations to take into account.

First let’s take a look at what I think are the 4 main principles of a database migration suite (Or any step in a CI pipeline).

Repeatable

You should be able to stand up a database from scratch and have it function exactly as the existing production database does. This may include pre-seeding data (such as admin users) right from the get go. Repeatable also means that if for some reason the migration runs again, it should be “aware” of what it’s already done and not blow up trying to run the same scripts over and over.

Automated

As much as possible, a good database migration strategy should be automated. When trying to reduce the element of human error, you should just remove the humans altogether.

Scalable

There are actually two parts to this. Scalable means that as the database gets bigger, your migration doesn’t start falling over. This also means that the tooling you might be using to generate diffs or actually run the migrations doesn’t start dying too. But scalable also has a second meaning, and that is scaling with your team. As your team reaches large proportions on a single project (say close to a dozen developers), does managing migrations and potential code conflicts get out of control?

Flexible

For database migrations, flexible is all about whether the tooling is good enough to handle all sorts of database changes, not just schema changes. If you ever split a field, you will have to migrate that data somehow. Other times you may want to mass update data to fix a previous bug, and you want this to be automated along with your database rollout, not run manually and forgotten about.

Database Project/SQL Project in Visual Studio

I remember the first few times I used a database project to update a database schema, it felt like magic. Now it feels like it has its place for showing a database state, but its migration ability is severely lacking. In terms of how a dbproj actually processes an update, it compares the existing database schema to the desired state, and then generates scripts to get it there and runs them. But let’s think about that for a second: there may be many “steps” that we want to take to get us to the desired schema, not one huge leap forward. It’s less of a step by step migration and more of a blunt tool to get you to the end result the fastest.

Because of the way a dbproj processes migrations, it also makes updating actual data very hard to do. You may have a multi step process in which you want to split a column slowly by creating a temporary field, filling the data into there, and then doing a slow migration through code. It doesn’t really work because a dbproj is more of a “desired state” migration.

That being said, database projects are very good at seeing how a database has evolved over time. Because your database objects become “code” in your source control, you can see how columns/views/stored procedures have been added/removed over time, and with the associated check in comment too.

It should also be noted that DB Projects are not multi platform (Both in OS and database). If you are building your existing .net core project on Linux then database projects are out. And if you are running anything but Microsoft SQL Server, database projects are also out.

Repeatable : No (Migration from A -> B -> C will always work the same, but going straight from A -> C may give you weird side effects in data management).
Automated : Yes
Scalable : Yes
Flexible : No – Data migrations are very hard to pull off in a repeatable way

Entity Framework Migrations

EF Migrations are usually the go to when you are using Entity Framework as your data layer. They are a true migration tool that can be started from any “state” and run in order to bring you to the desired state. Unlike a dbproj, they always run in order so you are never “skipping” data migrations by going from state A -> C. It will still run A -> B -> C in order.

EF Migrations are created using their own fluent API in C# code. For some this feels natural, but others feel limited in what they can achieve trying to control a complex database with a subset of SQL commands that have been converted to the DSL. If you think about complex covering indexes where you have multiple columns along with include columns etc, there is probably some very convoluted way to do it via C# code, but for most people it becomes a hassle. Add to that the fact that ORMs in general “hide” what queries they are actually running, so now you are also hiding how your database is actually tuned and created, and it does make some people squirm.

But the biggest issue with EF Migrations is the actual migration running itself. Entity Framework is actually “aware” of the state of the database and really throws its toys out of its cot if things aren’t up to date. It knows what migrations “should” have run, so if there are pending migrations, EF really does have a hissy fit. When you are running blue/green deployments, or you have a rolling set of web servers that need to be compatible with your old and new database schema, EF Migrations simply do not work.

Repeatable : Yes
Automated : Yes
Scalable : No
Flexible : Sort Of – You can migrate data and even write custom SQL in an EF Migration, but for the most part you are limited to the EF fluent API

DB Up

Full disclosure, I love DB Up. And I love it even more now that it has announced .net core support is coming (Or already here depending on when you read this).

DB Up is a migration framework that uses a collection of SQL files, and runs them in order from start to finish. From any existing state to any desired state. Because they are just plain old SQL files, every SQL command is available to you, making DB Up easily the most powerful migration tool in your arsenal for dealing with extremely complex databases. Data migrations also become less of a pain because you simply have every single SQL tool available.

In a CI scenario, DB Up is able to build a database from scratch to any point in time, meaning that testing your “new” C# code on your “old” database is now a breeze. I can’t tell you how many times this has saved my ass in a large team environment. People are forever removing columns in a SQL database before removing them from code, causing the old code to error out and crash. In large web deployment scenarios where there are crossover periods with a new database schema being in play, but old web code running on top of it, it’s a godsend.

The other great thing about DB Up is that it’s database agnostic (to a degree). It supports Postgres, MySQL, Firebird, SQL Azure and of course SQL Server. This is great news if you are running your .net core code on something like Postgres.

But, there is a big flaw when using something like DB Up. It becomes hard to see how a particular table has changed over time. Using DB Up all you have is a collection of scripts that, when run in order, give you the desired result, but using source control it becomes difficult to see how something has evolved. In terms of migrations, this isn’t a big deal. But in terms of being able to see an overall picture of your database, not so great.

Repeatable : Yes
Automated : Yes
Scalable : Yes
Flexible : Yes

Hybrid Approach

I tend to take a hybrid approach for most projects. DB Up is far and away the best database migration tool, but it is limited in seeing the overall state of the database. That’s where a database project comes in. It’s great for seeing the overall picture, and you can check back through source control to see how things have changed over time. But its migration facilities are severely limited. With that, it becomes a great fit to use both.

And You?

How do you do your migrations? Do you use one of the above? Or maybe something like Fluent Migrator? Drop a comment below!

With .net core comes a new way to build and run unit tests with a command line tool named “dotnet test”. While the overall syntax of writing tests using MSTest, XUnit or NUnit hasn’t changed, the tooling has changed substantially from what people are used to. There are even a couple of gotchas that are enough to cause migraines if you aren’t careful.

Annoyingly, many of the packages you need to get started are in pre-release, and much of the documentation is locked into the old style DNX tooling. This article focuses on .net Core 1.1 – Preview 2. If you are already running Preview 4 then much of this will still apply, but you will be working with csproj files instead of project.json. Once Preview 4 gets a proper release, this article will be updated to reflect the (likely) many changes that go along with it.

Let’s get started.

The Basics Using MSTest

We will go over the basics using the MSTest framework. If you prefer to use NUnit or XUnit there are sections below to explain how they run within dotnet test, but still read through this section as it will give you the general gist of how things work.

Create a .net core console project in your solution. Now you are probably used to creating a class library for unit tests or maybe even a “unit test” project. We do not use the “Unit Test” project type as there is no such type for .net core, only the full framework. And we do not use a class library only for the reason that it defaults the target framework to .net standard, not .net core. It’s not that we are going to be building a console application per se, it’s that the defaults for a console application in terms of project.json are closer to what we need.

Once the project is created, if you are on .net core 1.1 or lower, open up project.json. Delete the entire “buildOptions” node. That’s this node :
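
From memory the console template generates something close to this; it’s the emitEntryPoint setting we’re getting rid of :

```json
"buildOptions": {
  "emitEntryPoint": true
}
```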

You can also go ahead and delete the Program.cs file as we won’t be needing it.

Inside that project, you need to install a couple of packages.

You first need to install the MSTest Framework package from here. At this point in time it is still in Pre-Release so you need the “Pre” flag (At some point this will change).
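
From the Package Manager Console, something like :

```
Install-Package MSTest.TestFramework -Pre
```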

And now install the actual runner package from here. Again, it’s in Pre-Release so you will need the “Pre” flag.
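
At the time of writing the runner package was named dotnet-test-mstest :

```
Install-Package dotnet-test-mstest -Pre
```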

Now go back to your project.json and add a top level “testRunner” node with the value “mstest”. All in all, your project.json should look like the following :
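
Roughly along these lines (the package versions are placeholders; use whatever versions NuGet gave you):

```json
{
  "version": "1.0.0-*",
  "testRunner": "mstest",
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.1.0"
    },
    "MSTest.TestFramework": "1.0.*",
    "dotnet-test-mstest": "1.1.*"
  },
  "frameworks": {
    "netcoreapp1.1": {}
  }
}
```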

Great! Now, for the purpose of this article we will create a simple class called “UnitTests.cs”, and add in a couple of tests. That file looks like the following :
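
Nothing fancy, just enough tests to see a pass and a fail :

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyUnitTests
{
    [TestClass]
    public class UnitTests
    {
        [TestMethod]
        public void TestPasses()
        {
            Assert.IsTrue(true);
        }

        [TestMethod]
        public void TestFails()
        {
            Assert.Fail("This test was always going to fail.");
        }
    }
}
```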

Open a command prompt in your unit test directory. Run “dotnet test”. You should see something like the following :

And you are done! You now have a unit test project within .net core.

Using NUnit With Dotnet Test

Again, follow the steps above to create a console application, but remove the buildOptions node in your project.json.

Add in your NUnit nuget package by running the following from the package manager console :
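
The framework package itself :

```
Install-Package NUnit
```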

Now you need to add the NUnit test runner. Install the following nuget package. Note that at the time of this article, the package is only in pre-release so you need the Pre flag :
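
At the time of writing that was the dotnet-test-nunit package :

```
Install-Package dotnet-test-nunit -Pre
```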

Inside your project.json, you need a top level node called “testRunner” with the value of “nunit”. If you’ve followed the steps carefully, you should have a project.json that resembles the following :

For the purpose of this article, I have the following test class :
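
Again just enough to prove the runner works :

```csharp
using NUnit.Framework;

namespace MyUnitTests
{
    [TestFixture]
    public class UnitTests
    {
        [Test]
        public void TestPasses()
        {
            Assert.IsTrue(true);
        }

        [Test]
        public void TestFails()
        {
            Assert.Fail("This test was always going to fail.");
        }
    }
}
```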

Open a command prompt inside your project folder and run “dotnet test”. You should see something similar to the following :

Using XUnit With Dotnet Test

Again, follow the steps above to create a console application, but remove the buildOptions node in your project.json.

Add in XUnit Core nuget package to your project. In your package manager console run :
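
From the Package Manager Console :

```
Install-Package xunit -Pre
```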

Now add in the XUnit test runner with the following command  in your package manager :
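
At the time of writing the runner was the dotnet-test-xunit package :

```
Install-Package dotnet-test-xunit -Pre
```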

Open up your project.json. Add in a top level node named “testRunner” with the value of “xunit”. All going well, your project.json should resemble something close to the following :

For this article, I have the following test class :
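
The XUnit flavour of the same tests :

```csharp
using Xunit;

namespace MyUnitTests
{
    public class UnitTests
    {
        [Fact]
        public void TestPasses()
        {
            Assert.True(true);
        }

        [Fact]
        public void TestFails()
        {
            Assert.True(false, "This test was always going to fail.");
        }
    }
}
```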

Open up a command prompt in your project folder. Run the command “dotnet test” and you should see something similar to the following :

GZIP is a generic compression method that can be applied to any stream of bits. In terms of how it’s used on the web, it’s easiest thought of as a way to make your files “smaller”. When applying GZIP compression to text files, you can see anywhere from 70-90% savings; on images or other files that are usually already “compressed”, savings are much smaller or in some cases nothing. Other than eliminating unnecessary resource downloads, enabling compression is the next best way to improve the loading speed of your web app.

Enabling At The Code Level (Dynamic)

You can dynamically GZIP text resources on request. This is great for content that may change (Think dynamic html, text, not JS, CSS). The reason it’s not great for static content such as CSS or JS, is that those files will not change between requests, or really even between deployments. There is no advantage to dynamically compressing these each time they are requested.

The code you need to make this work is contained in a nuget package. So you will need to run the following from your package manager console.
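
That package is Microsoft.AspNetCore.ResponseCompression :

```
Install-Package Microsoft.AspNetCore.ResponseCompression
```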

In your ConfigureServices method in your startup.cs, you need to add a single line to add the dependencies that GZIP compression is going to need.
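
In its simplest form :

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression();
    services.AddMvc();
}
```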

Note that there are a couple of options you may wish to use in this call.

The EnableForHttps flag does exactly what it says. It enables GZip compression even over SSL (By default this is switched off). The second turns on (Or off in some cases) GZip of certain mimetypes. The default list at the time of writing is the following (Taken from the Github source here) :
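
Setting both options looks something like this; the comment lists the defaults roughly as I remember them, so check the source for the exact list for your version (Concat needs System.Linq):

```csharp
services.AddResponseCompression(options =>
{
    options.EnableForHttps = true;

    // Default compressed mime types are approximately:
    // text/plain, text/css, application/javascript, text/html,
    // application/xml, text/xml, application/json, text/json.
    // Concat adds your own types on top of the defaults.
    options.MimeTypes = ResponseCompressionDefaults.MimeTypes.Concat(new[] { "image/svg+xml" });
});
```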

Then in your configure method of startup.cs, you then just need a single line to get going.
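
Like so :

```csharp
public void Configure(IApplicationBuilder app)
{
    // Compression needs to come before anything that writes the response.
    app.UseResponseCompression();
    app.UseMvc();
}
```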

Note that the order is very important! While in this case we only have two pieces of middleware that don’t cause issues, other middleware (such as the static content middleware) will actually send the response back to the user before GZip has occurred if it’s first in the list. For safety’s sake, just keep UseResponseCompression as the first middleware in your list.

To test you have everything up and running just fine, try loading your site and watching the requests go past. You should see the Content-Encoding header come back as “Gzip”. In Chrome it looks like the following :

Enabling At The Code Level (Static)

The issue with enabling GZip compression inside .net core is that if your content is largely static (As in it doesn’t change), then it’s being re-compressed over and over again. When it comes to javascript and CSS, there is really no reason for this to be done on the fly. Instead we can GZip these files beforehand and deploy them “pre” GZip’d.

For this you will need to use a task runner such as Gulp to GZIP your static files in your pipeline. There is a very simple gulp package for this named “gulp-gzip”. Have a search around and get used to how it works. But your gulp file should look something similar to the following :
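
A bare bones gulpfile using gulp-gzip might look something like this (the paths are just examples):

```javascript
var gulp = require("gulp");
var gzip = require("gulp-gzip");

gulp.task("compress", function () {
    return gulp.src("./wwwroot/js/*.js")
        .pipe(gzip())
        .pipe(gulp.dest("./wwwroot/js"));
});
```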

This is a very simple case, you will need to know a bit more about Gulp to customize it to your needs.

Importantly however, you need to map the GZipped files so that they actually return their real result. First install the static file middleware. Run the following from your Package Manager Console.
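
That package is :

```
Install-Package Microsoft.AspNetCore.StaticFiles
```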

Now in your Configure method in startup.cs, you need to add some code like so :
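
One way to do it (checking the file name is a fairly blunt approach, but it illustrates the idea):

```csharp
app.UseStaticFiles(new StaticFileOptions
{
    // .gz isn't a known mime type out of the box, so let it through.
    ServeUnknownFileTypes = true,
    OnPrepareResponse = context =>
    {
        if (context.File.Name.EndsWith(".js.gz"))
        {
            // Tell the browser this is javascript that happens to be gzip encoded.
            context.Context.Response.Headers["Content-Type"] = "application/javascript";
            context.Context.Response.Headers["Content-Encoding"] = "gzip";
        }
    }
});
```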

What does this do? Well first it says that serving static files without running the MVC gamut is A-OK. But next it says, if you come across a file that ends in .js.gz, you can send it out, but instead tell the browser that it’s actually javascript and that it’s just been encoded with gzip. Otherwise it just returns the file as a plain old GZip file that a user could download for example.

Next on your webpage, you actually need to be referencing the .gz file not the .js file. So it should look something like so :
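
For example (the path is just an example):

```html
<script src="/js/site.js.gz"></script>
```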

And that’s it, your static content is now compressed!

Enabling At The Server Level

And lastly, you can of course always enable GZip compression at the web server level (IIS, NGINX, Apache etc). For this it’s better to consult the relevant documentation. You may wish to do this as it keeps the configuration out of code (and allows a bit easier access for configuration on the fly).

One of the first things people notice when making the jump from the full .net Framework to .net Core is the inbuilt dependency injection. Having a DI framework right there ready to be used means less time trying to set up a boilerplate project.

Interestingly, a pattern that emerged from this was one whereby an extension of the service collection was used to add services. For example, you might see something like this :
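
For example :

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
}
```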

But what is actually inside the “AddMvc” call? Jumping into the source code (And going a few more levels deeper) it actually looks like this :

So under the hood it’s just doing plain old additions to the service collection, but it’s moved away into a nice tidy “add” call. The pattern might have been around for a while but it was certainly the first time I had seen it. And it’s certainly easy on the eye when looking into your startup.cs. Instead of hundreds of lines of adding services (or private method calls), there are just a couple of extensions being used.

Building Your Own Extension

Building your own extension is simple. A glimpse into the project structure I’ll use for this demo is here :

All pretty standard. I have a Services folder with MyService and its interface inside it. Inside there we have a file called “ServicesConfiguration”. The contents of that file looks like the following :
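
Something along these lines (the extension method name is whatever you like):

```csharp
using Microsoft.Extensions.DependencyInjection;

public static class ServicesConfiguration
{
    public static void AddCustomServices(this IServiceCollection services)
    {
        // Register our services exactly as we would inside startup.cs.
        services.AddTransient<IMyService, MyService>();
    }
}
```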

So this has a single method that is marked as static (important!). The method accepts an IServiceCollection parameter as “this”, also very important. Inside here, we simply register our services exactly how we normally would.

Now if we go to our startup.cs, we can add a call to this method like so :
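
```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddCustomServices();
}
```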

Obviously as we add more services, our startup.cs doesn’t change; it stays nice and clean. We simply add more to our ServicesConfiguration.

…But Is It A Good Pattern?

There are a few caveats I feel when using this pattern. And both are to do with building libraries or sharing code.

You are locked into the ServiceCollection DI

This isn’t necessarily true since another developer could simply choose not to call the “AddXYZServices” method, but it’s likely going to cause a lack of documentation around how things work. I would guess that most documentation would boil down to “Now just add services.AddXYZServices() and you are done” rather than actually go into detail about the interfaces and classes that are required.

IServiceCollection is not .net Standard

IServiceCollection is not part of the .net standard. Therefore if you wish to create a library that can be used cross .net platform (e.g. in Xamarin, .net Framework, Mono etc) you cannot use these sorts of extension methods.

Clickjacking, XSS and CSRF are exploits that have been around for 15+ years now and still form the basis for many vulnerabilities on the web today. If you spend any time around bug bounty programs you will notice similar patterns with these exploits; many of them could have been prevented with just a few HTTP headers in place. A website that goes into production without these is asking for an exploit. Even if they have “other mitigations” in place, on most platforms it takes all of 5 minutes to add in a custom header which could save your site from an unfortunate vulnerability.

These headers should be on any “go live” checklist unless there is a specific reason why they can’t be (For example it’s an internal site that is going to be framed elsewhere, you may not want to disallow frames). While this site focuses on .net core, these headers are language and web server agnostic and can be used on any site.

X-Frame-Options

X-Frame-Options tells the browser that if your site is placed inside an HTML frame, to not render anything. This is super important when trying to protect yourself against clickjacking attempts.

In the early days of the web, Javascript frame breakers were all the rage to stop people from framing your site. Ever seen this sort of code?
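
The classic snippet went something like this :

```html
<script type="text/javascript">
    if (top != self) {
        top.location = self.location;
    }
</script>
```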

It was almost a staple in the early 2000’s. But like most things Javascript, people found ways to get around it and invented a frame breaker breaker. Other issues included human error aka ensuring that the frame breaker code was actually on every page of your site. That one page you forgot to include the javascript snippet on? Yeah that’s the exact page someone used for clickjacking. Happens all the time!

An HTTP header that disallows frames server wide, with full browser support, should be your go-to even if you want to use a javascript framebreaker side by side.

X-Frame-Options is most commonly set to be “SameOrigin”, that is that the website can frame itself. But unless you actually have a need for this, the option of “Deny” is your best bet.

A guide to using X-Frame-Options in .net core can be found here.

X-XSS-Protection

X-XSS-Protection is a header that activates a browsers own XSS protection. All modern browsers have inbuilt XSS filtering capabilities that try and catch XSS vulnerabilities before the page is fully rendered to you. By default these are turned on in the browser, but a user may go ahead and switch them off. By using the X-XSS-Protection header, you can actually tell the browser to ignore what the user has done and enforce the inbuilt filter.

By using the “1;mode=block” setting, you can actually go further and stop the entire document from rendering if a XSS attack is found anywhere. (By default it removes only the portion of the document that it detects as XSS).

In fact, “1;mode=block” should be the only setting you use for X-XSS-Protection. Using only “1” could actually be a vulnerability in and of itself. For more read here : http://blog.innerht.ml/the-misunderstood-x-xss-protection/

Another thing to note is that the browser’s XSS filters are routinely bypassed by new ways of sneaking things through. It’s never going to be 100%. But it’s always a base level of protection that you would be silly not to utilize.

A guide to using X-XSS-Protection in .net core can be found here.

X-Content-Type-Options

X-Content-Type-Options is a header I often see missed because in and of itself, it’s not an exploit per se, but it’s usually paired up with another vulnerability to create something much more damaging. A good example is the JSON Array Vulnerability (now fixed) that was around for a very long time. It allowed browsers to run a JSON array as javascript, something that would not be allowed had X-Content-Type-Options been set.

The header stops a browser “guessing” or ignoring what the mime type of a file is and acting on it accordingly. For example if you reference a resource in a <script> tag, that resource has to have a mime-type of javascript. If instead it returns text/plain, your browser will not bother running it as a script. Without the header, your browser really doesn’t care and will attempt to run it anyway.

A guide to using X-Content-Type-Options in .net core can be found here.

Content Security Policy

While you are reviewing security, why not take a look at implementing a Content Security Policy (CSP). While it doesn’t have as broad browser support as the previous headers mentioned, it is the future of securing browsers and wouldn’t hurt to add. After all, when it comes to security you want to close all the doors, not just the ones you think people will actually walk through.

X-Content-Type-Options is a header that tells a browser to not try and “guess” what a mimetype of a resource might be, and to just take what mimetype the server has returned as fact.

At first this header seems kinda pointless, but it’s one of the simplest ways to block attack vectors that use javascript. For example, for a long time browsers were susceptible to a very popular JSON Array Vulnerability. The vulnerability essentially allowed an attacker to make your browser request a non-javascript file from a site, but run it as javascript, and in the process leak sensitive information.

For a long time, anything within a script tag like so :
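
```html
<!-- Illustrative example - the URL here matches the error message quoted below -->
<script src="http://somesite.com/not-a-javascript-file"></script>
```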

Would be run as javascript, irrespective of whether the return content-type was javascript or not. With X-Content-Type-Options turned on, this is not the case and if the file that was requested to be run does not have the javascript mime-type, it is not run and you will see a message similar to below.

Refused to execute script from ‘http://somesite.com/not-a-javascript-file’ because its MIME type (‘application/json’) is not executable, and strict MIME type checking is enabled.

On its own the header might not block much, but when you are securing a site you close every door possible.

Another popular reason for using this header is that it stops “hotlinking” of resources. Github recently implemented this header so that people wouldn’t reference scripts hosted in repositories.

X-Content-Type-Options Settings

X-Content-Type-Options : nosniff
This is the only setting available for this header. It’s an on/off type header with no other “settings” available.

Setting X-Content-Type-Options At The Code Level

Like other headers, the easiest way is to use a custom middleware like so :
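
For example :

```csharp
app.Use(async (context, next) =>
{
    context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
    await next();
});
```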

Setting X-Content-Type-Options At The Server Level

When running .net core in production, it is very unlikely that you will be using Kestrel without a web server in front of it (in fact you 100% should not be), so some prefer to set headers at the server level. If this is you then see below.

Setting X-Content-Type-Options in IIS

You can do this in Web.config but IIS Manager is just as easy.

  1. Open IIS Manager and on the left hand tree, left click the site you would like to manage.
  2. Double click the “HTTP Response Headers” icon.
  3. Right click the header list and select “Add”
  4. For the “name” write “X-Content-Type-Options” and for the value “nosniff”

Setting X-Content-Type-Options in Apache

In your httpd.conf file you need to append the following line :
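
Assuming mod_headers is enabled, the line is :

```
Header set X-Content-Type-Options "nosniff"
```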

Setting X-Content-Type-Options in htaccess
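
The same directive works from a .htaccess file (again assuming mod_headers is enabled):

```
Header set X-Content-Type-Options "nosniff"
```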

Setting X-Content-Type-Options in NGINX

In nginx.conf add the following line. Restart NGINX after.
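
```
add_header X-Content-Type-Options "nosniff";
```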