There are two types of people when it comes to database migrations. Those that want a completely automated process in their CI pipeline and have very strong opinions on what tool to use, and then there are those who couldn’t care less and just want to run things by hand (Seriously, those still exist).

This article will focus on the former, and on what tools best fit into the .net core ecosystem. I think most of it isn’t specific to .net core, but there are a few extra considerations to take into account.

First let’s take a look at what I think are the four main principles of a database migration suite (Or any step in a CI pipeline).

Repeatable

You should be able to stand up a database from scratch and have it function exactly as the existing production database does. This may include pre-seeding data (such as admin users) right from the get go. Repeatable also means that if for some reason the migration runs again, it should be “aware” of what it’s already done and not blow up trying to run the same scripts over and over.

Automated

As much as possible, a good database migration strategy should be automated. When trying to reduce the element of human error, you should just remove the humans altogether.

Scalable

There are actually two parts to this. Scalable means that as the database gets bigger, your migration doesn’t start falling over. It also means that the tooling you might be using to generate diffs or actually run the migrations doesn’t start dying either. But scalable has a second meaning as well, and that is scalable with your team. As your team reaches large proportions on a single project (Say close to a dozen developers), managing migrations and potential code conflicts shouldn’t get out of control.

Flexible

For database migrations, flexible is all about whether the tooling is good enough to handle all sorts of database changes, not just schema changes. If you ever split a field, you will have to migrate that data somehow. Other times you may want to mass update data to fix a previous bug, and you want this to be automated along with your database rollout, not run manually and forgotten about.

Database Project/SQL Project in Visual Studio

I remember the first few times I used a database project to update a database schema, it felt like magic. Now it feels like it has its place for showing a database state, but its migration ability is severely lacking. In terms of how a dbproj actually processes an update, it compares the existing database schema to the desired state, and then generates scripts to get it there and runs them. But let’s think about that for a second: there may be many “steps” we want to take to get to the desired schema, not one huge leap forward. It’s less of a step by step migration and more of a blunt tool to get you to the end result the fastest.

Because of the way a dbproj processes migrations, it also makes updating actual data very hard to do. You may have a multi step process in which you want to split a column slowly by creating a temporary field, filling the data into there, and then doing a slow migration through code. That doesn’t really work, because a dbproj is more of a “desired state” migration.

That being said, database projects are very good at showing how a database has evolved over time. Because your database objects become “code” in your source control, you can see how columns/views/stored procedures have been added/removed over time, with the associated check in comment too.

It should also be noted that DB Projects are not cross platform (Both in OS and database). If you are building your .net core project on Linux then database projects are out. And if you are running anything but Microsoft SQL Server, database projects are also out.

Repeatable : No (Migration from A -> B -> C will always work the same, but going straight from A -> C may give you weird side effects in data management).
Automated : Yes
Scalable : Yes
Flexible : No – Data migrations are very hard to pull off in a repeatable way

Entity Framework Migrations

EF Migrations are usually the go to when you are using Entity Framework as your data layer. They are a true migration tool that can be started from any “state” and run in order to bring you to the desired state. Unlike a dbproj, they always run in order so you are never “skipping” data migrations by going from state A -> C. It will still run A -> B -> C in order.

EF Migrations are created using their own fluent API in C# code. For some this feels natural, but others feel limited trying to control a complex database with a subset of SQL commands that have been converted into a DSL. If you think about complex covering indexes where you have multiple columns along with include columns etc, there is probably some very convoluted way to do it via C# code, but for most people it becomes a hassle. Add to that the fact that ORMs in general “hide” what queries they are actually running, so now you are also hiding how your database is actually tuned and created, and it does make some people squirm.
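To give a sense of the DSL, here’s a sketch of a hand-written migration step (the table and column names are invented for the example) :

```csharp
public partial class AddMiddleName : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Add a nullable column to an existing table.
        migrationBuilder.AddColumn<string>(
            name: "MiddleName",
            table: "Users",
            nullable: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(name: "MiddleName", table: "Users");
    }
}
```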

But the biggest issue with EF Migrations is the actual migration running itself. Entity Framework is actually “aware” of the state of the database and really throws its toys out of its cot if things aren’t up to date. It knows what migrations “should” have run, so if there are pending migrations, EF really does have a hissy fit. When you are running blue/green deployments, or you have a rolling set of web servers that need to be compatible with your old and new database schema, EF Migrations simply do not work.

Repeatable : Yes
Automated : Yes
Scalable : No
Flexible : Sort Of – You can migrate data and even write custom SQL in an EF Migration, but for the most part you are limited to the EF fluent API

DB Up

Full disclosure, I love DB Up. And I love it even more now that it has announced .net core support is coming (Or already here depending on when you read this).

DB Up is a migration framework that uses a collection of SQL files, and runs them in order from start to finish. From any existing state to any desired state. Because they are just plain old SQL files, every SQL command is available to you, making DB Up easily the most powerful migration tool in your arsenal for dealing with extremely complex databases. Data migrations also become less of a pain because you simply have every single SQL tool available.
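To give a feel for how little ceremony is involved, a minimal DbUp bootstrap looks something like this (the connection string and script location are placeholders) :

```csharp
using System.Reflection;
using DbUp;

class Program
{
    static int Main()
    {
        // Placeholder connection string for the example.
        var connectionString = "Server=.;Database=MyApp;Trusted_Connection=True;";

        // Runs every embedded .sql script that hasn't already been run, in order.
        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        return result.Successful ? 0 : -1;
    }
}
```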

In a CI scenario, DB Up is able to build a database from scratch to any point in time, meaning that testing your “new” C# code on your “old” database is now a breeze. I can’t tell you how many times this has saved my ass in a large team environment. People are forever removing columns in a SQL database before removing them from code, causing the old code to error out and crash. In large web deployment scenarios where there are crossover periods with a new database schema being in play, but old web code running on top of it, it’s a godsend.

The other great thing about DB Up is that it’s database agnostic (To a degree). It supports Postgres, MySQL, Firebird, SQL Azure and of course SQL Server. This is great news if you are running your .net core code on something like Postgres.

But there is a big flaw when using something like DB Up: it becomes hard to see how a particular table has changed over time. With DB Up all you have is a collection of scripts that, when run in order, give you the desired result, but in source control it becomes difficult to see how something has evolved. In terms of migrations, this isn’t a big deal. But in terms of being able to see an overall picture of your database, not so great.

Repeatable : Yes
Automated : Yes
Scalable : Yes
Flexible : Yes

Hybrid Approach

I tend to take a hybrid approach for most projects. DB Up is far and away the best database migration tool, but it is limited in seeing the overall state of the database. That’s where a database project comes in. It’s great for seeing the overall picture, and you can check back through source control to see how things have changed over time. But its migration facilities are severely limited. With that, it becomes a great fit to use both.

And You?

How do you do your migrations? Do you use one of the above? Or maybe something like Fluent Migrator? Drop a comment below!

With .net core comes a new way to build and run unit tests with a command line tool named “dotnet test”. While the overall syntax of writing tests using MSTest, XUnit or NUnit hasn’t changed, the tooling has changed substantially from what people are used to. There are even a couple of gotchas that are enough to cause migraines if you aren’t careful.

Annoyingly, many of the packages you need to get started are in pre-release, and much of the documentation is locked into the old style DNX tooling. This article focuses on .net Core 1.1 – Preview 2. If you are already running Preview 4 then much of this will still apply, but you will be working with csproj files instead of project.json. Once Preview 4 gets a proper release, this article will be updated to reflect the (likely) many changes that go along with it.

Let’s get started.

The Basics Using MSTest

We will go over the basics using the MSTest framework. If you prefer to use NUnit or XUnit there are sections below to explain how they run within dotnet test, but still read through this section as it will give you the general gist of how things work.

Create a .net core console project in your solution. Now you are probably used to creating a class library for unit tests, or maybe even a “unit test” project. We do not use the “Unit Test” project type as there is no such type for .net core, only the full framework. And we do not use a class library only for the reason that it defaults the target framework to .net standard, not .net core. It’s not that we are going to be building a console application per se, it’s that the defaults for a console application in terms of project.json are closer to what we need.

Once the project is created, if you are on .net core 1.1 or lower, open up project.json. Delete the entire “buildOptions” node. That’s this node :
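For a standard console project it will look something like this :

```json
"buildOptions": {
  "emitEntryPoint": true
}
```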

You can also go ahead and delete the Program.cs file as we won’t be needing it.

Inside that project, you need to install a couple of packages.

You first need to install the MSTest Framework package from here. At this point in time it is still in Pre-Release so you need the “Pre” flag (At some point this will change).
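```
Install-Package MSTest.TestFramework -Pre
```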

And now install the actual runner package from here. Again, it’s in Pre-Release so you will need the “Pre” flag.
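```
Install-Package dotnet-test-mstest -Pre
```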

Now go back to your project.json and add a top level node called “testRunner” with the value “mstest”. All in all, your project.json should look like the following :
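(The package versions below are simply whatever was current at the time – take the latest pre-release of each.)

```json
{
  "version": "1.0.0-*",
  "testRunner": "mstest",
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.1.0"
    },
    "MSTest.TestFramework": "1.0.8-rc2",
    "dotnet-test-mstest": "1.1.2-preview"
  },
  "frameworks": {
    "netcoreapp1.1": {
      "imports": "dnxcore50"
    }
  }
}
```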

Great! Now, for the purpose of this article we will create a simple class called “UnitTests.cs”, and add in a couple of tests. That file looks like the following :
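A couple of trivial placeholder tests are all we need :

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyProject.Tests
{
    [TestClass]
    public class UnitTests
    {
        [TestMethod]
        public void PassingTest()
        {
            Assert.IsTrue(true);
        }

        [TestMethod]
        public void FailingTest()
        {
            Assert.Fail();
        }
    }
}
```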

Open a command prompt in your unit test directory. Run “dotnet test”. You should see something like the following :

And you are done! You now have a unit test project within .net core.

Using NUnit With Dotnet Test

Again, follow the steps above to create a console application, but remove the buildOptions node in your project.json.

Add in your NUnit nuget package by running the following from the package manager console :
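```
Install-Package NUnit
```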

Now you need to add the NUnit test runner. Install the following nuget package. Note that at the time of this article, the package is only in pre-release so you need the Pre flag :
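```
Install-Package dotnet-test-nunit -Pre
```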

Inside your project.json, you need a top level node called “testRunner” with the value of “nunit”. If you’ve followed the steps carefully, you should have a project.json that resembles the following :
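(Again, the package versions are simply the latest available at the time.)

```json
{
  "version": "1.0.0-*",
  "testRunner": "nunit",
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.1.0"
    },
    "NUnit": "3.5.0",
    "dotnet-test-nunit": "3.4.0-beta-3"
  },
  "frameworks": {
    "netcoreapp1.1": {
      "imports": "dnxcore50"
    }
  }
}
```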

For the purpose of this article, I have the following test class :
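```csharp
using NUnit.Framework;

namespace MyProject.Tests
{
    [TestFixture]
    public class UnitTests
    {
        [Test]
        public void PassingTest()
        {
            Assert.That(1 + 1, Is.EqualTo(2));
        }
    }
}
```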

Open a command prompt inside your project folder and run “dotnet test”. You should see something similar to the following :

Using XUnit With Dotnet Test

Again, follow the steps above to create a console application, but remove the buildOptions node in your project.json.

Add in XUnit Core nuget package to your project. In your package manager console run :
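```
Install-Package xunit -Pre
```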

Now add in the XUnit test runner with the following command  in your package manager :
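```
Install-Package dotnet-test-xunit -Pre
```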

Open up your project.json. Add a top level node named “testRunner” with the value “xunit”. All going well, your project.json should resemble something close to the following :
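```json
{
  "version": "1.0.0-*",
  "testRunner": "xunit",
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.1.0"
    },
    "xunit": "2.2.0-beta4-build3444",
    "dotnet-test-xunit": "2.2.0-preview2-build1029"
  },
  "frameworks": {
    "netcoreapp1.1": {
      "imports": "dnxcore50"
    }
  }
}
```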

For this article, I have the following test class :
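```csharp
using Xunit;

namespace MyProject.Tests
{
    public class UnitTests
    {
        [Fact]
        public void PassingTest()
        {
            Assert.Equal(2, 1 + 1);
        }
    }
}
```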

Open up a command prompt in your project folder. Run the command “dotnet test” and you should see something similar to the following :

GZIP is a generic compression method that can be applied to any stream of bits. In terms of how it’s used on the web, it’s easiest to think of it as a way to make your files “smaller”. When applying GZIP compression to text files, you can see anywhere from 70-90% savings; on images or other files that are usually already “compressed”, savings are much smaller or in some cases nothing. Other than eliminating unnecessary resource downloads, enabling compression is the next best way to improve the loading speed of your web app.

Enabling At The Code Level (Dynamic)

You can dynamically GZIP text resources on request. This is great for content that may change (Think dynamic html, text, not JS, CSS). The reason it’s not great for static content such as CSS or JS, is that those files will not change between requests, or really even between deployments. There is no advantage to dynamically compressing these each time they are requested.

The code you need to make this work is contained in a nuget package. So you will need to run the following from your package manager console.
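```
Install-Package Microsoft.AspNetCore.ResponseCompression
```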

In your ConfigureServices method in your startup.cs, you need to add a single line to add the dependencies that GZIP compression is going to need.
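```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression();
    services.AddMvc();
}
```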

Note that there are a couple of options you may wish to use in this call.
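For example (adding svg to the compressed mimetypes here is purely illustrative) :

```csharp
// Requires "using System.Linq;" for Concat.
services.AddResponseCompression(options =>
{
    options.EnableForHttps = true;
    options.MimeTypes = ResponseCompressionDefaults.MimeTypes.Concat(new[] { "image/svg+xml" });
});
```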

The EnableForHttps flag does exactly what it says. It enables GZip compression even over SSL (By default this is switched off). The second turns on (Or off in some cases) GZip of certain mimetypes. The default list at the time of writing is the following (Taken from the Github source here) :
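```
text/plain
text/css
application/javascript
text/html
application/xml
text/xml
application/json
text/json
```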

Then in your configure method of startup.cs, you then just need a single line to get going.
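```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseResponseCompression();
    app.UseMvc();
}
```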

Note that the order is very important! While in this case we only have two pieces of middleware that don’t cause issues, other middleware (such as the static content middleware) will actually send the response back to the user before GZip has occurred if it comes first in the list. For safety’s sake, just keep UseResponseCompression as the first middleware in your list.

To test you have everything up and running just fine, try loading your site and watching the requests go past. You should see the Content-Encoding header come back as “Gzip”. In Chrome it looks like the following :

Enabling At The Code Level (Static)

The issue with enabling GZip compression inside .net core is that if your content is largely static (As in it doesn’t change), then it’s being re-compressed over and over again. When it comes to javascript and CSS, there is really no reason for this to be done on the fly. Instead we can GZip these files beforehand and deploy them “pre” GZip’d.

For this you will need to use a task runner such as Gulp to GZIP your static files in your pipeline. There is a very simple gulp package for this named “gulp-gzip”. Have a search around and get used to how it works. But your gulp file should look something similar to the following :
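(A minimal sketch – the paths are assumptions, and gulp-gzip simply tacks a .gz extension onto whatever flows through it.)

```javascript
var gulp = require('gulp');
var gzip = require('gulp-gzip');

gulp.task('compress', function () {
    return gulp.src(['wwwroot/js/*.js', 'wwwroot/css/*.css'])
        .pipe(gzip())
        .pipe(gulp.dest('wwwroot/dist'));
});
```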

This is a very simple case, you will need to know a bit more about Gulp to customize it to your needs.

However importantly, you need to map Gzip files to actually return their real result. First install the static file middleware. Run the following from your Package Manager console.
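```
Install-Package Microsoft.AspNetCore.StaticFiles
```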

Now in your Configure method in startup.cs, you need to add some code like so :
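This is just one way of wiring it up, but the idea is the following :

```csharp
app.UseStaticFiles(new StaticFileOptions
{
    // .gz is not a "known" extension, so allow it and give it a sensible default.
    ServeUnknownFileTypes = true,
    DefaultContentType = "application/gzip",
    OnPrepareResponse = context =>
    {
        // A pre-compressed javascript file : tell the browser it's javascript
        // that just happens to be gzip encoded.
        if (context.File.Name.EndsWith(".js.gz"))
        {
            context.Context.Response.Headers["Content-Type"] = "application/javascript";
            context.Context.Response.Headers["Content-Encoding"] = "gzip";
        }
    }
});
```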

What does this do? Well first it says that serving static files without running the MVC gamut is A-OK. But next it says, if you come across a file that ends in .js.gz, you can send it out, but tell the browser that it’s actually javascript that just happens to be encoded with gzip. Otherwise it just returns the file as a plain old GZip file that a user could download, for example.

Next on your webpage, you actually need to be referencing the .gz file not the .js file. So it should look something like so :
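```html
<script src="~/dist/site.js.gz"></script>
```

(The path is whatever your gulp task outputs.)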

And that’s it, your static content is now compressed!

Enabling At The Server Level

And lastly, you can of course always enable GZip compression at the web server level (IIS, NGinx, Apache etc). For this it’s better to consult the relevant documentation. You may wish to do this as it keeps the configuration out of code (And allows a bit more easier access for configuration on the fly).

One of the first things people notice when making the jump from the full .net Framework to .net Core is the inbuilt dependency injection. Having a DI framework right there ready to be used means less time trying to set up a boilerplate project.

Interestingly, a pattern that emerged from this was one whereby an extension of the service collection was used to add services. For example, you might see something like this :
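```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
}
```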

But what is actually inside the “AddMvc” call? Jumping into the source code (And going a few more levels deeper) it actually looks like this :
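(Condensed here for illustration rather than copied verbatim – the exact registrations change between versions.)

```csharp
// Deep inside AddMvc/AddMvcCore it boils down to plain registrations :
services.TryAddSingleton<IActionInvokerFactory, ActionInvokerFactory>();
services.TryAddSingleton<IActionSelector, ActionSelector>();
// ...and dozens more.
```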

So under the hood it’s just doing plain old additions to the service collection, but it’s moved away into a nice tidy “add” call. The pattern might have been around for a while, but it was certainly the first time I had seen it. And it’s certainly easy on the eye when looking into your startup.cs. Instead of hundreds of lines of adding services (Or private method calls), there are just a couple of extensions being used.

Building Your Own Extension

Building your own extension is simple. A glimpse into the project structure I’ll use for this demo is here :

All pretty standard. I have a Services folder with MyService and its interface inside it. Inside there we have a file called “ServicesConfiguration”. The contents of that file looks like the following :
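(The extension method name is whatever you like – “AddCustomServices” is just my pick for the demo.)

```csharp
public static class ServicesConfiguration
{
    public static void AddCustomServices(this IServiceCollection services)
    {
        services.AddTransient<IMyService, MyService>();
    }
}
```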

So this has a single method that is marked as static (important!). The method accepts a param of IServiceCollection as “this”, also very important. Inside, we simply register our services as we normally would anywhere else.

Now if we go to our startup.cs, we can add a call to this method like so :
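```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddCustomServices();
}
```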

Obviously as we add more services, our startup.cs doesn’t change; it stays nice and clean. We simply add more to our ServicesConfiguration.

…But Is It A Good Pattern?

There are a few caveats I feel when using this pattern. And both are to do with building libraries or sharing code.

You are locked into the ServiceCollection DI

This isn’t necessarily true since another developer could simply choose not to call the “AddXYZServices” method, but it’s likely going to cause a lack of documentation around how things work. I would guess that most documentation would boil down to “Now just add services.AddXYZServices() and you are done” rather than actually go into detail about the interfaces and classes that are required.

IServiceCollection is not .net Standard

IServiceCollection is not part of the .net standard. Therefore if you wish to create a library that can be used cross .net platform (e.g. in Xamarin, .net Framework, Mono etc) you cannot use these sorts of extension methods.

Clickjacking, XSS and CSRF are exploits that have been around for 15+ years now and still form the basis for many vulnerabilities on the web today. If you spend any time around bug bounty programs you will notice similar patterns with these exploits: many could have been prevented with just a few HTTP headers in place. A website that goes into production without these is asking for an exploit. Even if it has “other mitigations” in place, on most platforms it takes all of 5 minutes to add a custom header which could save your site from an unfortunate vulnerability.

These headers should be on any “go live” checklist unless there is a specific reason why they can’t be (For example, if it’s an internal site that is going to be framed elsewhere, you may not want to disallow frames). While this site focuses on .net core, these headers are language and web server agnostic and can be used on any site.

X-Frame-Options

X-Frame-Options tells the browser that if your site is placed inside an HTML frame, it should not render anything. This is super important when trying to protect yourself against clickjacking attempts.

In the early days of the web, Javascript frame breakers were all the rage to stop people from framing your site. Ever seen this sort of code?
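```javascript
if (top != self) {
    top.location = self.location;
}
```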

It was almost a staple in the early 2000’s. But like most things Javascript, people found ways to get around it and invented a frame breaker breaker. Other issues included plain human error, i.e. ensuring that the frame breaker code was actually on every page of your site. That one page you forgot to include the javascript snippet on? Yeah, that’s the exact page someone used for clickjacking. Happens all the time!

An HTTP header that disallows frames site wide, with full browser support, should be your go-to even if you want to use a javascript framebreaker side by side.

X-Frame-Options is most commonly set to “SAMEORIGIN”, meaning the website can frame itself. But unless you actually have a need for this, the “DENY” option is your best bet.

A guide to using X-Frame-Options in .net core can be found here.

X-XSS-Protection

X-XSS-Protection is a header that activates a browser’s own XSS protection. All modern browsers have inbuilt XSS filtering capabilities that try and catch XSS vulnerabilities before the page is fully rendered to you. By default these are turned on in the browser, but a user may go ahead and switch them off. By using the X-XSS-Protection header, you can tell the browser to ignore what the user has done and enforce the inbuilt filter.

By using the “1;mode=block” setting, you can go further and stop the entire document from rendering if an XSS attack is found anywhere (By default it removes only the portion of the document that it detects as XSS).

In fact, “1;mode=block” should be the only setting you use for X-XSS-Protection. Using only “1” could actually be a vulnerability in and of itself. For more, read here : http://blog.innerht.ml/the-misunderstood-x-xss-protection/

Another thing to note is that the browser’s XSS filters are routinely bypassed by new ways of sneaking things through. It’s never going to be 100%. But it’s a base level of protection that you would be silly not to utilize.

A guide to using X-XSS-Protection in .net core can be found here.

X-Content-Type-Options

X-Content-Type-Options is a header I often see missed because in and of itself it’s not an exploit per se, but it’s usually paired up with another vulnerability to create something much more damaging. A good example is the JSON Array Vulnerability (Now fixed) that was around for a very long time. It allowed browsers to run a JSON array as javascript, something that would not be allowed had X-Content-Type-Options been set.

The header stops a browser “guessing” or ignoring what the mime type of a file is and acting on it accordingly. For example if you reference a resource in a <script> tag, that resource has to have a mime-type of javascript. If instead it returns text/plain, your browser will not bother running it as a script. Without the header, your browser really doesn’t care and will attempt to run it anyway.

A guide to using X-Content-Type-Options in .net core can be found here.

Content Security Policy

While you are reviewing security, why not take a look at implementing a Content Security Policy (CSP). While it doesn’t have as broad browser support as the previous headers mentioned, it is the future of securing browsers and wouldn’t hurt to add. After all, when it comes to security you want to close all the doors, not just the ones you think people will actually walk through.

X-Content-Type-Options is a header that tells a browser to not try and “guess” what a mimetype of a resource might be, and to just take what mimetype the server has returned as fact.

At first this header seems kinda pointless, but it’s one of the simplest ways to block attack vectors that use javascript. For example, for a long time browsers were susceptible to a very popular JSON Array Vulnerability. The vulnerability essentially allowed an attacker to make your browser request a non-javascript file from a site, but run it as javascript, and in the process leak sensitive information.

For a long time, anything within a script tag like so :
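```html
<script src="http://somesite.com/not-a-javascript-file"></script>
```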

Would be run as javascript, irrespective of whether the returned content-type was javascript or not. With X-Content-Type-Options turned on, this is not the case: if the file requested does not have a javascript mime-type, it is not run and you will see a message similar to the one below.

Refused to execute script from ‘http://somesite.com/not-a-javascript-file’ because its MIME type (‘application/json’) is not executable, and strict MIME type checking is enabled.

On its own the header might not block much, but when you are securing a site you close every door possible.

Another popular reason for using this header is that it stops “hotlinking” of resources. Github recently implemented this header so that people wouldn’t reference scripts hosted in repositories.

X-Content-Type-Options Settings

X-Content-Type-Options: nosniff
This is the only setting available for this header. It’s an on/off type header with no other “settings” available.

Setting X-Content-Type-Options At The Code Level

Like other headers, the easiest way is to use a custom middleware like so :
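```csharp
app.Use(async (context, next) =>
{
    context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
    await next();
});
```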

Setting X-Content-Type-Options At The Server Level

When running .net core in production, it is very unlikely that you will be using Kestrel without a web server in front of it (In fact, you 100% should not be), so some prefer to set headers at the server level. If this is you then see below.

Setting X-Content-Type-Options in IIS

You can do this in Web.config but IIS Manager is just as easy.

  1. Open IIS Manager and on the left hand tree, left click the site you would like to manage.
  2. Double click the “HTTP Response Headers” icon.
  3. Right click the header list and select “Add”
  4. For the “name” write “X-Content-Type-Options” and for the value “nosniff”

Setting X-Content-Type-Options in Apache

In your httpd.conf file you need to append the following line :
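```
Header set X-Content-Type-Options "nosniff"
```

(This requires mod_headers to be enabled.)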

Setting X-Content-Type-Options in htaccess
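Assuming mod_headers is available to you, the same directive works from a .htaccess file :

```
Header set X-Content-Type-Options "nosniff"
```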

Setting X-Content-Type-Options in NGINX

In nginx.conf add the following line. Restart NGINX after.
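```
add_header X-Content-Type-Options "nosniff";
```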

API Versioning is either something you love or you hate. It’s great for giving developers the ability to improve and iterate on APIs without breaking contracts. At times the stagnation of innovation on an API is simply because of legacy decisions that cannot be reversed, especially on public APIs. But versioning can quickly get out of control, especially with custom implementations.

Microsoft has attempted to alleviate some of the pain with its own versioning package which can be used in ASP.net core (And other .net platforms). It can be a bit tricky to get going and it takes a few “aha” moments to get everything sitting right. So let’s see how it works.

Setup

First, you will need to install the following nuget package from your package manager console.
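```
Install-Package Microsoft.AspNetCore.Mvc.Versioning
```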

In the ConfigureServices method of your startup.cs, you need to add the API Versioning services like so :
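```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddApiVersioning(options =>
    {
        options.ReportApiVersions = true;
        options.AssumeDefaultVersionWhenUnspecified = true;
        options.DefaultApiVersion = new ApiVersion(1, 0);
    });
}
```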

The ReportApiVersions flag is optional, but it can be useful. It allows the API to return supported versions in a response header. When calling an API with this flag on, you will see something like the following :
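```
api-supported-versions: 1.0, 2.0
```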

The flag for “AssumeDefaultVersionWhenUnspecified” (Quite the mouthful) can also be handy, especially when migrating an API to versioning. Without it, you will break any existing clients that aren’t specifying an API version at all. If you are getting the error “An API version is required, but was not specified.”, this will be why.

The DefaultApiVersion flag is also not needed in this case because it defaults to 1.0. But I thought it helpful to include it as a reminder that you can “auto update” any clients that aren’t specifying a default API version to the latest. There are pros and cons to doing this, but the option is there if you want it.

URL Query Based Versioning

Take a look at the following basic controllers. They have been decorated with an ApiVersion attribute, and their classes have been updated so they don’t clash with each other.
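```csharp
// Controller names are illustrative - only the route and ApiVersion matter here.
[ApiVersion("1.0")]
[Route("api/home")]
public class HomeV1Controller : Controller
{
    [HttpGet]
    public string Get() => "Version 1";
}

[ApiVersion("2.0")]
[Route("api/home")]
public class HomeV2Controller : Controller
{
    [HttpGet]
    public string Get() => "Version 2";
}
```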

You’ll notice that the routes are actually the same, so how does ASP.net core determine which class to use?

By default, the versioning library uses a query string parameter named “api-version” to specify a version.

So when I call /api/home?api-version=2.0, I am returned “Version 2”. When I call /api/home?api-version=1.0 (Or no version at all), I am returned “Version 1”.

URL Path Based Versioning

Query string parameters are nice and easy but don’t always look the best (And they can be a pain for an external client to always tag on). In a big long query they can be missed in the sea of query parameters. A common alternative is to put the version in the URL path so it’s always visible at the start of the URL.

Take the following two controllers :
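```csharp
// The {version:apiVersion} route constraint comes with the versioning package.
[ApiVersion("1.0")]
[Route("api/{version:apiVersion}/home")]
public class HomeV1Controller : Controller
{
    [HttpGet]
    public string Get() => "Version 1";
}

[ApiVersion("2.0")]
[Route("api/{version:apiVersion}/home")]
public class HomeV2Controller : Controller
{
    [HttpGet]
    public string Get() => "Version 2";
}
```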

Now when I call /api/1.0/home, I am returned Version 1. And /api/2.0/home will return me version 2.

Http Header Based Versioning

Specifying a version via an HTTP header is a very common way of using API versioning. It allows your URLs to stay clean without cluttering them with version information.

The defaults in the aspnet versioning package don’t actually support reading the version from a header. You need to do a bit more work.

In your ConfigureServices method in startup.cs, you need to add the option of an ApiVersionReader to your AddApiVersioning call. Like so :
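```csharp
services.AddApiVersioning(options =>
{
    options.ApiVersionReader = new HeaderApiVersionReader() { HeaderNames = { "x-api-version" } };
});
```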

With this call I have told my API that the header “x-api-version” is now how I define an API version.

One word of warning. The above makes it so that you cannot use query string versioning anymore. So once you set the version reader to use the header, you can no longer specify the version like so /api/home?api-version=2.0

If you wish to use both, you need to use the aptly named “QueryStringOrHeaderApiVersionReader” which frankly is a ridiculous name but I guess it does what it says on the tin.
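Wiring it up looks something like this (double check the exact constructor shape for the package version you are on) :

```csharp
services.AddApiVersioning(options =>
{
    options.ApiVersionReader = new QueryStringOrHeaderApiVersionReader("api-version")
    {
        HeaderNames = { "x-api-version" }
    };
});
```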

Version A Single Action

There may come a time when you want to create a new version of only a single action, not the entire controller (In fact there will be plenty of times). There is a way to do this, but a word of warning: it means at some point in the future you will have a mismatch of controllers, actions and versions, which can be hard to manage.

But to version a single action, you can do something that looks like the following :
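```csharp
[ApiVersion("1.0")]
[ApiVersion("2.0")]
[Route("api/home")]
public class HomeController : Controller
{
    [HttpGet]
    public string Get() => "Version 1";

    [HttpGet]
    [MapToApiVersion("2.0")]
    public string GetV2() => "Version 2";
}
```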

Essentially we still need to tell the controller that it supports 2.0, but within the controller we can use the “MapToApiVersion” attribute to tell an individual action to be used with a specific version.

Deprecating A Version

You are able to deprecate an API version. Note that this does not “delete” the version, it only marks it as deprecated so a consumer can know. It’s set on the attribute itself :
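```csharp
[ApiVersion("1.0", Deprecated = true)]
[ApiVersion("2.0")]
[Route("api/home")]
public class HomeController : Controller
{
    [HttpGet]
    public string Get() => "Version 1";
}
```

When someone calls your API, they will see a header returned along the lines of :

```
api-deprecated-versions: 1.0
```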

But the point is, they can still call that endpoint/version. It does not limit it in any way.

Conventions Based Setup

Up to now we have defined the API versioning with attributes. This is fine, but it can get out of hand when you have many controllers with different versions and no way to have an “overview” of the versions you have in play. It’s also a little limiting in terms of configuration.

Enter “Conventions” setup. It’s an easy way to define your versioning when adding the APIVersioning services. Take a look at the following configuration :
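(The controller names here refer back to the earlier examples.)

```csharp
services.AddApiVersioning(options =>
{
    options.Conventions.Controller<HomeV1Controller>().HasApiVersion(new ApiVersion(1, 0));
    options.Conventions.Controller<HomeV2Controller>().HasApiVersion(new ApiVersion(2, 0));
});
```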

Pretty self explanatory, and now we don’t have to put attributes on our controllers anymore and the configuration is all stored in the same location.

Accessing HTTP Version

There may come a time when you actually want to know what API Version was requested. While I would recommend not going down the route of large switch statements to find services (Or passing them into factories etc), there may be an occasion when you have a legitimate use for it.

Luckily, HttpContext has an extension method called “GetRequestedApiVersion” that will return you all the version info.
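```csharp
[HttpGet]
public string Get()
{
    var version = HttpContext.GetRequestedApiVersion();
    return $"You asked for {version}";
}
```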

Opting Out Of Versioning

A scenario that is becoming more and more common is having a single project that works as an API and an MVC app all in one. When you add versioning to your API, your MVC app suddenly becomes versioned also. But there is a way to “opt out” if you will.

Take the following example :
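```csharp
// The route here is just an example of a non-versioned endpoint.
[ApiVersionNeutral]
[Route("api/status")]
public class StatusController : Controller
{
    [HttpGet]
    public string Get()
    {
        return HttpContext.GetRequestedApiVersion()?.ToString() ?? "No version";
    }
}
```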

If you pass in a version of 2.0, this action will actually return 2.0. It still knows the version and can read it, it simply doesn’t care. Thus if you “force” a version number or minimum version elsewhere, this controller is unaffected.

Anything I missed?

Have I missed something? Feel free to comment below.

HttpOnly is a flag that can be used when setting a cookie to block access to the cookie from client side scripts. Javascript for example cannot read a cookie that has HttpOnly set. This helps mitigate a large part of XSS attacks as many of these attempt to read cookies and send them back to the attacker, possibly leaking sensitive information or worst case scenario, allowing the attacker to impersonate the user with login cookies.

If you are interested in reading more on the background of HttpOnly cookies, OWASP has a great article here explaining them in more detail : https://www.owasp.org/index.php/HttpOnly

Now, onto how these can be used in .net core.

Defaults Are Gone

An important thing to note in .net core compared to the full .net framework is that while previously you were able to set global defaults, you can no longer do this. For example, in .net framework you were able to add the following to your web.config :
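```xml
<system.web>
  <httpCookies httpOnlyCookies="true" />
</system.web>
```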

This would make sure that any cookies set by your application were HttpOnly. Obviously web.config is more or less out the window with .net core (Although if you are hosting on IIS you can still use it), and Microsoft hasn’t yet added a global default that can be set. This may change in the future however, because it was definitely a handy setting.

Setting A Cookie Manually

When setting a cookie manually (e.g. against the HttpContext), there is an easy CookieOptions object that you can use to set HttpOnly to true. It ends up looking a bit like this :
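```csharp
HttpContext.Response.Cookies.Append("MyCookie", "SomeValue", new CookieOptions
{
    HttpOnly = true
});
```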

When Using Cookie Authentication

Microsoft have a middleware that uses cookies for Authentication. If you were to use it in your app, you add it in the Configure method of your startup.cs.
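For the 1.x era middleware that’s something like :

```csharp
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    LoginPath = new PathString("/Account/Login")
});
```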

If you are using CookieAuthentication in this way, HttpOnly cookies will be used by default. (You can check the source code here on Github). If you actually need this functionality off (Dangerous, but it’s a possibility), then you can override the functionality like so :
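```csharp
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    CookieHttpOnly = false // Dangerous : scripts can now read the auth cookie.
});
```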

When Using XYZ Middleware

Because there is no global option for HttpOnly cookies, when using a third party middleware you are at their mercy as to how they set their cookies and whether they are HttpOnly or not. In some cases they may not be HttpOnly when you want them to be, and even vice versa when you actually need the cookie to be accessible. If you are building your own middleware that you intend to share as a library, the best option is leaving the default as HttpOnly set to true, and allowing the user to override it if they really feel the need.

You’ve upgraded your latest project to the very latest version of .net core, but you can’t seem to find the correct SMTPClient namespace? It used to live in System.Net.Mail, but it’s just gone *poof*. What gives?

System.Net.Mail Is Not Ported (yet)

If you are on a .net core version 1.1 or less (Or you are working on a .net platform that implements the .net standard 1.6 or less), you will not have access to System.Net.Mail, which by extension means you will not have access to the SmtpClient class, or anything to read POP3/IMAP messages etc. They were not ported across (yet). Bummer!

However, on the Microsoft Github there is a pull request here for the port of System.Net.Mail that looks like it made it into .net Standard 2.0. That would point to the next version of .net core having SmtpClient back in. The release date is looking like early 2017.

So In The Meantime?

In the meantime many people are using MailKit. It’s a very powerful library with a very similar interface/API to the original .net System.Net.Mail. In most cases you should be able to plug it in without too much hassle.
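To give an idea of the shape of the API, sending a message looks something like this (the host, port and credentials are placeholders) :

```csharp
using MailKit.Net.Smtp;
using MimeKit;

public void SendMail()
{
    var message = new MimeMessage();
    message.From.Add(new MailboxAddress("From Name", "from@example.com"));
    message.To.Add(new MailboxAddress("To Name", "to@example.com"));
    message.Subject = "Hello";
    message.Body = new TextPart("plain") { Text = "Sent via MailKit" };

    using (var client = new SmtpClient())
    {
        client.Connect("smtp.example.com", 587, false);
        client.Authenticate("username", "password");
        client.Send(message);
        client.Disconnect(true);
    }
}
```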

X-XSS-Protection is a header that can be set on a webpage to activate “limited” XSS protection in certain browsers. At the time of writing, the header is available in all modern browsers except Firefox.

If you aren’t up to speed on what XSS is, have a quick read of this wikipedia article first then come back.

Great, now let’s first take a look at what browsers do out of the box. Most browsers use static analysis to detect XSS attacks. They are rather vague about how they offer this protection, but usually it’s protecting against the most basic attacks. A good writeup on how Chrome’s protection has evolved over time (And is still getting bypassed) can be found here : https://blog.securitee.org/?p=37. Hopefully that should give you an idea of the sort of things the browser will natively protect against.

Now usually the browser has the XSS filter turned on by default, but using the header should enforce it. There are also a couple of other values to use to extend the functionality of the header.

X-XSS-Protection Settings

X-XSS-Protection: 0
Disables XSS protection (Handy when you may want to test out XSS on your own)

X-XSS-Protection: 1
Enables XSS protection. If XSS is detected, the browser attempts to filter or sanitize the output, but still renders it for the most part.

X-XSS-Protection: 1; mode=block
Enables XSS protection and if XSS is detected, the browser stops rendering altogether.

X-XSS-Protection: 1; report=<reporting-uri>
Report works only in Chromium browsers (But can be used to enforce protection in other browsers). You can have a callback that lets you know about XSS attempts.

Setting X-XSS-Protection at the Code Level

Similar to adding any other default header to your app, you can add a Use statement to the Configure method in your startup.cs like so :
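```csharp
app.Use(async (context, next) =>
{
    context.Response.Headers.Add("X-Xss-Protection", "1; mode=block");
    await next();
});
```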

And you’re done!

Setting X-XSS-Protection at the Server Level

If you are using IIS or any other web server in front of Kestrel, you can also set headers there. There are different requirements for each server.

Setting X-XSS-Protection in IIS

The best way to do this, if you are just using IIS to forward requests to Kestrel (Or even if this is actually being hosted in IIS), is in IIS Manager.

  1. Open IIS Manager and on the left hand tree, left click the site you would like to manage.
  2. Double click the “HTTP Response Headers” icon.
  3. Right click the header list and select “Add”
  4. For the “name” write “X-XSS-Protection” and for the value write in your desired option, e.g. “1; mode=block”.

Setting X-XSS-Protection in Apache

In your httpd.conf file you need to append the following line :
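```
Header set X-XSS-Protection "1; mode=block"
```

(This requires mod_headers to be enabled.)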

Setting X-XSS-Protection in htaccess
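Assuming mod_headers is available to you, the same directive works from a .htaccess file :

```
Header set X-XSS-Protection "1; mode=block"
```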

Setting X-XSS-Protection in NGINX

In nginx.conf add the following line. Remember to restart the service after!
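```
add_header X-XSS-Protection "1; mode=block";
```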