Middleware is the new “pipeline” for requests in ASP.net core. Each piece of middleware can process part or all of the request, and then either choose to return a result or pass the request down to the next piece of middleware. In the full ASP.net Framework you were able to specify “HTTP Modules” that acted somewhat like a pipeline, but it was often hard to see how the pieces fit together.

Anywhere you would normally write an HTTP Module in the full ASP.net Framework is where you should probably now be using middleware. In fact, for most places you would normally use MVC filters, you will likely find it easier or more convenient to use middleware.

This diagram from Microsoft does a better job than I could of showing how the pipeline works. Do notice however that you can do work both before and after passing down to the next middleware. This is important if you wish to affect responses as they are going out, rather than requests as they are coming in.

Basic Configuration

The easiest way to get started with middleware is in the Configure method of your startup.cs file. This is where you “chain” your pipeline together. It may end up looking something like the following :
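Something along these lines (a sketch — the exact middleware you register will depend on your app) :

    public void Configure(IApplicationBuilder app)
    {
        // Static files get first crack at the request.
        app.UseStaticFiles();
        // If no file matched, MVC takes over.
        app.UseMvc();
    }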

Now, an important thing to remember is that ordering is important. In this pipeline the static files middleware runs first. This middleware can choose to pass on the request (e.g. not process the response in its entirety), or it can choose to push a response to the client and not call the next middleware in the chain. That last part is important because it will often trip you up if your ordering is not correct (e.g. if you wish to authenticate/authorize someone before hitting your MVC action).

App.Use

App.Use is going to be the most common pipeline building block you will come across. It allows you to add something to the response and then pass the request to the next middleware in the pipeline, or you can force a short circuit and return a result without passing to the next handler.

To illustrate, consider the following :
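A sketch of that two-step pipeline (the “nosniff” header value is just an example of work done on the way in) :

    app.Use(async (context, next) =>
    {
        // Work done before calling next happens as the request comes in.
        context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
        // Pass control to the next middleware in the chain.
        await next();
    });

    app.Use(async (context, next) =>
    {
        // No call to next() here, so the pipeline ends with this response.
        await context.Response.WriteAsync("Hello World!");
    });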

In this example, the first handler adds a response header of X-Content-Type-Options and then passes the request to the next handler. The next handler writes a response of “Hello World!” and then the pipeline ends. Because that second handler never calls Next(), we don’t pass to another handler at all and execution finishes there. In this way we can short circuit the pipeline if some criteria hasn’t been met, for example if an authorization key is not present.

App.Run

You may see App.Run appear around the place, it’s like App.Use’s little brother. App.Run is an “end of the line” middleware. It can handle generating a response, but it doesn’t have the ability to pass the request down the chain. For this reason you will see App.Run middleware at the end of the pipeline and nowhere else.
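A minimal sketch :

    app.Run(async context =>
    {
        // Run takes no "next" delegate. The pipeline always ends here.
        await context.Response.WriteAsync("Hello World!");
    });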

App.Map

App.Map is used when you want to build a mini pipeline only for a certain URL. Note that the URL given is used as a “starts with”. So commonly you might see this in middleware for locking down an admin area. The usage is pretty simple :
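    app.Map("/helloworld", builder =>
    {
        builder.Run(async context =>
        {
            // Only runs when the path starts with /helloworld.
            await context.Response.WriteAsync("Hello World!");
        });
    });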

So in this example when someone goes to /helloworld, they will be given the Hello World! message back.

App.MapWhen

This is similar to Map, but allows for much more complex conditionals. This is great if you are checking for a cookie or a particular query string, or need more powerful Regex matching on URLs. In the example below, we are checking for a query string of “helloworld”, and if it’s found we return the response of Hello World!
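    app.MapWhen(
        context => context.Request.Query.ContainsKey("helloworld"),
        builder =>
        {
            builder.Run(async context =>
            {
                await context.Response.WriteAsync("Hello World!");
            });
        });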

Building A Middleware Class

And after all that, we’re going to toss it all in the bin and create a middleware class! Middleware classes use quite a different syntax to all of the above, but really they just group everything together in a single reusable pattern.

Let’s say I want to build a class that will add common security headers to my site. If I right click inside my .net core web project and select “Add Item”, there is actually an option to add a middleware class template.

If you can’t find this, don’t fear, because there is absolutely nothing special about using the template. It just provides the basic structure of how your class should look, but you can copy and paste it from here into a plain old .cs file and it will work fine. Our security middleware looks something like the following :
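A sketch of such a class (the specific headers added here are just common examples — pick whichever your site needs) :

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;

    public class SecurityHeadersMiddleware
    {
        private readonly RequestDelegate _next;

        public SecurityHeadersMiddleware(RequestDelegate next)
        {
            _next = next;
        }

        public async Task Invoke(HttpContext context)
        {
            // Add our security headers, then pass the request on down the pipeline.
            context.Response.Headers.Add("X-Frame-Options", "SAMEORIGIN");
            context.Response.Headers.Add("X-XSS-Protection", "1; mode=block");
            context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
            await _next(context);
        }
    }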

Very simple. In our Invoke method we do all the work we want to do and then pass the request on to the next handler. Again, we can short circuit this process by returning an empty Task instead of calling next.

Another thing to note is that this is instantiated using the service container. That means if you have a need to access a database or call in settings from an external source, you can simply add the dependency in the constructor.

Important : App middleware is constructed at startup. If you require scoped dependencies or anything other than singletons, do not inject your dependencies into the constructor. Instead, add them as parameters on the Invoke method and .net core will work it out per request.

The Extensions part is optional, but it does allow you to write code like this :
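A sketch of the extension class (names here follow the middleware class above), which just wraps UseMiddleware :

    public static class SecurityHeadersMiddlewareExtensions
    {
        public static IApplicationBuilder UseSecurityHeaders(this IApplicationBuilder builder)
        {
            return builder.UseMiddleware<SecurityHeadersMiddleware>();
        }
    }

And then in your Configure method :

    app.UseSecurityHeaders();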


Straight into the important links.

Download here
Release Notes here

There are a tonne of blogs out there talking about the release, so let’s just jump straight into the 5 features that I think developers will be most excited about. There is plenty of new stuff coming out, but I think these are the features that will impact developers’ lives on an almost daily basis.

Inbuilt Unit Testing On The Fly

There have been add-ons for years that run unit tests as you type and let you know if your unit tests are falling apart, and now Microsoft has added that support to Visual Studio natively! Better yet, Microsoft have outdone themselves by supporting not only MSTest, but XUnit and NUnit as well.

Run To Click

A new feature in the debugger : when stopped at a breakpoint in your code, you can now continue execution to the point where your mouse clicks, rather than having to place a breakpoint every few lines and step through.

Redgate SQL Search

Built into every version of Visual Studio (including the free Community version) is Redgate’s SQL Search product. How many times have you wanted to know whether a column is referenced anywhere, only to be faced with the daunting task of searching through a massive lump of Stored Procedures? Redgate SQL Search takes care of this by letting you do a text search across every database object. It really is an awesome product and it’s great that it’s now part of Visual Studio.

Intellisense Filter

Intellisense now has a tray that allows you to filter the member list by type. This means that when you are working with an unfamiliar library (or a new code base) and you know that “there should be a method that does XYZ”, you can now filter to only methods and ignore all properties. This feature alone makes upgrading to Visual Studio 2017 worth it. Note that the feature is not enabled by default. To enable it, go to Tools > Options > Text Editor > [C# / Basic] > IntelliSense and check the options for filtering and highlighting.


Use Of Git.exe

Up until Visual Studio 2017, the Git implementation in VS was built using Github’s libgit2. For most people this was fine, but there were a few features available on the command line that weren’t available inside Visual Studio, most notably the use of SSH keys. While most people who already use Visual Studio’s Git tools are probably happy with the existing functionality, it’s always good to have the tools at parity with what’s already out there.

What Else?

Something else you are excited about? Drop a comment below and let us know!

In a previous post, we talked about how to use a Redis Cache in .net Core. In most large scale scenarios, Redis is going to be your go-to. But for tiny sites that have a single web instance, or for sites that really only need a local cache, InMemory caching is much easier to get set up with and obviously does away with wrangling a Redis server.

Interestingly, .net Core currently offers two ways to implement a local in memory cache. We’ll take a look at both.

IMemoryCache

The first option is to use what is simply known in .net core as IMemoryCache. It’s similar to what you may have used in standard ASP.net in terms of storing an object in memory by a key.

First open up your startup.cs. In your ConfigureServices method you need to add a call to “AddMemoryCache” like so :
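    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMemoryCache();
        services.AddMvc();
    }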

In your controller or class where you wish to use the memory cache, add a dependency into the constructor. The two main methods you will likely be interested in are “TryGetValue” and “Set”. Both should be rather self-explanatory in the following code :
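A sketch (the key name and cached value are just placeholders) :

    public class HomeController : Controller
    {
        private readonly IMemoryCache _memoryCache;

        public HomeController(IMemoryCache memoryCache)
        {
            _memoryCache = memoryCache;
        }

        [HttpGet]
        public string Get()
        {
            // Return the cached value if we already have one.
            if (_memoryCache.TryGetValue("MyKey", out string cachedValue))
                return cachedValue;

            // Otherwise build the value and cache it for 5 minutes.
            var value = DateTime.Now.ToString();
            _memoryCache.Set("MyKey", value, TimeSpan.FromMinutes(5));
            return value;
        }
    }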

That’s the basics. Now for a couple of nice things this implementation of a memory cache has that won’t be available in the next implementation I will show you.

PostEvictionCallback
An interesting feature is the PostEvictionCallback delegate. This allows you to register an action to be called every time something “expires”. In use, it will look something like the following :
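    var options = new MemoryCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
    };

    options.RegisterPostEvictionCallback((key, value, reason, state) =>
    {
        // Fires whenever this entry is evicted (expired, removed etc.).
        Console.WriteLine($"Entry {key} was evicted. Reason : {reason}");
    });

    _memoryCache.Set("MyKey", "MyValue", options);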

Now every time a cache entry expires, you will be notified about it. Usually there are very limited reasons why you would want to do this, but the option is there should you want it!

CancellationToken
CancellationTokens are also supported. Again, it’s a feature that probably won’t be used too often, but it can be used to invalidate a whole set of cache entries in one go. CancellationTokens are notoriously difficult to debug and get going, but if you have the need, it’s there!

The code would look something similar to this :
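    var cts = new CancellationTokenSource();

    // Tie multiple entries to the same token.
    _memoryCache.Set("Key1", "Value1",
        new MemoryCacheEntryOptions().AddExpirationToken(new CancellationChangeToken(cts.Token)));
    _memoryCache.Set("Key2", "Value2",
        new MemoryCacheEntryOptions().AddExpirationToken(new CancellationChangeToken(cts.Token)));

    // Cancelling the token invalidates every entry tied to it in one go.
    cts.Cancel();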

Distributed Memory Cache

A “distributed” memory cache is probably a bit of an oxymoron. It’s obviously not distributed if it’s sitting local to a machine. But the big advantage of going down this road is that, should you intend to switch to using Redis in the future, the interfaces between the Redis distributed cache and the InMemory one are exactly the same. It’s just a single line of difference in your startup. This may also be helpful if locally you just want to use your machine’s cache and not have Redis set up.

To get going, in your startup.cs ConfigureServices method, add a call to AddDistributedMemoryCache like so :
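    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDistributedMemoryCache();
        services.AddMvc();
    }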

Your controller or class where you inject it should look a bit like the following. Again, this code is actually taken directly from our Redis Cache tutorial, the implementation is exactly the same for InMemory, it’s only the call in ConfigureServices in your startup.cs that changes.
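A sketch using the string helpers on IDistributedCache (key and value are placeholders) :

    public class HomeController : Controller
    {
        private readonly IDistributedCache _cache;

        public HomeController(IDistributedCache cache)
        {
            _cache = cache;
        }

        [HttpGet]
        public async Task<string> Get()
        {
            // Return the cached value if we already have one.
            var cachedValue = await _cache.GetStringAsync("MyKey");
            if (cachedValue != null)
                return cachedValue;

            var value = DateTime.Now.ToString();
            await _cache.SetStringAsync("MyKey", value);
            return value;
        }
    }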

Now, looking at what we said about IMemoryCache above, PostEvictionCallback and CancellationTokens cannot be used here. This makes sense, because this interface is for the most part supposed to be used in distributed environments, where any machine in the environment (or the cache itself) could expire/remove a cache entry.

Another very important difference is that while IMemoryCache accepts C# “objects” into the cache, a distributed cache does not. A distributed cache can only accept byte arrays or strings. For the most part this isn’t going to be a big deal. If you are trying to store large objects, just run them through a JSON serializer first and store them as a string; when you pull them out, deserialize them into your object.

Which Should You Pick?

You are likely going to write an abstraction layer on top of either caching interface, meaning that your controllers/services aren’t going to see much of it. For what you should use in that abstraction, I tend to go with the Distributed Cache for no other reason than that, should I ever want to move to using Redis, the option is there.

Are you getting the error as seen in the title when creating a new project in Visual Studio 2015?

The following error occured attempting to run the project model server process (1.0.0-rc4-004771).

Unable to start the process. No executable found matching command “dotnet-projectmodel-server”

And in image form :

Chances are this scenario has played out in the past day or so.

  • You have previously used .net core tooling using project.json in Visual Studio 2015
  • You installed the latest .net core tooling (RC4 or higher) that uses csproj files
  • OR you installed Visual Studio 2017
  • You are now using Visual Studio 2015 to create a new project

There are two ways to “fix” this.

The “I Don’t Really Care” Method

If the difference between .csproj and project.json tooling is beyond you and you really don’t care, just use Visual Studio 2017 from now on to create .net core projects. Yes, that seems very silly, but it is the much easier approach and for many it’s not going to make too much of a difference. You can still open existing project.json .net core projects in Visual Studio 2015 (provided they have a global.json, see below), just not create new ones.

The “I Want To Actually Fix This” Method

So the reason this error occurs is that Visual Studio 2015 is not able to use the very latest .net core tooling (Currently RC4). When you create a new project *or* when you open an existing project without a global.json file specifying an older tooling version, .net core tries to use the latest tooling SDK installed on your machine.

I highly recommend reading this article on running two versions of the .net core tooling SDK side by side to get a better idea of how this works. It will allow you to fix these issues in the future. 

The fix to actually getting your project to open is by creating a global.json file in the root of your solution with the following contents.
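    {
      "sdk": {
        "version": "1.0.0-preview2-003131"
      }
    }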

Where the SDK version is your previous SDK version you were running. Again, if you are unsure about this please read this article on running two versions of the .net core SDK tooling. It’s actually super surprising how many issues can be resolved by knowing how the SDK versioning works.

There are two types of people when it comes to Database Migrations. Those that want a completely automated process in their CI pipeline and have very strong opinions on what tool to use, and then there are those who couldn’t care less and just want to run things by hand (seriously, those people still exist).

This article will try and focus on the former, on what tools best fit into the .net core ecosystem. I think most of it isn’t specific to .net core, but there are a few extra considerations to take into account.

First let’s take a look at what I think are the 4 main principles of a database migration suite (Or any step in a CI pipeline).

Repeatable

You should be able to stand up a database from scratch and have it function exactly as the existing production database does. This may include pre-seeding data (such as admin users) right from the get go. Repeatable also means that if for some reason the migration runs again, it should be “aware” of what it’s already done and not blow up trying to run the same scripts over and over.

Automated

As much as possible, a good database migration strategy should be automated. When trying to reduce the element of human error, you should just remove the humans altogether.

Scalable

There are actually two parts to this. Scalable means that as the database gets bigger, your migration doesn’t start falling over. It also means that the tooling you might be using to generate diffs or actually run the migrations doesn’t start dying either. But scalable has a second meaning as well : scalable with your team. As your team reaches large proportions on a single project (say close to a dozen developers), does managing migrations and potential code conflicts get out of control?

Flexible

For database migrations, flexible is all about whether the tooling is good enough to handle all sorts of database changes, not just schema changes. If you ever split a field, you will have to migrate that data somehow. Other times you may want to mass update data to fix a previous bug, and you want this automated along with your database rollout, not run manually and forgotten about.

Database Project/SQL Project in Visual Studio

I remember the first few times I used a database project to update a database schema; it felt like magic. Now it feels like it has its place for showing a database’s state, but its migration ability is severely lacking. In terms of how a dbproj actually processes an update, it compares the existing database schema to the desired state, then generates scripts to get it there and runs them. But let’s think about that for a second : there may be many “steps” we want to go through to reach the desired schema, not one huge leap forward. It’s less of a step by step migration and more of a blunt tool to get you to the end result the fastest.

Because of the way a dbproj processes migrations, it also makes updating actual data very hard to do. You may have a multi step process in which you want to split a column slowly by creating a temporary field, filling the data into there, and then doing a slow migration through code. That doesn’t really work, because a dbproj is more of a “desired state” migration.

That being said, database projects are very good at showing how a database has evolved over time. Because your database objects become “code” in your source control, you can see how columns/views/stored procedures have been added/removed over time, with the associated check in comment too.

It should also be noted that DB Projects are not multi platform (Both in OS and database). If you are building your existing .net core project on Linux then database projects are out. And if you are running anything but Microsoft SQL Server, database projects are also out.

Repeatable : No (Migration from A -> B -> C will always work the same, but going straight from A -> C may give you weird side effects in data management).
Automated : Yes
Scalable : Yes
Flexible : No – Data migrations are very hard to pull off in a repeatable way

Entity Framework Migrations

EF Migrations are usually the go to when you are using Entity Framework as your data layer. They are a true migration tool that can be started from any “state” and run in order to bring you to the desired state. Unlike a dbproj, they always run in order so you are never “skipping” data migrations by going from state A -> C. It will still run A -> B -> C in order.

EF Migrations are created using their own fluent API in C# code. For some this feels natural, but others feel limited in what they can achieve when trying to control a complex database with a subset of SQL commands that have been converted to the DSL. If you think about complex covering indexes where you have multiple columns along with include columns etc., there is probably some very complex way to do it via C# code, but for most people it becomes a hassle. Add to that the fact that ORMs in general “hide” what queries they are actually running, so now you are also hiding how your database is actually tuned and created, and it does make some people squirm.
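To give a flavour of that fluent API, a typical EF Core migration looks something like this sketch (table and column names are invented for illustration) :

    using Microsoft.EntityFrameworkCore.Migrations;

    public partial class AddCustomerEmail : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            // Every schema change is expressed through the C# DSL.
            migrationBuilder.AddColumn<string>(
                name: "Email",
                table: "Customers",
                nullable: true);
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.DropColumn(name: "Email", table: "Customers");
        }
    }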

But the biggest issue with EF Migrations is the actual migration running itself. Entity Framework is actually “aware” of the state of the database and really throws its toys out of its cot if things aren’t up to date. It knows what migrations “should” have run, so if there are pending migrations, EF really does have a hissy fit. When you are running blue/green deployments, or you have a rolling set of web servers that need to be compatible with your old and new database schema, EF Migrations simply do not work.

Repeatable : Yes
Automated : Yes
Scalable : No
Flexible : Sort Of – You can migrate data and even write custom SQL in an EF Migration, but for the most part you are limited to the EF fluent API

DB Up

Full disclosure, I love DB Up. And I love it even more now that it has announced .net core support is coming (Or already here depending on when you read this).

DB Up is a migration framework that uses a collection of SQL files, and runs them in order from start to finish. From any existing state to any desired state. Because they are just plain old SQL files, every SQL command is available to you, making DB Up easily the most powerful migration tool in your arsenal for dealing with extremely complex databases. Data migrations also become less of a pain because you simply have every single SQL tool available.
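For a flavour of how little ceremony is involved, a console runner is about this much code (the connection string is a placeholder; the scripts are plain .sql files embedded in the assembly) :

    using System.Reflection;
    using DbUp;

    public class Program
    {
        public static int Main(string[] args)
        {
            var upgrader = DeployChanges.To
                .SqlDatabase("Server=.;Database=MyApp;Trusted_Connection=True;")
                // Runs any not-yet-applied .sql scripts, in order by name.
                .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
                .LogToConsole()
                .Build();

            var result = upgrader.PerformUpgrade();
            return result.Successful ? 0 : -1;
        }
    }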

In a CI scenario, DB Up is able to build a database from scratch to any point in time, meaning that testing your “new” C# code on your “old” database is now a breeze. I can’t tell you how many times this has saved my ass in a large team environment. People are forever removing columns in a SQL database before removing them from code, causing the old code to error out and crash. In large web deployment scenarios where there are cross over periods with a new database schema being in play, but old web code running on top of it, it’s a god send.

The other great thing about DB Up is that it’s database agnostic (to a degree). It supports Postgres, MySQL, Firebird, SQL Azure and of course SQL Server. This is great news if you are running your .net core code on something like Postgres.

But there is one big flaw when using DB Up. It becomes hard to see how a particular table has changed over time. With DB Up, all you have is a collection of scripts that, when run in order, give you the desired result, so using source control it becomes difficult to see how something has evolved. In terms of migrations, this isn’t a big deal, but in terms of being able to see an overall picture of your database, not so great.

Repeatable : Yes
Automated : Yes
Scalable : Yes
Flexible : Yes

Hybrid Approach

I tend to take a hybrid approach for most projects. DB Up is far and away the best database migration tool, but it is limited in showing the overall state of the database. That’s where a database project comes in. It’s great for seeing the overall picture, and you can check back through source control to see how things have changed over time, but its migration facilities are severely limited. With that, it becomes a great fit to use both.

And You?

How do you do your migrations? Do you use one of the above? Or maybe something like Fluent Migrator? Drop a comment below!

In previous versions of .net, picking which framework version you were going to use was as simple as selecting it in the “new project” dialog of Visual Studio. With .net core things have changed somewhat, mostly because while Visual Studio is still the IDE for the majority of developers on Windows machines, there is now a myriad of ways to write code with .net core, all the way down to using a simple text editor.

The issue of picking a .net core version mostly comes about when you want to test out the latest release candidate version to see what’s new, but you need to keep using an older release for your day to day work. Or it could even be that a very old project uses a previous release, but now you want to start doing all future work on the latest .net core version. All problems that can be solved, even if in a slightly roundabout way!

How Do I Know Which .Net Core SDK Versions I Have?

The simplest way is to check your SDK folder. First, open a command prompt and type the following :
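On Windows that command is :

    where dotnet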

You should be given a path similar to the following :
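    C:\Program Files\dotnet\dotnet.exe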

Go to that “folder”, so in my case C:\Program Files\dotnet\. Inside there you should see another folder labelled “sdk”. Inside here you will find all “installed” SDK versions of .net you have available. Mine looks a bit like the following :
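    1.0.0-preview2-003131
    1.0.0-preview2-1-003177
    1.0.0-rc4-004771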


Now, these versions probably don’t line up with what you are expecting for the most part. It would make more sense, of course, to have them named “1.0”, “1.1”, “1.1.4” etc. But Microsoft being Microsoft, it is what it is. If you are confused, you should be. Microsoft have said they will clean up these version issues in the future, but for now this is what you have. I haven’t been able to find official documentation with a simple table to look up what version of the SDK means what (if you have, drop a comment below!).

There is however this article (https://github.com/dotnet/core/blob/master/release-notes/download-archive.md). It has a bit more on the version numbers, which through a bit of mental gymnastics you can kind of make out what they mean.

I can tell you what those on my machine mean :

1.0.0-preview2-003131 – .net Core 1.0 tooling
1.0.0-preview2-1-003177 – .net Core 1.1 tooling with project.json
1.0.0-rc4-004771 – .net Core tooling with Preview 4 (which is .csproj not project.json).

For the most part when you are finding which versions you have on your machine, you are probably going to be able to match them up with an existing SDK version of a project anyway (More on that later), so don’t get too stressed trying to work out what they mean for now.

What Is My Default .Net Core SDK Version?

By default, the latest SDK version is going to be your default version. When I say default, I mean the version of the tooling that would be used if you opened a folder and ran “dotnet new”.

But to be 100% sure, open a command prompt that is not inside an actual .net core project and run :
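    dotnet --version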

You will get a response like “1.0.0-rc4-004771”. That is your default .net core SDK.

How Do I Know An Existing .Net Core Project’s SDK Version?

In the root of your Solution (Not your project!), look for a file named “global.json”. Inside this file you will find an element that looks like the following :
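    {
      "sdk": {
        "version": "1.0.0-preview2-003131"
      }
    }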

This tells dotnet what version of the SDK it should be forced to use, irrespective of other tooling you have installed or what the default is. For example, if I open a command prompt inside this solution and run “dotnet --version”, I get the response “1.0.0-preview2-003131” even if the default for my machine is RC4 etc.

And if you don’t have a global.json? Then it runs on the latest version on that machine. Unsurprisingly this can cause issues in the future, so it is much, much better to have a global.json in your solution directory to lock down the SDK version. This is especially true if you are part of a team where any number of SDKs may be installed on each machine.

How Do I Create A New Project With A Specific SDK Version In Command Line?

The above section probably pointed you in the right direction already, but let’s go over it. I create a solution folder and place a global.json file in there with the following contents :
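    {
      "sdk": {
        "version": "1.0.0-preview2-003131"
      }
    }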

Note that this SDK version is an old version that used project.json files. That will be important shortly.

I then create a folder in the solution directory for my project, open a command prompt in the project folder, and run “dotnet new -t web”. A web project is created that, when I view it, has project.json files, which means it must have read my global.json file (from a directory up, no less). I know this because only older versions of the SDK use project.json; if it didn’t bother reading the global.json it would have created a project using .csproj files instead.

Just to be sure, I create a new solution folder and place the following global.json file inside :
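    {
      "sdk": {
        "version": "1.0.0-rc4-004771"
      }
    }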

Now we are forcing the SDK version to a later version (That uses csproj files).

Again, I create a project folder inside the solution directory. This time I run the command “dotnet new webapi” (in RC4, the -t flag is removed and you instead just pass in the type of project you would like directly). Inside our project folder I can see that an API project has been created and it’s using csproj files, not project.json files. Success!

How Do I Create A New Project With A Specific SDK Version In Visual Studio?

Most people with Visual Studio would presume that when creating a new project in .net core, you are presented with the option of which SDK you want to use. This is not the case. You are asked what version of the .net core framework you would like to use (1.0 or 1.1), but not the tooling (e.g. whether you wish to run project.json or .csproj). If you create a project from Visual Studio this way, it will use the latest version on your machine at that time and put that into your global.json. This may work just fine for you, but in other cases (such as if you have developers using Project Rider from Jetbrains, which only supports Preview 2 of the tooling), you will need to define the SDK ahead of time.

In my case, I actually haven’t found a way to do it. Visual Studio 2017 doesn’t seem to care what is inside the global.json and just creates a project with csproj files anyway. This is in comparison to Visual Studio 2015, which will only create projects that use project.json. In this way you can “kind of” force an SDK version by using a specific version of Visual Studio, but it’s not exactly a perfect science.

In my experience, it’s better to just use the command line tools to get you up and going to start with. Once you have a solution created from the command line and you add additional projects, Visual Studio does then recognize the existing global.json SDK.

One of the most overlooked features of .net core is the new “dotnet watch” command. It gives you a “live reload” of your ASP.net core site without having to either re-run the “dotnet run” command, or worse, go through the “stop the process in Visual Studio, write your changes, recompile and run again” routine. The latter can be very annoying when all you are trying to do is a simple one line fix.

If you are used to watches in other languages/tooling (Especially task runners like Gulp), then you will know how much of a boost watches are to productivity.

The Basics

I highly recommend creating a simple ASP.net core project to run through this tutorial with. The tooling can be a little finicky with large projects so it’s easier to get up and running with something small, then take what you’ve learned onto something a little larger.

First you need to install the nuget package that actually runs the watcher tool. Run the following from your package manager console.
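At the time of writing, the watcher lives in the Microsoft.DotNet.Watcher.Tools package, still in pre-release (note that with project.json tooling it needs to end up under the “tools” node of your project.json, not just “dependencies”) :

    Install-Package Microsoft.DotNet.Watcher.Tools -Pre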

For this demo, I have a simple controller that has a get method that returns a string value.
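Something along these lines (returning the “Old Value” string used below) :

    [Route("api/[controller]")]
    public class HomeController : Controller
    {
        [HttpGet]
        public string Get()
        {
            return "Old Value";
        }
    }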

Open a command prompt in your project directory. Run the command “dotnet watch run”. Note that if you have previously run the “dotnet run” command with other flags (e.g. “dotnet run -f net451” to specify the framework), these will still work with the watch command. You should see something similar to the following :

This means you are up and running. If I go to http://localhost:5000/api/home, I should see my controller return “Old Value”.

Now I head back to my controller and I change the Get action to instead return “New Value”. As soon as I hit save on this file I see the following in my console window :

The first line tells us that it immediately picked up on the fact that we changed something in HomeController.cs, and so it starts recompiling straight away. When we browse to http://localhost:5000/api/home we see our “New Value” shown immediately, and we didn’t have to do anything else in terms of recompiling.

Debugging With Dotnet Watch

And now the big caveat of this feature : there doesn’t seem to be a way to always have your Visual Studio debugger attached to the latest instance of the running code. The dotnet watch command spins up Kestrel to host your code, but it’s hard to identify the PID of that particular process. You should be able to attach your debugger to the process once you find it, but as soon as you edit your code, a new process is created, rendering your debugger useless.

For the most part, errors will flash by on your console window when they happen, so they should be easy to spot. Then when you actually need the full debugger experience, like breakpoints, you will be able to attach it. Just don’t expect it to be “always on”.

With .net core comes a new way to build and run unit tests, with a command line tool named “dotnet test”. While the overall syntax of writing tests using MSTest, XUnit or NUnit hasn’t changed, the tooling has changed substantially from what people are used to. There are even a couple of gotchas that are enough to cause migraines if you aren’t careful.

Annoyingly, many of the packages you need to get started are in pre-release, and much of the documentation is locked into the old style DNX tooling. This article focuses on .net Core 1.1 – Preview 2. If you are already running Preview 4 then much of this will still apply, but you will be working with csproj files instead of project.json. Once Preview 4 gets a proper release, this article will be updated to reflect the (likely) many changes that go along with it.

Let’s get started.

The Basics Using MSTest

We will go over the basics using the MSTest framework. If you prefer to use NUnit or XUnit there are sections below to explain how they run within dotnet test, but still read through this section as it will give you the general gist of how things work.

Create a .net core console project in your solution. Now, you are probably used to creating a class library for unit tests, or maybe even a “Unit Test” project. We do not use the “Unit Test” project type because there is no such type for .net core, only for the full framework. And we do not use a class library for the sole reason that it defaults the target framework to .net standard, not .net core. It’s not that we are going to be building a console application per se; it’s that the defaults for a console application in terms of project.json are closer to what we need.

Once the project is created, if you are on .net core 1.1 or lower, open up project.json. Delete the entire “buildOptions” node. That’s this node :
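In the default console template, that node likely contains just the entry point flag :

    "buildOptions": {
      "emitEntryPoint": true
    }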

You can also go ahead and delete the Program.cs file as we won’t be needing it.

Inside that project, you need to install a couple of packages.

You first need to install the MSTest Framework package from here. At this point in time it is still in Pre-Release so you need the “Pre” flag (At some point this will change).
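    Install-Package MSTest.TestFramework -Pre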

And now install the actual runner package from here. Again, it’s in Pre-Release so you will need the “Pre” flag.
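    Install-Package dotnet-test-mstest -Pre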

Now go back to your project.json and add in a top level node named “testRunner” with the value “mstest”. All in all, your project.json should look like the following :
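Something close to this (version numbers here are illustrative — keep whatever the package manager actually installed for you) :

    {
      "version": "1.0.0-*",
      "testRunner": "mstest",
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        },
        "MSTest.TestFramework": "1.0.8-rc",
        "dotnet-test-mstest": "1.1.2-preview"
      },
      "frameworks": {
        "netcoreapp1.1": {}
      }
    }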

Great! Now, for the purpose of this article we will create a simple class called “UnitTests.cs”, and add in a couple of tests. That file looks like the following :
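    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class UnitTests
    {
        [TestMethod]
        public void PassingTest()
        {
            Assert.AreEqual(4, 2 + 2);
        }

        [TestMethod]
        public void FailingTest()
        {
            Assert.AreEqual(5, 2 + 2);
        }
    }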

Open a command prompt in your unit test directory. Run “dotnet test”. You should see something like the following :

And you are done! You now have a unit test project within .net core.

Using NUnit With Dotnet Test

Again, follow the steps above to create a console application, but remove the buildOptions node in your project.json.

Add in your NUnit nuget package by running the following from the package manager console :
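    Install-Package NUnit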

Now you need to add the NUnit test runner. Install the following nuget package. Note that at the time of this article, the package is only in pre-release so you need the Pre flag :
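    Install-Package dotnet-test-nunit -Pre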

Inside your project.json, you need a top level node called “testRunner” with the value of “nunit”. If you’ve followed the steps carefully, you should have a project.json that resembles the following :
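Again, version numbers are illustrative :

    {
      "version": "1.0.0-*",
      "testRunner": "nunit",
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        },
        "NUnit": "3.5.0",
        "dotnet-test-nunit": "3.4.0-beta-3"
      },
      "frameworks": {
        "netcoreapp1.1": {}
      }
    }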

For the purpose of this article, I have the following test class :
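    using NUnit.Framework;

    [TestFixture]
    public class UnitTests
    {
        [Test]
        public void PassingTest()
        {
            Assert.AreEqual(4, 2 + 2);
        }

        [Test]
        public void FailingTest()
        {
            Assert.AreEqual(5, 2 + 2);
        }
    }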

Open a command prompt inside your project folder and run “dotnet test”. You should see something similar to the following :

Using XUnit With Dotnet Test

Again, follow the steps above to create a console application, but remove the buildOptions node in your project.json.

Add the XUnit core nuget package to your project. In your package manager console run :
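    Install-Package xunit -Pre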

Now add in the XUnit test runner with the following command in your package manager :
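    Install-Package dotnet-test-xunit -Pre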

Open up your project.json. Add in a top level node named “testRunner” with the value of “xunit”. All going well, your project.json should resemble something close to the following :
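Version numbers again illustrative :

    {
      "version": "1.0.0-*",
      "testRunner": "xunit",
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.1.0"
        },
        "xunit": "2.2.0-beta5-build3474",
        "dotnet-test-xunit": "2.2.0-preview2-build1029"
      },
      "frameworks": {
        "netcoreapp1.1": {}
      }
    }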

For this article, I have the following test class :
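    using Xunit;

    public class UnitTests
    {
        [Fact]
        public void PassingTest()
        {
            Assert.Equal(4, 2 + 2);
        }

        [Fact]
        public void FailingTest()
        {
            Assert.Equal(5, 2 + 2);
        }
    }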

Open up a command prompt in your project folder. Run the command “dotnet test” and you should see something similar to the following :

Microsoft have released a security advisory warning that there is a vulnerability in the ASP.net core 1.1 MVC Core package that could allow a Denial Of Service attack. Exactly how to exploit the vulnerability is not being disclosed by Microsoft at this stage, understandably so, as it seems any .net core app on 1.1 will be affected. Note that any ASP.net core version below 1.1 is not affected.

Further info/discussion :

Github Announcement
Redhat Announcement
Reddit Discussion
HN Discussion

The issue is in a package named “Microsoft.AspNetCore.Mvc.Core”, but most people will find that they have a direct reference to the “parent” package named “Microsoft.AspNetCore.Mvc”. Either way, you need to do the below and patch.

How To Patch (project.json)

This is how to fix the vulnerability if you are using project.json (e.g. ASP.net Core 1.1 Preview 2). If you are using csproj (Preview 4), then check the section below.

First open up your project.json file and do a search for “Microsoft.AspNetCore.Mvc*”, which includes any reference to sub packages such as “Microsoft.AspNetCore.Mvc.Core”. Let’s say you have a project.json similar to the below :
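A cut-down example for illustration :

    {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "version": "1.1.0",
          "type": "platform"
        },
        "Microsoft.AspNetCore.Mvc": "1.1.0"
      }
    }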

You need to bump the version of the MVC dependency. So change it to 1.1.1 like so :
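    "Microsoft.AspNetCore.Mvc": "1.1.1"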

Open a console inside your project folder and run “dotnet restore”, and you should now be on the patched version of the library. Deploy your updated application as soon as possible.

If you cannot find a reference to “Microsoft.AspNetCore.Mvc*” in your project.json it does not mean you are immune.

Open your project.lock.json file, and search for “Microsoft.AspNetCore.Mvc.Core”. If you find it inside this file, it means that a package you are using has a dependency on the package. In this case, you need to manually add the full line to your project.json dependencies.

How To Patch (.csproj)

Patching your csproj file is almost identical, but in lovely XML form.

Search for a reference to “Microsoft.AspNetCore.Mvc*”. It will look something like below :
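    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.0" />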

Bump the version number by 1 :
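    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.1" />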

Open a command prompt in your project folder and run “dotnet restore”. Deploy your updated application as soon as possible.

If you cannot find a reference to “Microsoft.AspNetCore.Mvc*” in your .csproj it does not mean you are immune.

You should find a file named project.assets.json in your project folder. Open this and search for “Microsoft.AspNetCore.Mvc.Core”. If you find it, it means that a package you are directly using has a reference itself to the MVC Core package. You need to open up your .csproj, and add in the following line :
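    <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="1.1.1" />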

Open up a command prompt in your project folder and run “dotnet restore” and you are done. Deploy your updated application as soon as possible.

GZIP is a generic compression method that can be applied to any stream of bits. In terms of how it’s used on the web, it’s easiest thought of as a way to make your files “smaller”. When applying GZIP compression to text files, you can see anywhere from 70-90% savings; on images or other files that are usually already “compressed”, savings are much smaller or in some cases nothing. Other than eliminating unnecessary resource downloads, enabling compression is the next best way to improve the loading speed of your web app.

Enabling At The Code Level (Dynamic)

You can dynamically GZIP text resources on request. This is great for content that may change (think dynamic HTML and text, not JS or CSS). The reason it’s not great for static content such as CSS or JS is that those files will not change between requests, or really even between deployments. There is no advantage to dynamically compressing them each time they are requested.

The code you need to make this work is contained in a nuget package. So you will need to run the following from your package manager console.
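    Install-Package Microsoft.AspNetCore.ResponseCompression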

In your ConfigureServices method in your startup.cs, you need to add a single line to add the dependencies that GZIP compression is going to need.
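    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression();
        services.AddMvc();
    }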

Note that there are a couple of options you may wish to use in this call.
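For example (the extra mimetype added here is just an illustration; Concat needs System.Linq) :

    services.AddResponseCompression(options =>
    {
        // Compress responses even over SSL (off by default).
        options.EnableForHttps = true;
        // Extend (or replace) the default list of compressed mimetypes.
        options.MimeTypes = ResponseCompressionDefaults.MimeTypes.Concat(new[] { "image/svg+xml" });
    });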

The EnableForHttps flag does exactly what it says. It enables GZip compression even over SSL (By default this is switched off). The second turns on (Or off in some cases) GZip of certain mimetypes. The default list at the time of writing is the following (Taken from the Github source here) :
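    text/plain
    text/css
    application/javascript
    text/html
    application/xml
    text/xml
    application/json
    text/json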

Then in your configure method of startup.cs, you then just need a single line to get going.
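    public void Configure(IApplicationBuilder app)
    {
        // Compression first, so later middleware's output gets compressed.
        app.UseResponseCompression();
        app.UseMvc();
    }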

Note that the order is very important! While in this case we only have two pieces of middleware that don’t cause issues, other middleware (such as the static content middleware) will actually send the response back to the user before GZip has occurred if it comes first in the list. For safety’s sake, just keep UseResponseCompression as the first middleware in your list.

To test you have everything up and running just fine, try loading your site and watching the requests go past. You should see the Content-Encoding header come back as “Gzip”. In Chrome it looks like the following :

Enabling At The Code Level (Static)

The issue with enabling GZip compression inside .net core is that if your content is largely static (As in it doesn’t change), then it’s being re-compressed over and over again. When it comes to javascript and CSS, there is really no reason for this to be done on the fly. Instead we can GZip these files beforehand and deploy them “pre” GZip’d.

For this you will need to use a task runner such as Gulp to GZIP your static files in your pipeline. There is a very simple gulp package for this named “gulp-gzip”. Have a search around and get used to how it works. But your gulp file should look something similar to the following :
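A very rough sketch using gulp-gzip (paths are placeholders for your own project layout) :

    var gulp = require('gulp');
    var gzip = require('gulp-gzip');

    gulp.task('compress', function () {
        // Gzip each javascript file and write it alongside the original.
        return gulp.src('./wwwroot/js/*.js')
            .pipe(gzip())
            .pipe(gulp.dest('./wwwroot/js'));
    });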

This is a very simple case, you will need to know a bit more about Gulp to customize it to your needs.

More importantly, you need to map GZip files so they actually return their real result. First install the static file middleware. Run the following from your Package Manager console :
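    Install-Package Microsoft.AspNetCore.StaticFiles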

Now in your Configure method in startup.cs, you need to add some code like so :
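A sketch along those lines, using the OnPrepareResponse hook :

    app.UseStaticFiles(new StaticFileOptions
    {
        // .gz isn't a known mimetype, so allow it to be served at all.
        ServeUnknownFileTypes = true,
        OnPrepareResponse = context =>
        {
            if (context.File.Name.EndsWith(".js.gz"))
            {
                // Tell the browser it's javascript that happens to be gzip encoded.
                context.Context.Response.Headers["Content-Type"] = "application/javascript";
                context.Context.Response.Headers["Content-Encoding"] = "gzip";
            }
        }
    });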

What does this do? Well, first it says that serving static files without running the full MVC pipeline is A-OK. Next it says that if you come across a file that ends in .js.gz, you can send it out, but you tell the browser that it’s actually javascript and that it’s just been encoded with gzip. Otherwise it would just be returned as a plain old GZip file that a user could download, for example.

Next on your webpage, you actually need to be referencing the .gz file not the .js file. So it should look something like so :
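    <script src="~/js/site.js.gz"></script>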

And that’s it, your static content is now compressed!

Enabling At The Server Level

And lastly, you can of course always enable GZip compression at the web server level (IIS, Nginx, Apache etc.). For this it’s better to consult the relevant documentation. You may wish to do this as it keeps the configuration out of code (and allows somewhat easier access to change the configuration on the fly).