Logging in any application is always a contentious issue, with many developers rolling their own framework or tacking on third-party libraries such as NLog or log4net. While these approaches are fine, Microsoft has stepped in and made logging a first-class citizen in ASP.NET Core. What that means is a standardized way of logging that is simple and easy to get up and running, but also extensible when you have more complicated logging needs.

Store and search your log messages in the cloud

Just a quick note from our sponsor elmah.io. Now that you have set up logging, how do you plan to store and search that data? With elmah.io, all of your log messages are stored in the cloud. With a few lines of code, everything is searchable and you can set up a range of notifications to popular tools like Slack and Microsoft Teams. elmah.io comes with a 21 day, no credit card required trial so there’s no reason not to jump in and have a play.

The Basics

This guide assumes you have a basic project up and running (even just a hello world app), just to test how logging works. For getting up and running, we are going to use the debug logger. For that, you need to install the following NuGet package:

Install-Package Microsoft.Extensions.Logging.Debug

In your startup.cs file, in the Configure method, you need to add a call to AddDebug on your logger factory. This may already be done for you depending on which code template you are using, but it should end up looking a bit like the following:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddDebug();

	app.UseMvcWithDefaultRoute();
}

Let’s try this out first. Run your web project and view any URL. Back in Visual Studio, open the “Output” window. If you can’t see it, go to Debug -> Windows -> Output. After viewing any URL you should see something pretty similar to:

Microsoft.AspNetCore.Hosting.Internal.WebHost:Information: Request starting HTTP/1.1 GET http://localhost:47723/api/home  
Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker:Information: Executing action method LoggingExample.Controllers.HomeController.Get (LoggingExample) with arguments () - ModelState is Valid
Microsoft.AspNetCore.Mvc.Internal.ObjectResultExecutor:Information: Executing ObjectResult, writing value Microsoft.AspNetCore.Mvc.ControllerContext.
Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker:Information: Executed action LoggingExample.Controllers.HomeController.Get (LoggingExample) in 11.8351ms
Microsoft.AspNetCore.Hosting.Internal.WebHost:Information: Request finished in 22.8292ms 200 text/html; charset=utf-8

The content isn’t too important at this stage; the important thing is that you are seeing logging in action! Of course it is a bit verbose and over the top, but we will be able to filter these messages out later.

That sort of automatic logging is great, but what we really want is to be able to log things ourselves: things like an exception or fatal error that someone should look into immediately. Let’s go into our controller. First we need to add a dependency on ILogger in our constructor, and then we need to actually log something. In the end it looks a bit like this:

public class HomeController : Controller
{
	private readonly ILogger _logger;

	public HomeController(ILogger<HomeController> logger)
	{
		_logger = logger;
	}

	[HttpGet]
	public IActionResult Get()
	{
		_logger.LogInformation("Homepage was requested");
		try
		{
			throw new Exception("Oops! An Exception is going on!");
		}
		catch (Exception ex)
		{
			_logger.LogError(ex.ToString());
		}

		return Ok("123");
	}
}

Great, now when we go into this action what do we see in the debug window?

LoggingExample.Controllers.HomeController:Information: Homepage was requested
LoggingExample.Controllers.HomeController:Error: System.Exception: Oops! An Exception is going on!
   at LoggingExample.Controllers.HomeController.Get()

You will notice that our log messages are shown with the level that they are logged at, in this case Information and Error. At this point it doesn’t make too much of a difference, but it will in our next section.
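As a side note, ILogger also has overloads that accept the exception object itself, along with message templates that keep values structured rather than pre-formatted into the string. A quick sketch (the message wording here is purely illustrative):

```csharp
// Passing the exception object (rather than ex.ToString()) lets the provider
// capture the stack trace as structured data.
_logger.LogError(ex, "Something went wrong loading the homepage");

// Message templates keep values as named properties for providers that
// support structured logging, instead of baking them into the string.
_logger.LogInformation("Homepage was requested at {RequestTime}", DateTime.UtcNow);
```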

See how easy that was? No more fiddling around with NuGet packages and building abstractions; the framework has done it all for you.

Logging Levels

The thing is, if we rolled this into production (and used a logger that pushes to a database), we would get inundated with logging about every single request coming through. It creates noise that means you can’t find the logs you need, and worse, it may even cause a performance issue with that many writes.

An interesting thing about even the Microsoft-packaged loggers is that there isn’t necessarily a pattern for defining their configuration. Some prefer full configuration classes where you can edit each and every detail, others prefer a simple LogLevel enum passed in and that’s it.

Since we are using the debug logger, we need to change our loggerFactory.AddDebug call a bit.

loggerFactory.AddDebug(LogLevel.Error);

What this says is: please only log errors (and anything more severe) to the debug window. In this case we have hardcoded the level, but in a real application we would likely read an app setting that is easily changeable between development and production environments.
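A sketch of what that configuration-driven approach might look like, assuming your Startup exposes a Configuration property as the default templates do (the "Logging:MinimumLevel" key is a made-up name for illustration):

```csharp
// Read the desired minimum level from configuration; fall back to
// Information if the setting is missing or not a valid LogLevel.
LogLevel minimumLevel;
if (!Enum.TryParse(Configuration["Logging:MinimumLevel"], out minimumLevel))
{
	minimumLevel = LogLevel.Information;
}
loggerFactory.AddDebug(minimumLevel);
```

You could then set the key to "Error" in production and "Debug" locally without touching code.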

Using the same HomeController that we used in the first section of this article, let’s run our app again. What do we see in the debug window?

LoggingExample.Controllers.HomeController:Error: System.Exception: Oops! An Exception is going on!
   at LoggingExample.Controllers.HomeController.Get()

So we see it logging our error message but not our information message or the system messages about pages being loaded. Great!

Again, it’s important to note that each logger may have its own configuration object that it uses to describe logging levels. This can be a huge pain when swapping between loggers, but it’s all usually contained within your startup.cs.

Logging Filters

Most (if not all) loggerFactory extensions accept a lambda that allows you to filter out logs you don’t want, or to add extra restrictions on your logging.

I’ll change the AddDebug call to the following :

loggerFactory.AddDebug((category, loggingLevel) => category.Contains("HomeController"));

In this case, “category” stands for our logging category which in real terms is the full name (Namespace + Class) of where the logger is being used. In this case we have locked it down to HomeController, but we could just as easily lock it to “LoggingExample”, the name of this test app. Or “Controllers”, the name of a folder (And in the namespace) in our example app.

With this in place, running our code again, we see that we log everything from the controller (Information and Error), but nothing from the system itself (page requests etc.).

What you will quickly find is that this lambda gets crazy big and hard to manage. Luckily the logger factory also has an extra method to deal with this situation.

loggerFactory
	.WithFilter(new FilterLoggerSettings
	{
		{"Microsoft", LogLevel.None},
		{"HomeController", LogLevel.Error}
	})
	.AddDebug();

Using this, you can specify what level of logging you want for each category. LogLevel.None specifies that you shouldn’t log anything.

Framework Loggers

ASP.NET Core has a set of inbuilt loggers that you can make use of.

AddConsole
Writes output to a console. Has serious performance issues in production so only to be used in development.

AddDebug
What we have used above! Writes output to the debugger. Again, a dev only tool.

AddEventSource
Only available on Windows, writes events out to an ETL. You generally need to use a third party tool such as PerfView to be able to read the logs.

AddEventLog
Writes out logging to the Event Log (Windows only). Again, pretty nifty for logging errors, especially when you have a devops team looking after the boxes; they like seeing issues in the Event Log!

AddAzureWebAppDiagnostic
If you are on Azure, this writes out logs to Azure diagnostics. This is very handy and, if you are in Azure, a must do, as other types of logging (such as to a database) may fail because of the very network issues that caused the errors in the first place.

Third Party Loggers

NLog
https://github.com/NLog/NLog.Extensions.Logging – NLog is usually my go to for the simple fact that it in turn has many extensions into things like Logentries. Obviously there are other more simple options such as logging to a file or sending an email.

Elmah
https://github.com/elmahio/Elmah.Io.Extensions.Logging – Elmah is mostly popular because of the dashboard that it comes with. If you are already using Elmah, then this is an easy slot in.

Serilog
https://github.com/serilog/serilog-extensions-logging – Serilog is gaining in popularity. Again, another easy extension to get going that has a lot of versatility.

ENJOY THIS POST?
Join over 3.000 subscribers who are receiving our weekly post digest, a roundup of this weeks blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

Anyone that has used the full .net MVC framework has spent many an hour trying to rejig the web.config and custom MVC filters to get custom error pages going. Often it would lead you on a wild goose chase around Stack Overflow finding answers that went something along the lines of “just do this one super easy thing and it will work”… It never worked.

.net Core has completely re-invented how custom errors work. Partly because with no web.config, any XML configuration is out the window, and partly because the new “middleware pipeline” mostly does away with the plethora of MVC filters you had to use in the past.

Developer Exception Page

The developer exception page is more or less the error page you used to see in full .net framework if you had custom errors off. That is, you could see the stack trace of the error and other important info to help you debug the issue.

By default, new ASP.net core templates come with this turned on when creating a new project. You can check this by looking at the Configure method of your startup.cs file. It should look pretty close to the following.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
	}

	app.UseMvcWithDefaultRoute();
}

Note that checking if the environment is development is so important! Just like the CustomErrors tag in the full framework, you only want to leak out your stacktrace and other sensitive info if you are debugging this locally or you specifically need to see what’s going on for testing purposes. Under no circumstances should you just turn on the “UseDeveloperExceptionPage” middleware without first making sure you are working locally (Or some other specific condition).

Another important thing to note is that, as always, the ordering of your middleware matters. Ensure that you add the Developer Exception Page middleware before MVC. Otherwise, MVC (or any other middleware in your pipeline) can short-circuit the request without it ever reaching the Developer Exception Page code.

If your code encounters an exception now, you should see the developer exception page, which shows the full stack trace as well as any query string we sent, cookies and other headers. Again, not things we want to start leaking out all over the place.

Exception Handler Page

ASP.net core comes with a catch-all middleware that handles all exceptions and redirects the user to a particular error page. This is pretty similar to the default redirect of the customErrors section in web.config, or the HandleError attribute in full framework MVC. An important note is that this is an “exception” handler: other status code errors (404 for example) do not get caught and redirected using this middleware.

The pattern is usually to show the developer error page when in the development environment, otherwise to redirect to the error page. So it might look a bit like this :

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
	}
	else
	{
		app.UseExceptionHandler("/home/error");
	}

	app.UseMvcWithDefaultRoute();
}

You would then need to create an action to handle the error. One very important thing to note is that the ExceptionHandler will be called with the same HTTP Verb as the original request. e.g. If the exception happened on a Post request, then your handler at /home/error should be able to accept Posts.

What this means in practice is that you should not decorate your action with any particular HTTP verb, just allow it to accept them all.
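A minimal handler action might look like the following (the controller, route and view names here are just examples):

```csharp
public class HomeController : Controller
{
	// Deliberately no [HttpGet]/[HttpPost] attribute: the handler must
	// accept whatever verb the original failing request used.
	[Route("home/error")]
	public IActionResult Error()
	{
		return View(); // assumes an Error.cshtml view exists
	}
}
```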

Statuscode Pages

ASP.net core comes with an inbuilt middleware that allows you to capture other HTTP status codes (other than, say, 500) and show certain content based on the status code.

There are actually two different ways to get this going. The first is :

app.UseStatusCodePagesWithRedirects("/error/{0}");

Using “StatusCodePagesWithRedirects”, you redirect the user to the status page. The issue with this is that the client is returned a 302 and not the original status code (for example a 404). Another issue is that if the error happens somewhere in your pipeline, redirecting essentially restarts the pipeline again (and it could throw up the same issue).

The second option is :

app.UseStatusCodePagesWithReExecute("/error/{0}");

Using ReExecute, your original response code is returned but the content of the response is from your specified handler. This means that 404’s will be treated as such by spiders and browsers, but it still allows you to display some custom content (Such as a nice 404 page for a user).
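The {0} placeholder in the path is filled with the original status code, so your handler can pick it up as a route parameter. A sketch of such a handler (controller, action and view names are illustrative):

```csharp
public class ErrorController : Controller
{
	// Matches /error/404, /error/500, etc. The {0} in the middleware
	// registration is replaced with the original status code.
	[Route("error/{statusCode}")]
	public IActionResult Index(int statusCode)
	{
		if (statusCode == 404)
		{
			return View("NotFound"); // assumes a NotFound.cshtml view exists
		}
		return View("GenericError"); // fallback for other status codes
	}
}
```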

Custom Middleware

Remember that you can always create custom middleware to handle any exception/status code in your pipeline. Here is an example of a very simple middleware to handle a 404 status code.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	app.Use(async (context, next) =>
	{
		await next.Invoke();

		//After going down the pipeline check if we 404'd. 
		if (context.Response.StatusCode == StatusCodes.Status404NotFound)
		{
			await context.Response.WriteAsync("Woops! We 404'd");
		}
	});

	app.UseMvcWithDefaultRoute();
}

On our way back out (remember, the code after the “next” call runs on the way out of the pipeline), we check if the status code was 404; if it was, we return a nice little message letting people know we 404’d.

If you want more info on how to create a custom middleware to handle exceptions (Including how to write a nice class to wrap it), check out our tutorial on writing custom middleware in asp.net core.


Yeoman is a code scaffolding tool that generates boilerplate project code for you. If you’ve been developing on Windows for some time, you’re probably using Visual Studio to create your projects, and letting it generate your “base” code. For the most part, if you’re still on Windows and still use Visual Studio, Yeoman may not be for you (But it’s still not totally out of the question!). If you’re on Linux, or prefer building in a minimalistic scenario using something like VS Code, then Yeoman can do a lot of the legwork for you.

Getting Started

The template we are going to use in this tutorial relies on the Bower package manager. If you don’t have it, or if you don’t know what it is, you are going to have to install it. Run the following from a command prompt/terminal to install Bower.

npm install -g bower

Yeoman is actually an NPM module, so if you haven’t already, you will need to install NodeJS from here. Once you’ve got Node up and running, run the following command from a command prompt (or terminal on non-Windows) to install Yeoman globally.

npm install -g yo

Now, while Yeoman is a scaffolding tool, the templates it uses to actually generate the code are mostly community driven. You can find all possible generators on this page here. While this tutorial focuses on using Yeoman with .net core, it can be used to scaffold any codebase in any language. The generator we will be using is called “generator-aspnet”. For this, we need to install the template using NPM, so run the following in a terminal/command prompt.

npm install -g generator-aspnet

Great, now we are ready to go!

Go create a folder for your project now. Open a command prompt/terminal inside this folder and run

yo aspnet

All going well, you should be presented with a screen that is similar to this (Note it can take a while for things to kick into gear, just be patient!)

Select “Web Application” (not Empty). On the next screen, select the Bootstrap framework, type in your project name and let it rip. You will see Yeoman go ahead and create your project similar to how Visual Studio would do it; you can then open up your project folder and inspect the generated files.

As you probably saw, there are a tonne of templates built in to do most of the legwork of project creation for you. Have a play around creating different project types and see what is generated for you. Again, if you are using Visual Studio as your IDE, then most of this won’t be that amazing, but if you are going without, then having something to generate all this code for you is a godsend.

Using Other Templates

As quickly discussed above, this aspnet generator isn’t the only template for .net core in Yeoman. If you head over here and type in “.net core”, you will find several generators that can be used. Many of them are the basic template with slight additions, such as using Angular 2 instead of 1.6.

One template that is very popular is one by Microsoft called “aspnetcore-spa”. You can find out more info over on the Github page, but in short it’s a code template for creating single page apps.

Creating Your Own Templates

If you are a vendor and find yourself creating (for lack of a better term) “cookie cutter” type websites with many similarities, you might actually want to create your own custom template for consistency. It’s not quite as straightforward as throwing something in a directory and having Yeoman copy/paste it, but it’s definitely worth investigating if you find yourself creating boilerplate code time and time again. You can read more in the official Yeoman docs.


NancyFX (or Nancy; the FX stands for framework) is a super lightweight web framework that can be used to spin up minimalist APIs in no time at all. It’s mostly seen as an alternative to Web API when you don’t need all the plumbing and boilerplate work of a full Web API project. It also has a lot of popularity with NodeJS developers who are used to very simple DSLs for building APIs.

Getting Started

When creating a new project, you want to create a .net core web project but make sure it’s created as an “empty” application. If you select Web API (Or MVC) it brings along with it a ton of references and boilerplate code that you won’t need when using Nancy.

You will need the Nancy NuGet package, so run the following from your package manager console. At the time of writing, the .net core-compatible Nancy is in pre-release, so you will need the -Pre flag when installing from NuGet.

Install-Package Nancy -Pre

In your startup.cs you need to add a call to UseNancy in the Configure method. Note that any other handler in here (like UseMvc) may screw up your pipeline, especially when it comes to routing. If you are intending to use Nancy, it’s probably better to go all in and not try to mix and match frameworks.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	loggerFactory.AddConsole();

	if (env.IsDevelopment())
	{
		app.UseDeveloperExceptionPage();
	}

	app.UseOwin(x => x.UseNancy());
}

If you have issues with the line “UseOwin” then you will need to install the Owin package.

Install-Package Microsoft.AspNetCore.Owin

Instead of creating a class that inherits from “Controller” as you normally would in Web API, create a class that inherits from “NancyModule”. All your work is then actually done in the constructor, which may seem a bit weird at first, but when you are building a tiny API it actually becomes nice to keep things to a minimum. The NancyModule we will use for this tutorial looks a bit like this:

public class HomeModule : NancyModule
{
	public HomeModule()
	{
		Get("/", args => "Hello World");

		Get("/SayHello", args => $"Hello {this.Request.Query["name"]}");

		Get("/SayHello2/{name}", args => $"Hello {args.name}");
	}
}

It should be pretty self-explanatory. But let’s go through each one.

  1. When I go to “/” I will see the text “Hello World”.
  2. When I go to “/SayHello?name=wade” I will see “Hello wade”
  3. When I go to “/SayHello2/wade” I will see “Hello wade”

As you can probably see, Nancy is extremely good for simple API’s such as lookup type queries.

Model Binding

While you can stick with using dynamic objects, model binding is always a nice addition. Below is an example of how to model bind on a POST request.

public class HomeModule : NancyModule
{
	public HomeModule()
	{
		Post("/login", args =>
		{
			var loginModel = this.Bind<Models.LoginModel>();
			return $"Logged In {loginModel.Username}";
		});
	}
}

Using Postman I can send a request to /login as a form encoded request, and I can see I get the correct response back :

Nancy also recognizes different body types. If you send the request as JSON (with the appropriate “Content-Type: application/json” header), Nancy will still be able to bind the model.
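For completeness, the LoginModel referenced by Bind&lt;T&gt;() above isn’t shown in the snippet; it is just a plain class whose property names match the incoming form fields (or JSON properties), something like:

```csharp
namespace Models
{
	// Property names must match the form field (or JSON property) names
	// for Nancy's Bind<T>() to populate them.
	public class LoginModel
	{
		public string Username { get; set; }
		public string Password { get; set; }
	}
}
```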

Before and After Hooks

Before and after hooks are sort of like MVC filters, whereby you can intercept the request before and after your Nancy module has run. Note that this is still part of the .net core middleware pipeline, so you can still add custom .net core middleware to your application alongside Nancy hooks.

An important thing with hooks is that they are per module. They live within the module and are only run on requests routed to that module, any requests that land on other modules don’t run the before hook. The hook itself tends to point to a method elsewhere, so in the end it works out a bit like a controller attribute.

There are two ways a hook can handle a request. Here’s one way :

public HomeModule()
{
	Before += context =>
	{
		//Do something with the context here. 
		return null;
	};

	Get("/", args => "Hello World!");
}

If a hook returns null as above, code execution is passed down to the matching route. For example, if you are checking the existence of a particular header (API key or the like), and it’s there, then returning null simply allows execution to continue. Compare this to the following :

Before += context =>
{
	if (!context.Request.Headers.Keys.Contains("x-api-key"))
	{
		return Response.AsText("Request Denied");
	}
	else
	{
		return null;
	}
};

In this example, if our header collection doesn’t contain our API key (Weak security I know!), we return a response of “Request Denied” and execution ends right there. If the header is there, we continue execution.

View Engine

While Nancy is predominantly used for lightweight APIs, there is an inbuilt view engine, and you can even get an add-on that allows for Razor-like syntax; in fact it’s pretty much identical to Razor. The actual package can be found here. Unfortunately, at this time the view engine has a dependency on the full .net framework, so it cannot be used in a web application targeting .net core.

Static Content

Nancy by default won’t serve static content. It does have its own way of dealing with static content and various conventions, but I actually prefer to stick with the standard ASP.net core middleware for static files. This way, requests for static files short-circuit Nancy completely and never go down the Nancy pipeline. For that, you can install the following NuGet package:

Install-Package Microsoft.AspNetCore.StaticFiles

In the Startup.cs class in the Configure method, add a call to UseStaticFiles like so :

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	app.UseStaticFiles();
	app.UseOwin(x => x.UseNancy());
}

Place all your static content in the wwwroot (CSS, Images, Javascript etc), and you will now be able to have these files served outside the Nancy pipeline.

Everything Else

Nancy has its own documentation here that you can take a read through. Some of it is pretty verbose, and you also have to remember that not all of it is applicable to .net core. As with the static content section above, there may be an existing .net core solution that works better out of the box or is easier to configure. As always, drop a comment below to show off your new project built in Nancy!


You’ve just installed Visual Studio 2017 (Or the very latest .net core SDK), and now when you try to compile an old .net core project you had laying around, you are getting an error similar to the following :

.vs\restore.dg(1,1): error MSB4025: The project file could not be loaded. Data at the root level is invalid. Line 1, position 1.

The issue is actually very similar to a previous error you may have encountered when you installed Visual Studio 2017 RC over the top of Visual Studio 2015. You can read about that issue here.

The fix is very similar. First, go to “C:\Program Files\dotnet\sdk” on your machine. You should see a list of SDK versions. The version you are likely looking for is “1.0.0-preview2-003131” if you were previously opening this project in Visual Studio 2015. If you have only two versions, then the version you are looking for is the one that is *not* labelled 1.0.0. If you have only 1.0.0, then you probably got this project from someone else, so you will have to liaise with them about which version of the SDK you should be installing. Install that SDK and then come back to read on!

In the root of your solution for your project, create a file named “global.json”. Inside that file, place the following contents :

{
  "sdk": { "version": "1.0.0-preview2-003131" }
}

Where the version is the SDK version from our previous step.

Ensure Visual Studio is closed and now open a command prompt in your solution directory of your project. Run the command “dotnet restore”. You should see a few packages whiz past. After this open your solution again (In Visual Studio 2015), and you should now be able to build successfully!


Uploading files in ASP.net core is largely the same as standard full framework MVC, with the large exception being how you can now stream large files. We will go over both methods of uploading a file in ASP.net core.

Model Binding IFormFile (Small Files)

When uploading a file via this method, the important thing to note is that your files are uploaded in their entirety before execution hits your controller action. What this means is that your server buffers the file while you decide where to push it. With small files this is fine; with larger files you run into issues of scale. If you have many users all uploading large files, you are liable to run out of RAM (where the file is buffered before being moved to disk), or disk space itself.

For your HTML, it should look something like this :

<form method="post" enctype="multipart/form-data" action="/Upload">
    <div>
        <p>Upload one or more files using this form:</p>
        <input type="file" name="files" />
    </div>
    <div>
         <input type="submit" value="Upload" />
    </div>
</form>

The biggest thing to note is that the encoding type is set to “multipart/form-data”. If this is not set, you will go crazy trying to hunt down why your file is not showing up in your controller.

Your controller action is actually very simple. It will look something like :

[HttpPost]
public IActionResult Index(List<IFormFile> files)
{
	//Do something with the files here. 
	return Ok();
}

Note that the name of the parameter “files” should match the name on the input in HTML.

Other than that, you are done. There is nothing more you need to do.
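As an example of “doing something” with the files, here is a sketch that copies each upload out to a temp file. The destination path is illustrative only; in real code you would validate the file and choose a safe location and name yourself:

```csharp
[HttpPost]
public async Task<IActionResult> Index(List<IFormFile> files)
{
	foreach (var file in files)
	{
		if (file.Length > 0)
		{
			// Never trust file.FileName for the destination; generate a name.
			var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
			using (var stream = new FileStream(path, FileMode.Create))
			{
				await file.CopyToAsync(stream);
			}
		}
	}
	return Ok();
}
```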

Streaming Files (Large Files)

Instead of buffering the file in its entirety, you can stream the file upload. This does introduce challenges as you can no longer use the built in model binding of ASP.NET core. Various tutorials out there show you how to get things working with massive pieces of code, but I’ll give you a helper class that should alleviate most of the work. Most of this work is taken from Microsoft’s tutorial on file uploads here. Unfortunately it’s a bit all over the place with helper classes that you need to dig around the web for.

First, take this helper class and stick it in your project. This code is taken from a Microsoft project here.

public static class MultipartRequestHelper
{
    // Content-Type: multipart/form-data; boundary="----WebKitFormBoundarymx2fSWqWSd0OxQqq"
    // The spec says 70 characters is a reasonable limit.
    public static string GetBoundary(MediaTypeHeaderValue contentType, int lengthLimit)
    {
        //var boundary = Microsoft.Net.Http.Headers.HeaderUtilities.RemoveQuotes(contentType.Boundary);// .NET Core <2.0
        var boundary = Microsoft.Net.Http.Headers.HeaderUtilities.RemoveQuotes(contentType.Boundary).Value; //.NET Core 2.0
        if (string.IsNullOrWhiteSpace(boundary))
        {
            throw new InvalidDataException("Missing content-type boundary.");
        }

        if (boundary.Length > lengthLimit)
        {
            throw new InvalidDataException(
                $"Multipart boundary length limit {lengthLimit} exceeded.");
        }

        return boundary;
    }

    public static bool IsMultipartContentType(string contentType)
    {
        return !string.IsNullOrEmpty(contentType)
                && contentType.IndexOf("multipart/", StringComparison.OrdinalIgnoreCase) >= 0;
    }

    public static bool HasFormDataContentDisposition(ContentDispositionHeaderValue contentDisposition)
    {
        // Content-Disposition: form-data; name="key";
        return contentDisposition != null
                && contentDisposition.DispositionType.Equals("form-data")
                && string.IsNullOrEmpty(contentDisposition.FileName.Value) // For .NET Core <2.0 remove ".Value"
                && string.IsNullOrEmpty(contentDisposition.FileNameStar.Value); // For .NET Core <2.0 remove ".Value"
    }

    public static bool HasFileContentDisposition(ContentDispositionHeaderValue contentDisposition)
    {
        // Content-Disposition: form-data; name="myfile1"; filename="Misc 002.jpg"
        return contentDisposition != null
                && contentDisposition.DispositionType.Equals("form-data")
                && (!string.IsNullOrEmpty(contentDisposition.FileName.Value) // For .NET Core <2.0 remove ".Value"
                    || !string.IsNullOrEmpty(contentDisposition.FileNameStar.Value)); // For .NET Core <2.0 remove ".Value"
    }
}

Next, you will need this extension class, which again is taken from Microsoft code but moved into a helper so that it can be reused and your controllers stay slightly cleaner. It takes a target stream, which is where your file will be written. This is kept as a basic Stream because the stream can really come from anywhere: it could be a file on the local server, or a stream to AWS/Azure, etc.

public static class FileStreamingHelper
{
	private static readonly FormOptions _defaultFormOptions = new FormOptions();

	public static async Task<FormValueProvider> StreamFile(this HttpRequest request, Stream targetStream)
	{
		if (!MultipartRequestHelper.IsMultipartContentType(request.ContentType))
		{
			throw new InvalidDataException($"Expected a multipart request, but got {request.ContentType}");
		}

		// Used to accumulate all the form url encoded key value pairs in the 
		// request.
		var formAccumulator = new KeyValueAccumulator();
		string targetFilePath = null;

		var boundary = MultipartRequestHelper.GetBoundary(
			MediaTypeHeaderValue.Parse(request.ContentType),
			_defaultFormOptions.MultipartBoundaryLengthLimit);
		var reader = new MultipartReader(boundary, request.Body);

		var section = await reader.ReadNextSectionAsync();
		while (section != null)
		{
			ContentDispositionHeaderValue contentDisposition;
			var hasContentDispositionHeader = ContentDispositionHeaderValue.TryParse(section.ContentDisposition, out contentDisposition);

			if (hasContentDispositionHeader)
			{
				if (MultipartRequestHelper.HasFileContentDisposition(contentDisposition))
				{
					await section.Body.CopyToAsync(targetStream);
				}
				else if (MultipartRequestHelper.HasFormDataContentDisposition(contentDisposition))
				{
					// Content-Disposition: form-data; name="key"
					//
					// value

					// Do not limit the key name length here because the 
					// multipart headers length limit is already in effect.
					var key = HeaderUtilities.RemoveQuotes(contentDisposition.Name);
					var encoding = GetEncoding(section);
					using (var streamReader = new StreamReader(
						section.Body,
						encoding,
						detectEncodingFromByteOrderMarks: true,
						bufferSize: 1024,
						leaveOpen: true))
					{
						// The value length limit is enforced by MultipartBodyLengthLimit
						var value = await streamReader.ReadToEndAsync();
						if (String.Equals(value, "undefined", StringComparison.OrdinalIgnoreCase))
						{
							value = String.Empty;
						}
						formAccumulator.Append(key.Value, value); // For .NET Core <2.0 remove ".Value" from key

						if (formAccumulator.ValueCount > _defaultFormOptions.ValueCountLimit)
						{
							throw new InvalidDataException($"Form key count limit {_defaultFormOptions.ValueCountLimit} exceeded.");
						}
					}
				}
			}

			// Drains any remaining section body that has not been consumed and
			// reads the headers for the next section.
			section = await reader.ReadNextSectionAsync();
		}

		// Bind form data to a model
		var formValueProvider = new FormValueProvider(
			BindingSource.Form,
			new FormCollection(formAccumulator.GetResults()),
			CultureInfo.CurrentCulture);

		return formValueProvider;
	}

	private static Encoding GetEncoding(MultipartSection section)
	{
		MediaTypeHeaderValue mediaType;
		var hasMediaTypeHeader = MediaTypeHeaderValue.TryParse(section.ContentType, out mediaType);
		// UTF-7 is insecure and should not be honored. UTF-8 will succeed in 
		// most cases.
		if (!hasMediaTypeHeader || Encoding.UTF7.Equals(mediaType.Encoding))
		{
			return Encoding.UTF8;
		}
		return mediaType.Encoding;
	}
}

Now you need to create a custom action attribute that completely disables form value model binding. This is important, because otherwise MVC will still try to load the contents of the request regardless. The attribute looks like :

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class DisableFormValueModelBindingAttribute : Attribute, IResourceFilter
{
    public void OnResourceExecuting(ResourceExecutingContext context)
    {
        var formValueProviderFactory = context.ValueProviderFactories
            .OfType<FormValueProviderFactory>()
            .FirstOrDefault();
        if (formValueProviderFactory != null)
        {
            context.ValueProviderFactories.Remove(formValueProviderFactory);
        }

        var jqueryFormValueProviderFactory = context.ValueProviderFactories
            .OfType<JQueryFormValueProviderFactory>()
            .FirstOrDefault();
        if (jqueryFormValueProviderFactory != null)
        {
            context.ValueProviderFactories.Remove(jqueryFormValueProviderFactory);
        }
    }

    public void OnResourceExecuted(ResourceExecutedContext context)
    {
    }
}

Next, your controller should look similar to the following. We create a stream and pass it into the StreamFile extension method. The output is our FormValueProvider, which we use to bind our model manually after the file streaming. Remember to put your custom attribute on the action to stop the request being bound automatically.

[HttpPost]
[DisableFormValueModelBinding]
public async Task<IActionResult> Index()
{
	FormValueProvider formModel;
	using (var stream = System.IO.File.Create("c:\\temp\\myfile.temp"))
	{
		formModel = await Request.StreamFile(stream);
	}

	var viewModel = new MyViewModel();

	var bindingSuccessful = await TryUpdateModelAsync(viewModel, prefix: "",
	   valueProvider: formModel);

	if (!bindingSuccessful)
	{
		if (!ModelState.IsValid)
		{
			return BadRequest(ModelState);
		}
	}

	return Ok(viewModel);
}

In my particular example I have created a stream that writes to a temp file, but obviously you can do anything you want with it. The model I am using is the following :

public class MyViewModel
{
	public string Username { get; set; }
}

Again, nothing special. It’s just showing you how to bind a view model even when you are streaming files.

My HTML form is pretty standard :

<form method="post" enctype="multipart/form-data" action="/Upload">
    <div>
        <p>Upload one or more files using this form:</p>
        <input type="file" name="files" multiple />
    </div>
    <div>
        <p>Your Username</p>
        <input type="text" name="username" />
    </div>
    <div>
         <input type="submit" value="Upload" />
    </div>
</form>

And that’s all there is to it. Now when I upload a file, it is streamed straight to my target stream, which could be an upload stream to AWS/Azure or any other cloud provider, and I still managed to get my view model out too. The biggest downside is of course that the view model details are not available until the file has been streamed. What this means is that if there is something in the view model that would normally determine where the file goes, or its name etc., this is not available to you without a bit of rejigging (but it’s definitely doable).
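
As a rough sketch of that rejigging, assuming you’re happy to buffer to a temporary file first, you could stream to a temp path, bind the model, and then move the file once you know the details (the paths and the naming scheme here are purely illustrative) :

```csharp
[HttpPost]
[DisableFormValueModelBinding]
public async Task<IActionResult> Index()
{
	// Stream to a throwaway temp file first, since we don't know the
	// final name until the form values have been read.
	var tempPath = Path.GetTempFileName();

	FormValueProvider formModel;
	using (var stream = System.IO.File.Create(tempPath))
	{
		formModel = await Request.StreamFile(stream);
	}

	var viewModel = new MyViewModel();
	await TryUpdateModelAsync(viewModel, prefix: "", valueProvider: formModel);

	// Now the view model is available, move the file to its "real" home.
	var finalPath = Path.Combine("c:\\temp", viewModel.Username + ".upload");
	System.IO.File.Move(tempPath, finalPath);

	return Ok(viewModel);
}
```

The cost is an extra write to local disk, which is usually acceptable for this scenario.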

 

ENJOY THIS POST?
Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this week’s blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

Middleware is the new “pipeline” for requests in ASP.NET Core. Each piece of middleware can process part or all of the request, and then either choose to return the result or pass it down to the next piece of middleware. In the full ASP.NET Framework you were able to specify “HTTP Modules” that acted somewhat like a pipeline, but it was hard to really see how the pieces fit together at times.

Anywhere you would normally write an HTTP Module in the full ASP.NET Framework is where you should probably now be using middleware. In fact, in most places you would normally use MVC filters, you will likely find it easier or more convenient to use middleware.

This diagram from Microsoft does a better job than I could of showing how the pipeline works. Do notice however that you can do work both before and after the pass down to the next middleware. This is important if you wish to affect results as they are going out rather than as they are coming in.

Basic Configuration

The easiest way to get started with middleware is in the Configure method of your startup.cs file. In here is where you “chain” your pipeline together. It may end up looking something like the following :

public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles();  
    app.UseMvcWithDefaultRoute();     
}

Now an important thing to remember is that ordering matters. In this pipeline the static files middleware runs first. A piece of middleware can either choose to pass the request on (i.e. not process the response in its entirety), or it can push a response to the client and not call the next middleware in the chain. That last part is important, because it will often trip you up if your ordering is not correct (e.g. if you want to authenticate/authorize someone before hitting your MVC action).
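
For example, if your site uses authentication, you would typically register that middleware before MVC so requests are authenticated before they ever reach an action. A sketch only, assuming ASP.NET Core 2.0’s UseAuthentication (on 1.x you would call the specific scheme’s middleware, such as cookie authentication, instead) :

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles();         // May short circuit for static content
    app.UseAuthentication();      // Must run before MVC so the user is known
    app.UseMvcWithDefaultRoute(); // Handles whatever is left
}
```

Swap the first two lines around and static files would still work, but flip authentication below MVC and your actions would never see an authenticated user.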

App.Use

App.Use is going to be the most common pipeline building block you will come across. It allows you to do some work on the request or response and then pass on to the next middleware in the pipeline, or you can short circuit and return a result without passing to the next handler.

To illustrate, consider the following

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	app.Use((context, next) =>
	{
		//Do some work here
		context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
		//Pass the request on down to the next pipeline (Which is the MVC middleware)
		return next();
	});

	app.Use(async (context, next) =>
	{
		await context.Response.WriteAsync("Hello World!");
	});

	app.Use((context, next) =>
	{
		context.Response.Headers.Add("X-Xss-Protection", "1");
		return next();
	});
}

In this example, the first handler adds a response header of X-Content-Type-Options and then passes the request to the next handler. The next handler writes “Hello World!” to the response and, because it never calls next(), the third handler is never reached and execution finishes there. In this way we can short circuit the pipeline if some criteria hasn’t been met, for example if an authorization key is not present.
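
As a quick sketch of that short circuit idea, here’s a hypothetical check for an API key header that stops the pipeline with a 401 when the key is missing (the header name is invented for the example) :

```csharp
app.Use(async (context, next) =>
{
	if (!context.Request.Headers.ContainsKey("X-Api-Key"))
	{
		// Short circuit : no key means no further middleware runs.
		context.Response.StatusCode = 401;
		await context.Response.WriteAsync("API key missing");
		return;
	}

	await next();
});
```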

App.Run

You may see App.Run appear around the place; it’s like App.Use’s little brother. App.Run is an “end of the line” middleware. It can handle generating a response, but it doesn’t have the ability to pass the request down the chain. For this reason you will see App.Run middleware at the end of the pipeline and nowhere else.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	app.Use((context, next) =>
	{
		//Do some work here
		context.Response.Headers.Add("X-Content-Type-Options", "nosniff");
		//Pass the request on down to the next pipeline (Which is the MVC middleware)
		return next();
	});

	app.Use((context, next) =>
	{
		context.Response.Headers.Add("X-Xss-Protection", "1");
		return next();
	});

	app.Run(async (context) =>
	{
		await context.Response.WriteAsync("Hello World!");
	});
}

App.Map

App.Map is used when you want to build a mini pipeline only for a certain URL. Note that the URL given is used as a “starts with”. So commonly you might see this in middleware for locking down an admin area. The usage is pretty simple :

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	app.Map("/helloworld", mapApp =>
	{
		mapApp.Run(async context =>
		{
			await context.Response.WriteAsync("Hello World!");
		});
	});
}

So in this example when someone goes to /helloworld, they will be given the Hello World! message back.
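
Because the match is a “starts with”, a request to /helloworld/anything lands in the same branch. Inside the branch the matched segment is moved onto PathBase, which you can see with a small sketch like this :

```csharp
app.Map("/helloworld", mapApp =>
{
	mapApp.Run(async context =>
	{
		// For a request to /helloworld/foo, PathBase is /helloworld
		// and Path is /foo inside this branch.
		await context.Response.WriteAsync(
			$"PathBase={context.Request.PathBase}, Path={context.Request.Path}");
	});
});
```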

App.MapWhen

This is similar to Map, but allows for much more complex conditionals. This is great if you are checking for a cookie or a particular query string, or need more powerful Regex matching on URLs. In the example below, we are checking for a query string of “helloworld”, and if it’s found we return the response of Hello World!

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	app.MapWhen(context => context.Request.Query.ContainsKey("helloworld"), mapApp =>
	{
		mapApp.Run(async context =>
		{
			await context.Response.WriteAsync("Hello World!");
		});
	});
}

Building A Middleware Class

And after all that, we’re going to toss it in the bin and create a middleware class! Middleware classes use quite a different syntax to all of the above, and really they just group everything together in a simple, reusable pattern.

Let’s say I want to build a class that will add common security headers to my site. If I right click inside my .NET Core web project and select “Add Item”, there is actually an option to add a middleware class template.

If you can’t find this, don’t fear, because there is absolutely nothing special about the template. It just provides the basic structure of how your class should look; you can copy and paste it from here into a plain old .cs file and it will work fine. Our security middleware looks something like the following :

public class SecurityMiddleware
{
	private readonly RequestDelegate _next;

	public SecurityMiddleware(RequestDelegate next)
	{
		_next = next;
	}

	public Task Invoke(HttpContext httpContext)
	{
		httpContext.Response.Headers.Add("X-Xss-Protection", "1");
		httpContext.Response.Headers.Add("X-Frame-Options", "SAMEORIGIN");
		httpContext.Response.Headers.Add("X-Content-Type-Options", "nosniff");
		return _next(httpContext);
	}
}

public static class SecurityMiddlewareExtensions
{
	public static IApplicationBuilder UseSecurityMiddleware(this IApplicationBuilder builder)
	{
		return builder.UseMiddleware<SecurityMiddleware>();
	}
}

Very simple. In our Invoke method we do all the work we want to do and then pass the request on to the next handler. Again, we can short circuit this process by returning a completed Task instead of calling _next.
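
For example, a hypothetical variation of Invoke that blocks requests without an API key could return early rather than calling _next (the header name is made up for illustration) :

```csharp
public async Task Invoke(HttpContext httpContext)
{
	if (!httpContext.Request.Headers.ContainsKey("X-Api-Key"))
	{
		// Short circuit : end the pipeline here with a 401.
		httpContext.Response.StatusCode = 401;
		return;
	}

	await _next(httpContext);
}
```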

Another thing to note is that the middleware is instantiated using the service container. That means if you need to access a database or call in settings from an external source, you can simply add the dependency to the constructor.

Important : middleware is constructed once, at startup. If you require scoped dependencies, or anything other than singletons, do not inject them into the constructor. Instead, add them as parameters to the Invoke method and .NET Core will resolve them per request.
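
So assuming a scoped service of your own (IMyRepository and its method here are just placeholders for the example), the shape would be :

```csharp
public class AuditMiddleware
{
	private readonly RequestDelegate _next;

	// Singletons are fine as constructor dependencies...
	public AuditMiddleware(RequestDelegate next)
	{
		_next = next;
	}

	// ...but scoped services should be parameters on Invoke,
	// where they are resolved fresh for each request.
	public async Task Invoke(HttpContext context, IMyRepository repository)
	{
		await repository.LogRequestAsync(context.Request.Path);
		await _next(context);
	}
}
```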

The Extensions part is optional, but it does allow you to write code like this :

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
	app.UseMiddleware<SecurityMiddleware>(); //If I didn't have the extension method
	app.UseSecurityMiddleware(); //Nifty encapsulation with the extension
}


Straight into the important links.

Download here
Release Notes here

There are a tonne of blogs out there talking about the release, so let’s just jump straight into the 5 features that I think developers will be most excited about. There is a tonne of new stuff coming out, but I think these are the features that will impact developers lives on an almost daily basis.

Inbuilt Unit Testing On The Fly

There have been add-ons for years that run unit tests as you type and let you know if your tests are falling apart; now Microsoft has added that support to Visual Studio natively! And they have outdone themselves by supporting not only MSTest, but XUnit and NUnit as well.

Run To Click

A new feature in the debugger : when stopped at a breakpoint in your code, you can now continue execution to the point where your mouse clicks, rather than having to place a breakpoint every few lines and step through.

Redgate SQL Search

Built into every version of Visual Studio (Including the free Community version), is Redgate’s SQL Search product. How many times have you wanted to know whether a column is referenced anywhere, and you are faced with the daunting task of searching through a massive lump of Stored Procedures? Redgate SQL search takes care of this by allowing you to do a text search across every database object. It really is an awesome product and it’s great that it’s now part of Visual Studio.

Intellisense Filter

IntelliSense now has a tray that allows you to filter the member list by type. This means that when you are working with an unfamiliar library (or a new code base) and you know that “there should be a method that does XYZ”, you can now filter to only methods and ignore all properties. This feature alone makes upgrading to Visual Studio 2017 worth it. Note that the feature is not enabled by default. To turn it on, go to Tools > Options > Text Editor > [C# / Basic] > IntelliSense and check the options for filtering and highlighting.

 

Use Of Git.exe

Up until Visual Studio 2017, the Git implementation in VS was built on the libgit2 library. For most people this was fine, but there were a few features available on the command line that weren’t available inside Visual Studio, most notably the use of SSH keys. While most people who already use Visual Studio’s Git tools are probably happy with the existing functionality, it’s always good to have the tools at parity with what’s already out there.

What Else?

Something else you are excited about? Drop a comment below and let us know!


In a previous post, we talked about how to use a Redis cache in .NET Core. In most large scale scenarios Redis is going to be your go-to, but for tiny sites that have a single web instance, or for sites that really only need a local cache, in-memory caching is much easier to get set up and obviously does away with wrangling a Redis server.

Interestingly, .net Core currently offers two ways to implement a local in memory cache. We’ll take a look at both.

IMemoryCache

The first option is to use what is simply known in .net core as IMemoryCache. It’s similar to what you may have used in standard ASP.net in terms of storing an object in memory by a key.

First open up your startup.cs. In your ConfigureServices method you need to add a call to “AddMemoryCache” like so :

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc();
	services.AddMemoryCache();
}

In the controller or class where you wish to use the memory cache, add a dependency on IMemoryCache into the constructor. The two main methods you will likely be interested in are “TryGetValue” and “Set”. Both should be rather self-explanatory in the following code :

public class HomeController : Controller
{
	private readonly IMemoryCache _memoryCache;

	public HomeController(IMemoryCache memoryCache)
	{
		_memoryCache = memoryCache;
	}

	[HttpGet]
	public string Get()
	{
		var cacheKey = "TheTime";
		DateTime existingTime;
		if (_memoryCache.TryGetValue(cacheKey, out existingTime))
		{
			return "Fetched from cache : " + existingTime.ToString();
		}
		else
		{
			existingTime = DateTime.UtcNow;
			_memoryCache.Set(cacheKey, existingTime);
			return "Added to cache : " + existingTime;
		}
	}
}
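
In a real application you will usually want entries to expire rather than live forever. MemoryCacheEntryOptions lets you set this when calling Set, something like :

```csharp
_memoryCache.Set(cacheKey, existingTime,
	new MemoryCacheEntryOptions()
		.SetAbsoluteExpiration(TimeSpan.FromMinutes(5))  // Gone after 5 minutes regardless
		.SetSlidingExpiration(TimeSpan.FromMinutes(1))); // Or after 1 minute without a read
```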

That’s the basics. Now for a couple of nice things that this implementation of a memory cache has that won’t be available in the next implementation I will show you.

PostEvictionCallback
An interesting feature is the PostEvictionCallback delegate. This allows you to register an action to be called every time something “expires”. To use it, it will look something like the following :

_memoryCache.Set(cacheKey, existingTime, 
	new MemoryCacheEntryOptions()
		.RegisterPostEvictionCallback((key, value, reason, state) => {  /*Do Something Here */ })
 );

Now every time a cache entry expires, you will be notified about it. Usually there are very limited reasons why you would want to do this, but the option is there should you want it!

CancellationToken
CancellationTokens are also supported. Again, it’s a feature that probably won’t be used too often, but it can be used to invalidate a whole set of cache entries in one go. CancellationTokens are notoriously difficult to debug and get going, but if you have the need, it’s there!

The code would look something similar to this :

var cts = new CancellationTokenSource();
_memoryCache.Set(cacheKey, existingTime, 
	new MemoryCacheEntryOptions()
		.AddExpirationToken(new CancellationChangeToken(cts.Token))
 );
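
To actually invalidate everything registered against that token source, you simply cancel it :

```csharp
// Evicts every cache entry whose expiration token was created from cts.Token.
cts.Cancel();
```

This is handy when a group of related entries (say, everything for one tenant) needs to be flushed together.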

Distributed Memory Cache

A “distributed” memory cache is probably a bit of an oxymoron: it’s obviously not distributed if it’s sitting local to a machine. But the big advantage to going down this road is that should you intend to switch to using Redis in the future, the interfaces between the Redis distributed cache and the in-memory one are exactly the same; it’s just a single line of difference in your startup. This can also be helpful if locally you just want to use your machine’s memory and not have Redis set up.

To get going, in your startup.cs ConfigureServices method, add a call to AddDistributedMemoryCache like so :

public void ConfigureServices(IServiceCollection services)
{
	services.AddMvc();
	services.AddDistributedMemoryCache();
}

Your controller or class where you inject it should look a bit like the following. Again, this code is actually taken directly from our Redis cache tutorial; the implementation is exactly the same for the in-memory version, and it’s only the call in ConfigureServices in your startup.cs that changes.

public class DistributedController : Controller
{
	private readonly IDistributedCache _distributedCache;

	public DistributedController(IDistributedCache distributedCache)
	{
		_distributedCache = distributedCache;
	}

	[HttpGet]
	public async Task<string> Get()
	{
		var cacheKey = "TheTime";
		var existingTime = _distributedCache.GetString(cacheKey);
		if (!string.IsNullOrEmpty(existingTime))
		{
			return "Fetched from cache : " + existingTime;
		}
		else
		{
			existingTime = DateTime.UtcNow.ToString();
			_distributedCache.SetString(cacheKey, existingTime);
			return "Added to cache : " + existingTime;
		}
	}
}

Now, looking at what we said about IMemoryCache above, PostEvictionCallback and CancellationTokens cannot be used here. This makes sense, because this interface is for the most part intended for distributed environments, where any machine in the environment (or the cache itself) could expire/remove a cache entry.

Another very important difference is that while IMemoryCache accepts C# “objects” into the cache, a distributed cache does not. A distributed cache can only accept byte arrays or strings. For the most part this isn’t going to be a big deal. If you need to store objects, run them through a JSON serializer first and store them as a string; when you pull them out, deserialize them back into your object.
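
A rough sketch of that, assuming Newtonsoft.Json (the common serializer choice at the time of writing) and a hypothetical Person class :

```csharp
var person = new Person { Name = "Jane" };

// Serialize on the way in...
_distributedCache.SetString("person", JsonConvert.SerializeObject(person));

// ...and deserialize on the way out.
var json = _distributedCache.GetString("person");
var cachedPerson = json == null
	? null
	: JsonConvert.DeserializeObject<Person>(json);
```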

Which Should You Pick?

You are likely going to write an abstraction layer on top of either caching interface, meaning that your controllers/services won’t see much of it. For what to use inside that abstraction, I tend to go with the distributed cache for no other reason than that should I ever want to move to Redis, the option is already there.
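
That abstraction layer doesn’t need to be anything fancy. A minimal sketch (the interface and names here are just an example) might look like :

```csharp
public interface ICacheService
{
	T Get<T>(string key) where T : class;
	void Set<T>(string key, T value) where T : class;
}

public class DistributedCacheService : ICacheService
{
	private readonly IDistributedCache _cache;

	public DistributedCacheService(IDistributedCache cache)
	{
		_cache = cache;
	}

	public T Get<T>(string key) where T : class
	{
		// Distributed caches only store strings/bytes, so deserialize on read.
		var json = _cache.GetString(key);
		return json == null ? null : JsonConvert.DeserializeObject<T>(json);
	}

	public void Set<T>(string key, T value) where T : class
	{
		_cache.SetString(key, JsonConvert.SerializeObject(value));
	}
}
```

Because everything goes through ICacheService, swapping AddDistributedMemoryCache for Redis later is a startup.cs change only.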


Are you getting the error as seen in the title when creating a new project in Visual Studio 2015?

The following error occurred attempting to run the project model server process (1.0.0-rc4-004771).

Unable to start the process. No executable found matching command “dotnet-projectmodel-server”

And in image form :

Chances are this scenario has played out in the past day or so.

  • You have previously used .net core tooling using project.json in Visual Studio 2015
  • You installed the latest .net core tooling (RC4 or higher) that uses csproj files
  • OR you installed Visual Studio 2017
  • You are now using Visual Studio 2015 to create a new project

There are two ways to “fix” this.

The “I Don’t Really Care” Method

If the difference between .csproj and project.json tooling is beyond you and you really don’t care, just use Visual Studio 2017 from now on to create .NET Core projects. Yes, that seems very silly, but it is the much easier approach and for many it’s not going to make too much of a difference. You can still open existing project.json .NET Core projects in Visual Studio 2015 (provided they have a global.json; see below), just not create new ones.

The “I Want To Actually Fix This” Method

So the reason this error occurs is that Visual Studio 2015 is not able to use the very latest .NET Core tooling (currently RC4). When you create a new project, or when you open an existing project without a global.json file specifying an older tooling version, .NET Core tries to use the latest tooling SDK installed on your machine.

I highly recommend reading this article on running two versions of the .net core tooling SDK side by side to get a better idea of how this works. It will allow you to fix these issues in the future. 

The fix to actually get your project to open is to create a global.json file in the root of your solution with the following contents :

{
	"sdk": {
		"version": "1.0.0-preview2-003131"
	}
}

Where the SDK version is the version you were previously running. Again, if you are unsure about this, please read this article on running two versions of the .NET Core SDK tooling. It’s actually super surprising how many issues can be resolved by knowing how the SDK versioning works.
