I’ve recently been playing around with all of the new features packaged into C# 7.2. One such feature that piqued my interest because of its simplicity was the “in” keyword. It’s one of those things that you can get away with never using in your day-to-day work, but it makes complete sense when looking at language design from a high level.

In simple terms, the in keyword specifies that you are passing a parameter by reference, but also that you will not modify the value inside the method. For example:
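
Here’s a rough sketch of the idea (the method name and parameters are just for illustration):

```csharp
static int Add(in int value1, in int value2)
{
    // value1 and value2 are passed by reference, but are readonly inside this method.
    return value1 + value2;
}
```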

In this example we pass in two parameters essentially by reference, but we also specify that they will not be modified within the method. If we try to do something like the following:
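
For instance, reusing the illustrative Add method from above and trying to reassign one of the parameters:

```csharp
static int Add(in int value1, in int value2)
{
    value1 = 5; // Not allowed - in parameters are readonly inside the method.
    return value1 + value2;
}
```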

We get a compile time error:

Accessing C# 7.2 Features

I think I should just quickly note how to actually access C# 7.2 features as they may not be immediately available on your machine. The first step is to ensure that Visual Studio is up to date on your machine. If you are able to compile C# 7.2 code, but intellisense is acting up and not recognizing new features, 99% of the time you just need to update Visual Studio.

Once updated, inside the csproj of your project, add in the following :
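
A LangVersion property inside a PropertyGroup does the trick (you can also use “latest” if you always want the newest language version):

```xml
<PropertyGroup>
  <LangVersion>7.2</LangVersion>
</PropertyGroup>
```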

And you are done!

Performance

When passing value types into a method, normally this would be copied to a new memory location and you would have a clone of the value passed into a method. When using the in  keyword, you will be passing the same reference into a method instead of having to create a copy. While the performance benefit may be small in simple business applications, in a tight loop this could easily add up.

But just how much performance gain are we going to see? I could take some really smart people’s word for it, or I could do a little bit of benchmarking. I’m going to use BenchmarkDotNet (Guide here) to compare performance when passing a value type into a method normally, or as an in  parameter.

The benchmarking code is :
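
Roughly this shape (the names are illustrative, not the exact code I ran):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class InBenchmarks
{
    private readonly int _value = 12345;

    [Benchmark]
    public int PassByValue() => Calculate(_value);

    [Benchmark]
    public int PassByIn() => CalculateIn(_value);

    private int Calculate(int value) => value * 2;

    private int CalculateIn(in int value) => value * 2;
}

public class Program
{
    public static void Main(string[] args) => BenchmarkRunner.Run<InBenchmarks>();
}
```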

And the results :

We can definitely see the speed difference here. This makes sense because really all we are doing is passing in a variable and doing nothing else. It has to be said that I can’t see the in  keyword being used to optimize code in everyday business applications, but there is definitely something there for time critical applications such as large scale number crunchers.

Explicit In Design

While the performance benefits are OK, something else that comes to mind is that when you use in, you are being explicit in your design. By that I mean you are laying out exactly how you intend the application to function: “If I pass a variable into this method, I don’t expect the variable to change”. I can see this being a bigger benefit to large business applications than any small performance gain.

A way to look at it is how we use things like private  and readonly . Our code will generally work if we just make everything public  and move on, but it’s not seen as “good” programming habits. We use things like readonly  to explicitly say how we expect things to run (We don’t expect this variable to be modified outside of the constructor etc). And I can definitely see in  being used in a similar sort of way.

Comparisons To “ref” (and “out”)

A comparison could be made to the ref keyword in C# (and possibly, to a lesser extent, the out keyword). The main differences are:

in  – Passes a variable in to a method by reference. Cannot be set inside the method.
ref  – Passes a variable into a method by reference. Can be set/changed inside the method.
out  – Only used for output from a method. Can (and must) be set inside the method.

So it certainly looks like the ref keyword is almost the same as in, except that it allows a variable to change its value. But to check that, let’s run our performance test from earlier, but this time add in a ref scenario.
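
One extra benchmark method along these lines (added to the illustrative class above) covers the ref case:

```csharp
[Benchmark]
public int PassByRef()
{
    var localValue = _value; // ref requires a writable variable, so copy the readonly field first
    return CalculateRef(ref localValue);
}

private int CalculateRef(ref int value) => value * 2;
```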

So it’s close when it comes to performance benchmarking. However again, in  is still more explicit than ref because it tells the developer that while it’s allowing a variable to be passed in by reference, it’s not going to change the value at all.

Important Performance Notes For In

While writing the performance tests for this post, I kept running into instances where using in  gave absolutely no performance benefit whatsoever compared to passing by value. I was pulling my hair out trying to understand exactly what was going on.

It wasn’t until I took a step back and thought about how in could work under the hood, coupled with a StackOverflow question or two, that I finally cracked it. Consider the following code:
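
A sketch of the kind of code that trips you up (the struct and method names are mine):

```csharp
using System;

public struct MyStruct
{
    public int Value;

    public void UpdateValue(int value) => Value = value;
}

public class Program
{
    public static void Main(string[] args)
    {
        var myStruct = new MyStruct();
        myStruct.UpdateValue(1);

        UpdateToFive(in myStruct);

        Console.WriteLine(myStruct.Value);
    }

    private static void UpdateToFive(in MyStruct myStruct)
    {
        // Compiles fine, but runs against a defensive copy of the struct.
        myStruct.UpdateValue(5);
    }
}
```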

What will the output be? If you guessed 1, you would be correct! So what’s going on here? We definitely told our struct to set its value to 1, then we passed it by reference via the in keyword to another method, and that method told the struct to update its value to 5. Everything compiled and ran happily, so we should be good, right? Surely we should see the output as 5?

The problem is, C# has no way of knowing when it calls a method (or a getter) on a struct whether that will also modify the values/state of it. What it instead does is create what’s called a “defensive copy”. When you run a method/getter on a struct, it creates a clone of the struct that was passed in and runs the method instead on the clone. This means that the original copy stays exactly the same as it was passed in, and the caller can still count on the value it passed in not being modified.

Now where this creates a bit of a jam is that if we are cloning the struct (even as a defensive copy), then the performance gain is lost. We still get the design-time niceness of knowing that what we pass in won’t be modified, but when it comes to performance we may as well have passed it in by value. You’ll see in my tests I used plain old variables to avoid this issue. If you are not using structs at all and instead using plain value types, you avoid this issue altogether.

To me this could crop up as a bit of a problem in the future. A method may inadvertently call something that modifies the struct’s state, and now it’s running off “different” data than what the caller is expecting. I also question how this will work in multi-threaded scenarios. What if the caller goes away and modifies the struct, expecting the method to get the updated value, when in fact the method is working from a defensive clone? Plenty to ponder (and code out to test in the future).

Summary

So will there be a huge awakening of using the in keyword in C#? I’m not sure. When Expression-bodied Members (EBD) came along I didn’t think too much of them, but they are used in almost every piece of C# code these days. Unlike EBD though, the in keyword doesn’t save any typing or cruft, so it could just be resigned to one of those “best practices” lists. Most of all I’m interested in how in transforms over future versions of C# (because I really think it has to). What do you think?

This article is part of a series on setting up a private nuget server. See below for links to other articles in the series.

Part 1 : Intro & Server Setup
Part 2 : Developing Nuget Packages
Part 3 : Building & Pushing Packages To Your Nuget Feed


Packaging up your nuget package and pushing it to your feed is surprisingly simple. The actual push may vary depending on whether you are using a hosted service such as MyGet/VSTS or using your own server. If you’ve gone with a third party service then it may pay to read into documentation because there may be additional security requirements when adding to your feed. Let’s get started!

Packaging

If you have gone with building your library in .NET Standard (Which you should), then the entire packaging process comes down to a single command. Inside your project folder, run  dotnet pack and that’s it! The pack command will restore package dependencies, build your project, and package it into a .nupkg.

You should see something along the lines of the following :

While in the previous article in this series we talked about managing the version of your nuget package through a props file, you can actually override this version number on the command line. It looks a bit like this:
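
Something along these lines (the version number is just an example):

```
dotnet pack /p:PackageVersion=1.2.3
```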

I’m not a huge fan of this as it means the version will be managed by your build system which can sometimes be a bit hard to wrangle, but it’s definitely an option if you prefer it.

An issue I ran into early was that many tutorials talk about using nuget.exe from the command line to package things up. The dotnet pack command is actually supposed to run the same nuget command under the hood, but I continually got the following exception :

After a quick google it seemed like I wasn’t the only one with this issue. Switching to the dotnet pack command seemed to resolve it.

Publishing

Publishing might depend on whether you ended up going with a third party hosted nuget service or the official nuget server. A third party service might have its own way of publishing packages to a feed (all the way up to manually uploading them), but many will still allow the ability to do a “nuget push” command.

The push command looks like the following :
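
Roughly like this, with the API key and source URL swapped out for your own feed’s values:

```
dotnet nuget push MyPackage.1.0.0.nupkg -k <your-api-key> -s https://your-nuget-feed/
```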

Again, this command is supposed to map directly to the underlying nuget.exe command (and this time it actually worked for me), but it’s better to just use the dotnet command instead.

If you are using a build server like VSTS, Team City, or Jenkins, they likely have a step for publishing a nuget package that means you don’t have to worry about the format of the push command.

Working With VSTS

For my actual nuget server, I ended up going with Microsoft VSTS as the company I am working with already had their builds/releases and source control using this service. At first I was kinda apprehensive because other VSTS services I’ve used in the past have been pretty half-assed. So color me completely surprised when their package hosting service turned out to be super easy to use and had some really nifty features.

Once a feed has been created in VSTS, publishing to it is as easy as selecting it from a drop down list in your build.

After publishing packages, you can then set them to be prerelease or general release too. In a small dev shop this may seem overkill, but in large distributed teams being able to manage this release process in a more granular way is pretty nifty.

You can even unlist packages (so that they can’t be seen in the feed, but they aren’t completely deleted), see a complete history of publishes to the feed, and see how many times a package has been downloaded. Without sounding like a complete shill, Microsoft definitely got this add-on right.

Summary

So that’s it. From setting up the server (Or hosted service as it turned out), developing the packages (Including wrangling version numbers), and putting everything together with our build server, it’s been surprisingly easy to get a nuget server up and running. If you’ve followed along and setup your own server, or you’ve already got one up and running (Possibly on a third party service), drop a comment and let me know how it’s working out!

I recently had the opportunity to log a bug against Entity Framework Core on Github with the Microsoft team. Before I sent it through, I took a look at other issues listed against the project to see the best way to describe my problem in such a public forum. While some reports were really detailed and got right to the heart of the issue, others were really weak and were likely better off being StackOverflow questions. Sometimes the reports actually did sound like they could be bugs, but just had absolutely no detail in them to take any further. Sometimes the user just didn’t quite know how to articulate the issue, and instead of outlaying the problem via code (We are developers after all), they just gave a wall of text to try and describe the problem they were having.

With that in mind, I thought I would jot down the top 3 things I noticed these poor bug reports were lacking. While I used the EF Core project on Github for examples, I didn’t want to single out any particular report or person.

Debug It Yourself

A simple thing, but something that didn’t look like it was happening as much as it should. The amount of open source code that I use in day-to-day work is actually astounding to think about compared to just a few years ago. With that in mind, every time I start getting weird exceptions that I think could be a bug, I immediately go and check whether I can get my hands on the source code to have a look.

I think sometimes people have a fear that the code will be way over their heads and difficult to understand. But it can be as simple as searching an open source project for the exception message or error code you are getting.

Let’s take my bug for example. It related to an exception being thrown from Automapper with very specific wording. Literally all I did was search Github for the exact error message, and what do you know!

Digging deeper we can see the following lines of code :

OK so that’s given us a pretty good idea that the configuration is being set twice for some reason. It sets our brain in motion for different ways we could test theories about why this would be happening. As it turned out, it’s because a particular method in .NET Core runs twice when it should only be running once. And the reason it only happened in particular with Automapper was because the variable itself is static in the Automapper code (Another thing we learned by taking a look at the source ourselves).

We may not be able to completely code a fix, but it will at least help us better articulate the issue and come up with simple repro steps when we log our bug report.

Don’t forget to explain what you’ve already looked into or done when logging the issue too. I saw a couple of bugs logged that had quite a bit of back and forth along the lines of “Can you try doing this?”, and the reply being “Yes, I have already tried that”. Well then put it in the report!

Create A Complete Solution Repository

Many reports start off with “Create a web project, then add this nuget package with this particular version, and then write code that does XYZ, then …”. It quickly becomes hard to follow and actually ends up with some sort of miscommunication issue anyway.

Creating a minimal code solution in its own repository serves two purposes. The first is that you no longer have to explain exactly how someone can replicate the issue. The second is that it actually helps you rule out any issues with your own code. Isolating the bug to its simplest possible form in a nice standalone application is one of the biggest things you can do to help move a bug report along quickly.

And remember, it doesn’t have to be some amazing engineering feat. The Github repository I submitted with my bug report can be found here. It’s literally the Web API standard project template (Complete with the default routes and controllers), with a couple of extra lines in my startup file to replicate the bug. It may seem like going to this effort to upload what is almost a default project is a big hassle, but if you don’t, then the developer looking into your bug will have to do it anyway.

Simplify The Issue To As Few Code Lines As Possible

And finally, while you may have created a repository replicating the issue, it’s still a good idea to be able to reduce the bug down to as few lines of code as possible and have this code sitting in the bug report itself. Even pseudocode is better than nothing.

Let’s try something. I’ll explain a fictitious bug first in words, then in code.

Bug explanation in words :
When I run a select on a list that contains a cast, then call first on the return list, I get an error. If I don’t cast in the select (But still run the select) on the list, then call First, I don’t get an error.

Bug explanation in code :
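
Since the bug is fictitious anyway, a few illustrative lines are all it takes:

```csharp
using System.Collections.Generic;
using System.Linq;

var items = new List<object> { "1", "2", "3" };

// Throws an InvalidCastException when First() enumerates the projection,
// because the boxed strings cannot be cast to int.
var broken = items.Select(x => (int)x).First();

// Same Select, no cast - no error.
var working = items.Select(x => x).First();
```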

Which one is easier to follow for you? If you’re a developer, I can almost guarantee that looking at code is easier for you. When we’ve spent years looking at lines of code on our screen, it just *clicks* in our brain when we see it there.

Now that doesn’t mean that you shouldn’t also write a concise description of your bug too! That’s still important. And the same can be said for creating an entire solution to replicate your issue. Pasting lines of code into a bug report supplements your reproducible solution, it doesn’t replace it.

Summary

There’s a tonne of nuance around logging bugs for open source projects, especially when the projects are as big as some of the Microsoft ones. I could go on and on with other smaller issues that seemed to stall bug reports, but to me these were the big 3 that almost all bugs that got thrown back over the fence were lacking. What are your tips for a good bug report? Drop a comment below and let us know.

This article is part of a series on setting up a private nuget server. See below for links to other articles in the series.

Part 1 : Intro & Server Setup
Part 2 : Developing Nuget Packages
Part 3 : Building & Pushing Packages To Your Nuget Feed


In our previous article in this series, we looked at how we might go about setting up a nuget server. With that all done and dusted, it’s on to actually creating the libraries ourselves. Through trial and error and a bit of playing around, this article will dive into how best to go about developing the packages themselves. It will include how we version packages, how we go about limiting dependencies, and in general a few best practices to play with.

Use .NET Standard

An obvious place to begin is what framework to target. In nearly all circumstances, you are going to want to target .NET Standard (See here to know what .NET Standard actually is). This will allow you to target .NET Core, Full Framework and other smaller runtimes like UWP or Mono.

The only exception to the rule comes when you are building something especially specific to those runtimes, for example a Windows Forms library that will have to target the full framework.

Limit Dependencies

Your libraries should be built in such a way that someone consuming the library can use as little or as much as they want. Often this shows up when you add dependencies to your library that only a select few people need to use. Remember that regardless of whether someone uses that feature or not, they will need to drag that dependency along with them (and it may also cause versioning issues on top of that).

Let’s look at a real world example. Let’s say I’m building a logging library. Somewhere along the way I think that it would be handy to tie everything up using Autofac dependency injection as that’s what I mostly use in my projects. The question becomes, do I add Autofac to the main library? If I do this, anyone who wants to use my library needs to either also use Autofac, or just have a dependency against it even though they don’t use the feature. Annoying!

The solution is to create a second project within the solution that holds all our Autofac code and generate a secondary package from this project. It would end up looking something like this :
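
In practice the solution layout might look something like this (package names are hypothetical):

```
MyCompany.Logging.sln
├── MyCompany.Logging          -> core package, no Autofac dependency
└── MyCompany.Logging.Autofac  -> secondary package, references MyCompany.Logging and Autofac
```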

Doing this means that anyone who wants to also use Autofac can reference the secondary package (Which also has a dependency on the core package), and anyone who wants to roll their own dependency injection (Or none at all) doesn’t need to reference it at all.

I used to think it was annoying when you searched for a nuget package and found all these tiny granular packages. It did look confusing. But any trip to DLL Hell will make you thankful that you can pick and choose what you reference.

Semantic Versioning

It’s worth noting that while many Microsoft products use a 4 point versioning system (<major>.<minor>.<build>.<revision>), Nuget packages work off what’s called “Semantic Versioning”, or SemVer for short. It’s a 3 point versioning system that looks like <major>.<minor>.<patch>. It’s actually really simple and makes sense when you look at how version points are incremented.

<major> is updated when a new feature is released that breaks backwards compatibility. For example if someone upgrades from version 1.2.0 to 2.0.0, they will know that their code very likely will have to change to accommodate the upgrade.

<minor> is updated when a new feature is released that is backwards compatible. A user upgrading a minor version should not have to touch their existing code for things to continue working as normal.

<patch> is updated for bug fixes. Bug fixes should always be backwards compatible. If they aren’t able to be, then you will be required to update the major version number instead.

We’ll take a look at how we set this in the next section, but it’s also important to point out that there will essentially be 3 version numbers to set. I found it’s easier to just set them all to exactly the same value unless there is a really specific reason to not do so. This means that for things like the “Assembly Version” where it uses the Microsoft 4 point system, the last point is always just zero.

Package Information

In Visual Studio, if you right click on a project, select Properties, and then select “Package” from the left hand menu, you will be given a list of properties that your package will take on. These include versions, the company name, authors and descriptions.

You can set them here, but they actually then get written to the csproj. I personally found it easier to edit the project file directly. It ends up looking a bit like this :
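
A trimmed down csproj with the package metadata filled in might look like this (values are placeholders):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <Version>1.0.0</Version>
    <AssemblyVersion>1.0.0.0</AssemblyVersion>
    <FileVersion>1.0.0.0</FileVersion>
    <Authors>Your Name</Authors>
    <Company>Your Company</Company>
    <Description>A short description of the package.</Description>
  </PropertyGroup>
</Project>
```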

If you edit the csproj directly, then head back to Visual Studio properties pane to take a look, you’ll see that they have now been updated.

The most important thing here is going to be all the different versions seen here. Again, it’s best if these all match with “Version” being a SemVer with 3 points of versioning.

For a full list of what can be set here, check the official MS Documentation for csproj here.

Shared Package Information/Version Bumping

Take the example solution above from the “Limit Dependencies” section, where we have multiple projects that we want to publish nuget packages for. We will likely want to share various metadata like Company, Copyright etc. among all projects from a single source file rather than having to update each one by one. This might extend to version bumping too, where for simplicity’s sake, when we release a new version of any package in our solution, we want to bump the version of everything.

My initial thought was to use a linked “AssemblyInfo.cs” file. It’s where you see things like this :
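
The classic shared AssemblyInfo.cs full of assembly level attributes (values are placeholders):

```csharp
using System.Reflection;

[assembly: AssemblyCompany("Your Company")]
[assembly: AssemblyCopyright("Copyright © Your Company")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
```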

Look familiar? The only issue was that there seemed to be quite a few properties that couldn’t be set using this, for example the “Author” metadata. And secondly, when publishing a .NET Standard nuget package, it seemed to completely ignore all “version” attributes in the AssemblyInfo.cs, meaning that every package was labelled as version 1.0.0.

By chance I had heard about a feature of “importing” shared csproj fields from an external file. And as it turned out, there was a way to place a file in the “root” directory of your solution, and have all csproj files automatically read from this.

The first step is to create a file called “Directory.Build.props” in the root of your solution. Note that the name really is “Directory”; do not replace it with your actual directory name (for some stupid reason I thought this was the case).

Inside this file, add in the following :
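
Something along these lines, with your own values substituted in:

```xml
<Project>
  <PropertyGroup>
    <Version>1.0.0</Version>
    <AssemblyVersion>1.0.0.0</AssemblyVersion>
    <FileVersion>1.0.0.0</FileVersion>
    <Authors>Your Name</Authors>
    <Company>Your Company</Company>
    <Copyright>Copyright © Your Company</Copyright>
  </PropertyGroup>
</Project>
```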

And by magic, all projects underneath this directory will overwrite their metadata with the data from this file. Using this, we can bump versions or change the copyright notice of all child projects without having to go one by one. And on top of that, it still allows us to put a description in each individual csproj too!

Summary

The hardest part about all of this was probably the shared assembly info and package information. It took me quite a bit of trial and error with the AssemblyInfo.cs file to work out it wasn’t going to happen, and surprisingly there was very little documentation about using a .props file to solve the issue.

The final article in the series will look into how we package and publish to our nuget feed. Check it out here!

I’ve been having a bit of fun setting up a Nuget Server as of late, and learning the nuances of versioning a .NET Standard library. With that in mind, I thought I would document my approach to how I got things going and all the pitfalls and dead ends I ended up running into. This series will read a bit less like a stock standard tutorial and more about a brain dump of what I’ve learnt over the past few days. I should also note that in some cases, we aren’t able to go all out .NET Core mode. For example the official Nuget Server web application is full framework only – not a whole lot we can do about that.

The series will be broken into three parts. The server setup and the different options you have for actually hosting your nuget feed, different ways you can go about structuring your project and code for ease of use when packaging into a nuget feed, and how to go about actually building your package and pushing it to your server.

Part 1 : Intro & Server Setup
Part 2 : Developing Nuget Packages
Part 3 : Building & Pushing Packages To Your Nuget Feed

Why?

The first question you may ask is why? Why have a private nuget server? It may seem like a bit overkill but there are two scenarios where a private nuget feed will come in handy.

  1. If you or your company work on multiple projects and end up copying and pasting the same boilerplate code into every project, that’s a good reason to set up a nuget server. The actual impetus for me to go ahead and set one up was having to copy in our custom “logging” code for the *nth* time into a project. This is pretty common in services companies where projects last 3-6 months before moving on to the next client.
  2. The second reason is similar, but it’s more around your architecture. If you have an architecture focused on microservices either by way of API’s or ESBs, then it’s likely that you are working on lots of smaller standalone projects. These projects will likely share similar code bases for logging, data access etc. Being able to plug in a custom nuget feed and grab the latest version of this shared code is a huge time saver and avoids costly mistakes.

Hosted Services

There are a number of services that will host your nuget feed for you. These usually have the bonus of added security, a nice GUI to manage things, and the added benefit of a bit of support when things go wrong. Obviously the downsides are that they aren’t free, and it also means that your packages are off-premises which could be a deal breaker for some folks.

An important thing to note is that almost all hosted solutions I looked at offered other package manager types included in your subscription. So for example hosted NPM feeds if you also want to publish NPM packages. This is something else to keep in mind if you are looking to go down this route.

I looked into two different services.

MyGet

MyGet seems to be the consensus best hosted solution out there if you are looking for something outside the Microsoft eco-system. Pricing seemed extremely cheap with only limited contributors but unlimited “readers” – let me know if I got that wrong because I re-read the FAQ over and over trying to understand it. They do offer things like ADFS integration for enterprise customers, but that’s when things start to get a tad expensive.

I had a quick play around with MyGet, though nothing too in-depth, and honestly it seemed pretty solid. There wasn’t any major feature that I was wowed by, but then again I was only publishing a few packages there to try things out, nothing too massive.

Something that was interesting to me was that MyGet will actually build your nuget packages for you, all you need to do is give it access to your source control.

This seems like a pretty helpful service at first, but then again, they aren’t going to be running your pipeline with unit tests and the like, so maybe not too useful. But it’s pretty nifty none the less.

Visual Studio Team Services

Microsoft VSTS also offer a hosted package manager solution (Get started here). I think as with most things in the VSTS/TFS world, it only makes sense if you are already in that eco-system or looking to dive in.

The pricing for VSTS works in two ways. It’s free for the first 5 users and every user from there costs approx $4 each. Users count as readers too so you can’t even consume a nuget package without a license (Which is pretty annoying). OR If a user has an “Enterprise” license (Basically an MSDN Subscription which most companies in the Microsoft eco-system already have), then they can use the VSTS package manager for free. As I say, this really only makes sense if you are already using VSTS already.

Now obviously when you use VSTS Package Manager, it’s also integrated into builds so it’s super easy to push new packages to the feed with a handy little drop down showing feeds already in VSTS Package Manager. I mean it’s not rocket science but it was kinda nice none the less.

The GUI for managing packages is also really slick with the ability to search, promote and unlist packages all from within VSTS.

Official Nuget Server

The alternative to using a hosted solution is to have an on-premises private nuget feed (Or possibly on a VM). There are actually a few different open source versions out there, but for now I’ll focus on the official nuget server package from Microsoft. You can check out instructions to get it up and running here.

Now this was actually my preferred option at first. To keep control of the packages and have complete control of the hosting seemed like the best option. But that quickly changes for a few reasons…

Firstly, there is zero privacy/authorization on consuming packages from the nuget server. This means you either have to run with an IP whitelist (Which IMO is a huge pain in the ass), host locally, or just pray that no one finds the URL/IP of your server. This seemed like a pretty big deal breaker.

There is an API Key authorization that you can set in the web.config, but this only restricts publishing packages, not consuming them. This in itself seemed like an annoyance because it meant that the keys to the kingdom were a single API Key, and with that you could essentially do whatever you liked. Because there is no “promoting” of packages in the feed (everything published automatically becomes the latest version in the feed), this seemed like it could lead to trouble.

The packages themselves are actually stored on the server disk with no ability to offload them to something like Blob/S3. This seemed alright at first (Disk space is pretty cheap), but it also means you can’t use a hosted solution like Azure Websites because when deploying to these, it blows away the entire previous running instance – with your packages getting the treatment at the same time. This means you had to use actual hardware like a VM or host it locally on a machine.

So the obvious thought was, let’s just host it locally in the office in a closet. This gives us complete control and means no one from the outside can access it. But, what if we do want someone from the outside world to access it? Like if someone is working from home? Well then we could setup a VPN and th…. you know what. Forget it. Let’s use VSTS.

Not to mention there is no GUI to speak of :

There is a package called “Nuget Gallery” that seems to add a facelift to everything, but by this point it was a lost cause.

Summary

In the end I went with VSTS. Mostly because the company I am working with already has VSTS built into their eco-system for both builds and source control, it just made sense. If they didn’t, I would have certainly gone with MyGet over trying to host my own. Hosted solutions just seemed to be much more feature rich, and often for pennies.

Next up we will be talking about how I structured the code when building my libraries, and some tricks I learnt about .NET Standard versioning (Seriously – so much harder than I thought). Check it out here!

This article is part of a series on the OWASP Top 10 for ASP.net Core. See below for links to other articles in the series.

A1 – SQL Injection
A2 – Broken Authentication and Session Management
A3 – Cross-Site Scripting (XSS)
A4 – Broken Access Control
A5 – Security Misconfiguration (Coming Soon)
A6 – Sensitive Data Exposure (Coming Soon)
A7 – Insufficient Attack Protection (Coming Soon)
A8 – Cross-Site Request Forgery (Coming Soon)
A9 – Using Components with Known Vulnerabilities (Coming Soon)
A10 – Underprotected APIs (Coming Soon)

Broken Access Control is a new entry into the OWASP Top 10. In previous years there were concepts called “Insecure Direct Object References” and “Missing Function Level Access Controls” which have sort of been bundled all together with a couple more additions. This entry reminds me quite a bit of Broken Authentication because it’s very broad and isn’t that specific. It’s more like a mindset with examples given on things you should look out for rather than a very particular attack vector like SQL Injection. None the less, we’ll go through a couple of examples given to us by OWASP and see how they work inside .NET Core.

What Is Broken Access Control

Broken Access Control refers to the ability of an end user, whether through tampering with a URL, cookie, token, or the contents of a page, to access data that they shouldn’t have access to. This may be a user being able to access somebody else’s data, or worse, the ability for a regular user to elevate their permissions to an admin or super user.

The most common vulnerability that I’ve seen is code that verifies that a user is logged in, but not that they are allowed to access a particular piece of data. For example, given a URL like www.mysite.com/orders/orderid=900, what would happen if we changed that orderid by 1? What if a user tried to access www.mysite.com/orders/orderid=901? There will likely be code to check that a user is logged in (probably a simple Authorize attribute in .NET Core), but how about specific code to make sure that the order actually belongs to that particular customer? It’s issues like this that make up the bulk of Broken Access Control.

Where I start to veer away from the official definition from OWASP is that the scope starts exploding after this. I think when you try and jam too much under a single security headline, it starts losing its meaning. SQL Injection, for example, is very focused and has very specific attack vectors. Access Control is a much broader subject. For example, the following are all defined as “Broken Access Control” by OWASP :

  • Misconfigured or too broad CORS configuration
  • Web server directory listing/browsing
  • Backups/Source control (.git/.svn) files present in web roots
  • Missing rate limiting on APIs
  • JWT Tokens not being invalidated on logout

And the list goes on. Essentially if you ask yourself “Should a web user be able to access this data in this way”, and the answer is “no”, then that’s Broken Access Control in a nutshell.

Insecure Direct Object References

As we’ve already seen, this was probably the grandfather of Broken Access Control in the OWASP Top 10. Direct object references are IDs or reference variables that can be changed by an end user, allowing them to retrieve records that they should not be privy to.

As an example, let’s say we have the following action in a controller :
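
A minimal sketch (the Order entity and repository are made up for the example):

```csharp
[Authorize]
[HttpGet("orders/{orderId}")]
public Order GetOrder(int orderId)
{
    // Returns whatever order matches the id - no check that it belongs to the caller.
    return _orderRepository.GetById(orderId);
}
```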

Now this is somewhat protected, as seen by the Authorize attribute. This means a completely anonymous user can’t hit this endpoint, but it doesn’t stop one logged in user from accessing a different user’s orders. Any id passed into this endpoint will return an order object, no questions asked.

Let’s modify it a bit.
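
Keeping the same made up repository, the ownership check might look like this:

```csharp
[Authorize]
[HttpGet("orders/{orderId}")]
public IActionResult GetOrder(int orderId)
{
    var order = _orderRepository.GetById(orderId);

    // The customer id we stored in the user's claims when they logged in.
    var customerId = int.Parse(User.FindFirst("CustomerId").Value);

    // Reject the request if the order doesn't belong to the logged in customer.
    if (order == null || order.CustomerId != customerId)
        return NotFound();

    return Ok(order);
}
```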

This will require a bit of imagination, but hopefully not too much!

Imagine that when we log in our user, we also store their CustomerId in a claim. That means we always know exactly who is logged in when they make requests, not just that they are logged in in general. Next, on each Order we store the CustomerId of exactly who the order belongs to. Finally, when someone tries to load an order, we get the Id of the customer logged in and compare it to the CustomerId of the actual order. If they don’t match, we reject the request. Perfect!

An important thing to note is that URL’s aren’t the only place this can happen. I’ve seen hidden fields, cookies, and javascript variables all be susceptible to this kind of abuse. The rule should always be that if it’s in the browser, a user can modify it. Server side validation and authorization is king.

One final thing to mention when it comes to direct object references is that “Security through obscurity” is not a solution. Security through obscurity refers to a system being secure because of some “secret design” or hidden implementation being somewhat “unguessable” by an end user. In this example, if we change our initial get order method to :
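
The same sketch as before, just keyed on a Guid:

```csharp
[Authorize]
[HttpGet("orders/{orderId}")]
public Order GetOrder(Guid orderId)
{
    return _orderRepository.GetById(orderId);
}
```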

So now our OrderId is a Guid. I’ve met many developers who think this is now secure. How can someone guess a Guid? It is not feasible to enumerate all guids against a web endpoint (if it was feasible then we would have many more clashes), and this much is true. But what if someone did get hold of that guid from a customer? I’ve seen things like a customer sending a screenshot of their browser to someone else with their Order Guid in plain sight. You can’t predict how this Guid might leak in the future, whether through malice or carelessness. That’s why, again, server side authorization is always king.

CORS Misconfiguration

CORS refers to limiting which websites can place javascript on their pages that calls your API. We have a great tutorial on how to use CORS in ASP.net Core here. In terms of OWASP, the issue with CORS is that it’s all too easy to just open up your website to all requests and call it a day. For example, your configuration may look like the following:
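
Something like this inside the Configure method (assuming services.AddCors() has been registered):

```csharp
app.UseCors(builder => builder
    .AllowAnyOrigin()
    .AllowAnyHeader()
    .AllowAnyMethod());
```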

Here you are just saying let anything through, I don’t really care. The security problem becomes extremely apparent if your API uses cookies as an authentication mechanism. A malicious website can make a users browser make Ajax calls to the API, the cookies will be sent along with the request, and sensitive data will be leaked. It’s that easy.

In simple terms, unless your API is meant to be accessed by the wider public (e.g. the data it is exposing is completely public), you should never just allow all origins.

In ASP.net Core, this means specifying which websites are allowed to make Ajax calls.
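
For example, only trusting a known front end (the origin here is obviously a placeholder):

```csharp
app.UseCors(builder => builder
    .WithOrigins("https://www.mytrustedsite.com")
    .AllowAnyHeader()
    .AllowAnyMethod());
```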

As with all security configurations, the minimum amount of access available is always the best setting.

Directory Traversal and Dangerous Files

I lump these together because they should be no-brainers, but should always be on your checklist when deploying a site for the first time. The official line in the OWASP handbook when it comes to these is :

Seems pretty simple right? And that’s because it is.

The first is that I’ve actually never had a reason to allow directory browsing on a website. That is, if I have a URL like www.mysite.com/images/image1.jpg, a user shouldn’t simply be able to go to www.mysite.com/images/ and view all images in that directory. It may seem harmless at first; who cares if a user can see all the images on my website? But then again, why give a user the ability to enumerate through all files on your system? It makes no sense when you think about it.

The second part is that metadata type files, or development only files should not be available on a web server. Some are obviously downright dangerous (Site backups could leak passwords for databases or third party integrations), and other development files could be harmless. Again it makes zero sense to have these on a web server at all as a user will never have a need to access them, so don’t do it!

Summary

While I’ve given some particular examples in this post, Broken Access Control really boils down to one thing: if a user can access something they shouldn’t, then that’s broken access control. While OWASP hasn’t put out too much info on this subject yet, a good place to start is the official OWASP document for 2017 (available here). Within .NET Core, there isn’t any inherent protection against this sort of thing; it really comes down to always enforcing access control at the server level.

Next post we’ll be looking at a completely new entry to the OWASP Top 10 for 2017, XML External Entities (XXE).

Benchmarking your code can take on many forms. On some level, Application Performance Monitoring (APM) solutions such as New Relic can be considered live benchmarking tools if you are using A/B testing. At the other end of the scale is wrapping a stopwatch object around your code and running it inside a loop. This article will be looking more towards the latter. Benchmarking specific lines of code, either against each other or on their own, can be extremely important in understanding how your code will run at scale.

While wrapping your code in a timer and running it a few hundred times is a good start, it’s not exactly reliable. There are far too many pitfalls that you can get trapped in that completely skew your results. Luckily there is always a Nuget package to cover you! That package in this case is BenchmarkDotNet. It takes care of things like warming up your code, isolating each benchmark from each other, and giving you metrics on code performance. Let’s jump straight in!

Code Benchmarking

Code Benchmarking is when you want to compare two pieces of code/methods against each other. It’s a great way to quantify a code rewrite or refactor and it’s going to be the most common use case for BenchmarkDotNet.

To get started, create a blank .NET Core console application. Now, most of this “should” work when using .NET Full Framework too, but I’ll be doing everything here in .NET Core.

Next you need to run the following from your Package Manager console to install the BenchmarkDotNet nuget package :
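
That’s just the standard install command:

```
Install-Package BenchmarkDotNet
```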

Next we need to build up our code. For this we are going to use a classic “needle in a haystack”. We are going to build up a large list in C# with random items within it, and place a “needle” right in the middle of the list. Then we will compare how doing “SingleOrDefault” on a list compares to “FirstOrDefault”. Here is our complete code :
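
A sketch of the whole thing; the class and variable names are mine, but the shape matches what’s described below:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class SingleVsFirst
{
    private readonly List<string> _haystack = new List<string>();
    private readonly int _haystackSize = 1000000;
    private readonly string _needle = "needle";

    public SingleVsFirst()
    {
        // Setup in the constructor is not included in the benchmark timings.
        // Fill the haystack with random numbers and drop the needle right in the middle.
        var random = new Random();
        Enumerable.Range(1, _haystackSize).ToList().ForEach(x => _haystack.Add(random.Next().ToString()));
        _haystack.Insert(_haystackSize / 2, _needle);
    }

    [Benchmark]
    public string Single() => _haystack.SingleOrDefault(x => x == _needle);

    [Benchmark]
    public string First() => _haystack.FirstOrDefault(x => x == _needle);
}

public class Program
{
    public static void Main(string[] args) => BenchmarkRunner.Run<SingleVsFirst>();
}
```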

Walking through this a bit, first we create a class to hold our benchmarks within it. This can contain any number of private methods and can include setup code within the constructor. Any code within the constructor is not included in the timing of the method. We can then create public methods and add the attribute of  [Benchmark] to have them listed as items that should be compared and benchmarked.

Finally inside our main method of our console application, we use the “BenchmarkRunner” class to run our benchmark.

A word of note when running the benchmarking tool: it must be built in “Release” mode and run from the command line. You should not run benchmarks from Visual Studio, as that attaches a debugger and does not compile the code as “optimized”. To run from the command line, head to your application’s bin/Release/netcoreappxx/ folder, then run  dotnet {YourDLLName}.dll

And the results?

So it looks like Single is twice as slow as First! If you understand what Single does under the hood, this is to be expected. When First finds an item, it immediately returns (After all, it only wants the “First” item). However when Single finds an item, it still needs to traverse the entire rest of the list because if there is more than one, it needs to throw an exception. This makes sense when we are placing the item in the middle of the list!

Input Benchmarking

Let’s say that we’ve found Single is slower than First. And we have a theory on why that is (That Single needs to continue through the list), then we may need a way to try different “configurations” without having to re-run the test with minor details changed. For that we can use the “Input” feature of BenchmarkDotNet.

Let’s modify  our code a bit :
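
Sticking with the same illustrative class (I’ve called the params source Needles rather than _needles):

```csharp
public class SingleVsFirst
{
    private readonly List<string> _haystack = new List<string>();
    private readonly int _haystackSize = 1000000;

    // The different needles to search for, placed at the start, middle and end of the haystack.
    public IEnumerable<string> Needles => new[] { "StartNeedle", "MiddleNeedle", "EndNeedle" };

    [ParamsSource(nameof(Needles))]
    public string Needle { get; set; }

    public SingleVsFirst()
    {
        var random = new Random();
        Enumerable.Range(1, _haystackSize).ToList().ForEach(x => _haystack.Add(random.Next().ToString()));

        _haystack.Insert(0, "StartNeedle");
        _haystack.Insert(_haystackSize / 2, "MiddleNeedle");
        _haystack.Add("EndNeedle");
    }

    [Benchmark]
    public string Single() => _haystack.SingleOrDefault(x => x == Needle);

    [Benchmark]
    public string First() => _haystack.FirstOrDefault(x => x == Needle);
}
```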

What we have done here is create a “_needles” property to hold the different needles we may wish to find, and we’ve inserted them at different indexes within our list. We then create a “Needle” property with the “ParamsSource” attribute. This tells BenchmarkDotNet to rotate through these values and run a separate test for each one.

A big tip is that the ParamsSource must be public, and it must be a property – it cannot be a field.

Running this, our report now looks like so :

It’s a little harder to see because we are now down to nanoseconds based on the time that “First” takes to return the StartNeedle. But the results are very clear.

When Single is run, the time it takes to return the needle is the same regardless of where it is in the list. Whereas First’s response time is totally dependent on where the item is in the list.

The Input feature can be a huge help in understanding how or why applications slow down given different inputs. For example, does your password hashing function get slower when passwords are longer? Or is it not a factor at all?

Creating A Baseline

One last helpful tip that does nothing more than create a nice little “multiplier” on the report is to mark one of your benchmarks as the “baseline”. If we go back to our first example (Without Inputs), we just need to mark one of our Benchmarks as baseline like so :  [Benchmark(Baseline = true)]

Now when we run our test with “First” marked as the baseline, the output now looks like :

So now it’s easier to see the “factor” by which our other methods are slower (Or faster). In this case our Single call is almost twice as slow as the First call.

What Have You Benchmarked?

As programmers we love seeing how a tiny little change can improve performance by leaps and bounds. If you’ve used BenchmarkDotNet to benchmark a piece of code and you’ve been amazed by the results, drop a comment below with a Github Gist of the code and a little about the what/why/how it was slow!

JSONPatch is a method of updating documents on an API in a very explicit way. It’s essentially a contract to describe exactly how you want to modify a document (For example, replace the value in a field with another value) without having to also send along the rest of the unchanged values.

What Does A JSON Patch Request Look Like?

The official documentation for JSON Patch lives here : http://jsonpatch.com/, but we’ll do a bit of digging to see how it works inside ASP.net/C#, as not all operations work the way you might expect. In fact, one operation has not yet made it into an official release of ASP.net Core, but we’ll quickly talk about it anyway.

For all examples, I will be writing JSON Patch requests against an object that looks like so in C# :
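
It’s roughly this shape; the exact properties don’t matter too much, but a FirstName, LastName and a Friends collection cover the examples below:

```csharp
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public List<string> Friends { get; set; } = new List<string>();
}
```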

Patch requests all follow a similar type of structure. It’s a list of “operations” within an array. The operation itself has 3 properties.

“op” – Defines the “type” of operation you want to do. For example add, replace, test etc.
“path” – The “path” of the property on the object you want to edit. In our example above, if we wanted to edit “FirstName” the “path” property would look like “/firstname”
“value” – For the most part, defines the value we want to use within our operation.

Now let’s look at each individual operation.

Add

The Add Operation typically means that you are adding a property to an object or “adding” an item into an array. For the former, this does not work in C#. Because C# is a strongly typed language, you cannot “add” a property onto an object that wasn’t already defined at compile time.

To add an item into an array the request would look like the following :
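
Using the Friends list from our example object:

```json
[
  { "op": "add", "path": "/friends/1", "value": "Mike" }
]
```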

This would “insert” the value of “Mike” at index 1 in the Friends array. Alternatively you can use the “-” character to insert a record at the end of the array.

Remove

Similar to the “Add” operation outlined above, the Remove Operation typically means you are either removing a property from an object or removing an item from an array. But because you can’t actually “remove” a property from an object in C#, what actually happens is that it will set the value to default(T). In some cases if the object is nullable (Or a reference type), it will be set to NULL. But be careful because when used on value types, for example an int, then the value actually gets reset to “0”.

To run Remove on an object property to “reset” it, you would run the following :
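
For instance, resetting LastName:

```json
[
  { "op": "remove", "path": "/lastname" }
]
```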

You can also run the Remove operation to remove a particular item in an array
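
For example:

```json
[
  { "op": "remove", "path": "/friends/1" }
]
```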

This removes the item at array index 1. There is no “where” clause for removing or deleting, so at times this can seem pretty dangerous; what if the array has changed since we fetched it from the server? There is actually a JSON Patch operation that helps with this, but more on that later.

Replace

Replace does exactly what it says on the tin. It replaces any value for another one. This can work for simple properties on objects :
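
For instance:

```json
[
  { "op": "replace", "path": "/firstname", "value": "Jim" }
]
```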

It can also replace particular properties inside an array item :
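
If Friends held objects rather than plain strings, you could target a property on a single item by index:

```json
[
  { "op": "replace", "path": "/friends/1/firstname", "value": "Sarah" }
]
```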

And it can also replace entire objects/arrays like so :
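
For example, replacing the whole Friends array in one go:

```json
[
  { "op": "replace", "path": "/friends", "value": [ "Bob", "Sarah" ] }
]
```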

Copy

Copy will copy a value from one path to another. This can be a property, object, array etc. In the example below we copy the value of FirstName to LastName. In practical terms this doesn’t come up a whole lot, because you would typically see a simple replace on both properties rather than a copy, but it is possible! If you dive into the source code of the ASP.net Core implementation of JSON Patch, you will see that a copy operation simply does an “Add” operation in the background on the target path anyway.
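
The copy operation uses a “from” path alongside the usual “path”:

```json
[
  { "op": "copy", "from": "/firstname", "path": "/lastname" }
]
```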

Move

The Move operation is very similar to a Copy, but as it says on the tin, the value will no longer be at the “from” field. This is another one that if you look under the hood of ASP.net Core, it actually does a remove on the from field, and an add on the Path field.
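
Same shape as copy, just with the move op:

```json
[
  { "op": "move", "from": "/firstname", "path": "/lastname" }
]
```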

Test

The Test operation does not currently exist in a public release of ASP.net Core, but if you check the source code up on Github, you can see that it’s already being worked on by Microsoft and should make it to the next release. Test is almost a way of doing optimistic locking, or in simpler terms a check to see if the object on the server has changed since we retrieved the data.

Consider the following full patch :
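
For instance:

```json
[
  { "op": "test", "path": "/firstname", "value": "Bob" },
  { "op": "replace", "path": "/firstname", "value": "Jim" }
]
```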

What this says is, first check that on the path of “/FirstName” that the value is Bob, and if it is, change it to Jim. If the value is not Bob, then nothing will happen. An important thing to note is that you can have more than one Test operation in a single patch payload, but if any one of those Tests fail, then the entire Patch will not be applied.

But Why JSON Patch Anyway?

Obviously a big advantage of JSON Patch is that it’s very lightweight in its payload, only sending exactly what’s changed on the object. But there is another big benefit in ASP.net Core: it actually has some great utility for the sole reason that C# is a typed language. It’s hard to explain without a good example, so imagine I request a “Person” object from an API. In C# the model may look like:
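
Keeping it to just the two properties we need for this example:

```csharp
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```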

And when returned from an API as a JSON object, it looks like so :
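
With some example data:

```json
{
  "firstName": "Jim",
  "lastName": "Smith"
}
```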

Now from the front end, without using JSON Patch,  I decide that I only need to update the firstname so I send back the following payload :
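
Just the property I want to change:

```json
{
  "firstName": "Bob"
}
```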

Now a question for you when I deserialize this model in C#. Without looking below. What will the values of our model be?

Hmmm. So because we didn’t send through the LastName, it gets deserialized to null. But that’s easy, right? We can just ignore values that are null and do something finicky in our data layer to only update the fields we actually passed through. But that’s not necessarily correct; what if the field actually is nullable? What if we sent through the following payload:
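
This time explicitly sending a null:

```json
{
  "firstName": "Bob",
  "lastName": null
}
```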

So now we’ve actually specified that we want to null this field. But because C# is strongly typed, there is no way for us to determine on the server side that a value is missing from the payload vs when it’s actually been set to null with standard model binding.

This may seem like an odd scenario, because the front end can just always send the full model and never omit fields. And for the most part the model of a front end web library will always match that of the API. But there is one case where that isn’t always true, and that’s mobile applications. Often when submitting a mobile application to say the Apple App Store, it may take weeks to get approved. In this time, you may also have web or android applications that need to be rolled out to utilize new models. Getting synchronization between different platforms is extremely hard and often impossible. While API versioning does go a long way to taking care of this, I still feel JSON Patch has great utility in solving this issue in a slightly different way.

And finally, it’s to the point! Consider the following JSON Patch payload for our Person object :
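
A single replace operation:

```json
[
  { "op": "replace", "path": "/firstname", "value": "Bob" }
]
```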

This explicitly says that we want to change the first name and nothing else. There is never ambiguity of “did we just forget to send that property or do we actually want it ‘nulled out'”. It’s precise and tells us exactly what’s about to happen.

Adding JSON Patch To Your ASP.net Core Project

Inside Visual Studio, run the following from the Package Manager console to install the official JSON Patch library (It does not come with ASP.net Core in a new project out of the box).
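
The package name is Microsoft.AspNetCore.JsonPatch:

```
Install-Package Microsoft.AspNetCore.JsonPatch
```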

For this example, I’ll use the following full controller. The main point to note is that the HTTP Verb we use is “Patch”, we accept a type of “JsonPatchDocument<T>” and that to “apply” the changes we simply call “ApplyTo” on the patch and pass in the object we want to update.
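
A sketch of what that controller might look like; the static Person is just there to keep the example self-contained:

```csharp
using Microsoft.AspNetCore.JsonPatch;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class PersonController : Controller
{
    // In a real API this would come from a database or repository.
    private static Person _person = new Person
    {
        FirstName = "Jim",
        LastName = "Smith"
    };

    [HttpPatch]
    public IActionResult Patch([FromBody] JsonPatchDocument<Person> patchDocument)
    {
        // Apply the operations in the patch directly to our object.
        patchDocument.ApplyTo(_person);

        return Ok(_person);
    }
}
```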

In our example we are just using a simple object stored on the controller and updating that, but in a real API we will be pulling the data from a datasource, applying the patch, then saving it back.

When we call this endpoint with the following payload :
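
A single replace of the first name:

```json
[
  { "op": "replace", "path": "/firstname", "value": "Bob" }
]
```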

We get the response of :
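
With the sketch above, that comes back as something like:

```json
{
  "firstName": "Bob",
  "lastName": "Smith"
}
```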

Awesome! Our first name got changed to Bob! It’s really that simple to get up and running with JSON Patch.

Generating Patches

The first question many people ask is exactly how should they be crafting their JSON Patch payloads. You do not have to do this manually! There are many javascript libraries that can “compare” two objects to generate a patch. And even easier, there are many that can “observe” an object and generate a patch on demand. The JSON Patch website has a great list of libraries to get started : http://jsonpatch.com/

Using Automapper With JSON Patch

The big question I often see around JSON Patch, is that often you are returning View Models/DTOs from your API, and generating patches from there. But then how do you apply those patches back onto a database object? People tend to overthink it and create crazy “transforms” to turn a patch from one object to another, but there is an easier way using Automapper that is extremely simple to get going. The code works like so :
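
A sketch of the pattern, assuming an existing Automapper configuration that maps between PersonDTO and the Person database entity (the repository is made up):

```csharp
[HttpPatch("{id}")]
public IActionResult Patch(int id, [FromBody] JsonPatchDocument<PersonDTO> patchDocument)
{
    // Fetch the database entity.
    var person = _personRepository.GetById(id);

    // Map the database object to a DTO and apply the patch to the DTO.
    var personDTO = _mapper.Map<PersonDTO>(person);
    patchDocument.ApplyTo(personDTO);

    // Map the patched DTO back onto the existing database object.
    // Automapper only maps the fields it knows about; everything else on the entity is left alone.
    _mapper.Map(personDTO, person);

    _personRepository.Save(person);

    return Ok(personDTO);
}
```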

An important thing to note is that when mapping the DTO back to the Database object in the second Map call, Automapper has a great function where it can map onto an existing object, and only map fields that it cares about. Any additional fields that are on the database object will be left alone.

If you need help setting up Automapper, check out our great tutorial on Using Automapper In ASP.net Core.

In previous posts I’ve talked about how you can now use the legacy SMTPClient class inside .NET to send emails. As commentators on this post have pointed out however, this has now been deprecated and the official documentation actually points you towards a very popular email library called “MailKit“. It’s open source, it’s super extensible, and it’s built on .NET Standard meaning that you can use the same code across .NET Full Framework, UWP and .NET Core projects.

Creating An Email Service

It’s always good practice that when you add a new library, you build an abstraction on top of it. If we take MailKit as an example: what if MailKit is later superseded by a better emailing library? Will we have to change references all over our code to point at this new library? Or maybe MailKit makes a breaking change between versions; will we then have to go through our code fixing all the now broken calls?

Another added bonus to creating an abstraction is that it allows us to map out how we want our service to look before we worry about implementation details. We can take a very high level view of sending an email for instance without having to worry about exactly how MailKit works. Because there is a lot of code to get through, I won’t do too much explaining at this point, we will just run through it. Let’s go!

First, let’s go ahead and create an EmailAddress class. This will have only two properties that describe an EmailAddress.
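
Something like this (Name and Address are my choice of property names):

```csharp
public class EmailAddress
{
    public string Name { get; set; }
    public string Address { get; set; }
}
```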

Now we will need something to describe a simple EmailMessage. There are a tonne of properties on an email, for example attachments, CC, BCC, headers etc., but we will break it down to the basics for now. Containing all of this within a class means that we can add extra properties as we need them later on.
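A sketch of the basics; the exact property names here are my assumptions :

using System.Collections.Generic;

public class EmailMessage
{
    public EmailMessage()
    {
        ToAddresses = new List<EmailAddress>();
        FromAddresses = new List<EmailAddress>();
    }

    public List<EmailAddress> ToAddresses { get; set; }
    public List<EmailAddress> FromAddresses { get; set; }
    public string Subject { get; set; }
    public string Content { get; set; }
}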

Now we need to set up our email configuration : our SMTP servers, ports, credentials etc. For this we will make a simple settings class to hold it all. Since we are good programmers, we will use an interface too!
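A sketch covering both SMTP and POP settings. The property names are assumptions, but they line up with the Send and Receive code later in this post :

public interface IEmailConfiguration
{
    string SmtpServer { get; }
    int SmtpPort { get; }
    string SmtpUsername { get; }
    string SmtpPassword { get; }

    string PopServer { get; }
    int PopPort { get; }
    string PopUsername { get; }
    string PopPassword { get; }
}

public class EmailConfiguration : IEmailConfiguration
{
    public string SmtpServer { get; set; }
    public int SmtpPort { get; set; }
    public string SmtpUsername { get; set; }
    public string SmtpPassword { get; set; }

    public string PopServer { get; set; }
    public int PopPort { get; set; }
    public string PopUsername { get; set; }
    public string PopPassword { get; set; }
}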

Now we actually need to load this configuration into our app. In your appsettings.json, you need to add a section at the root for email settings. It should look something like this :
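Something along these lines, with placeholder values for your own server details :

{
  "EmailConfiguration": {
    "SmtpServer": "smtp.myserver.com",
    "SmtpPort": 465,
    "SmtpUsername": "smtpusername",
    "SmtpPassword": "smtppassword",
    "PopServer": "pop.myserver.com",
    "PopPort": 995,
    "PopUsername": "popusername",
    "PopPassword": "poppassword"
  }
}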

In the ConfigureServices method of your startup.cs, we can now pull out this configuration and load it into our app with a single line.
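A sketch of that line, assuming the default Startup template where a Configuration property is available :

public void ConfigureServices(IServiceCollection services)
{
    // Bind the EmailConfiguration section of appsettings.json and register it as a singleton.
    services.AddSingleton<IEmailConfiguration>(
        Configuration.GetSection("EmailConfiguration").Get<EmailConfiguration>());

    services.AddMvc();
}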

This allows us to inject our configuration class anywhere in our app.

The final piece of the puzzle is a simple email service that can be used to send and receive email. Let’s create an interface and an implementation that’s empty for now. The implementation should accept our settings object as a constructor parameter.
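A sketch of the interface and the (for now empty) implementation; the method names are my assumptions :

using System;
using System.Collections.Generic;

public interface IEmailService
{
    void Send(EmailMessage emailMessage);
    List<EmailMessage> ReceiveEmail(int maxCount = 10);
}

public class EmailService : IEmailService
{
    private readonly IEmailConfiguration _emailConfiguration;

    public EmailService(IEmailConfiguration emailConfiguration)
    {
        _emailConfiguration = emailConfiguration;
    }

    public void Send(EmailMessage emailMessage)
    {
        throw new NotImplementedException();
    }

    public List<EmailMessage> ReceiveEmail(int maxCount = 10)
    {
        throw new NotImplementedException();
    }
}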

Head back to the ConfigureServices method of our startup.cs and add one final line so that our EmailService can be injected everywhere.
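That line looks something like this :

services.AddTransient<IEmailService, EmailService>();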

Phew! And we are done. If at this point we decided MailKit isn’t for us, we still have an email service that can swap libraries in and out as needed, and our calling application doesn’t need to worry about what’s going on under the hood. That’s the beauty of abstracting a library away!

Getting Started With MailKit

Getting started with MailKit is as easy as installing a Nuget package. Simply run the following from your Package Manager Console :
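The package is simply called MailKit :

Install-Package MailKit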

And hey presto! You now have access to MailKit in your application.

Sending Email via SMTP With MailKit

Let’s head back to our email service class and fill out the “Send” method with the actual code to send an email via MailKit. The code to do this is below :
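Here is a sketch of that Send method using MailKit’s SmtpClient. Whether you connect with SSL, and on which port, will depend on your mail provider :

using System.Linq;
using MailKit.Net.Smtp;
using MimeKit;
using MimeKit.Text;

public void Send(EmailMessage emailMessage)
{
    var message = new MimeMessage();
    message.To.AddRange(emailMessage.ToAddresses.Select(x => new MailboxAddress(x.Name, x.Address)));
    message.From.AddRange(emailMessage.FromAddresses.Select(x => new MailboxAddress(x.Name, x.Address)));
    message.Subject = emailMessage.Subject;

    // Use TextFormat.Plain here instead if you want to send plain text rather than HTML.
    message.Body = new TextPart(TextFormat.Html)
    {
        Text = emailMessage.Content
    };

    // Note : this is MailKit's SmtpClient, not the System.Net.Mail one.
    using (var emailClient = new SmtpClient())
    {
        // The final "true" requests an SSL connection; use it whenever your server supports it.
        emailClient.Connect(_emailConfiguration.SmtpServer, _emailConfiguration.SmtpPort, true);

        emailClient.Authenticate(_emailConfiguration.SmtpUsername, _emailConfiguration.SmtpPassword);

        emailClient.Send(message);

        emailClient.Disconnect(true);
    }
}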

The comments should be pretty self-explanatory, but let’s quickly run through it.

  • You can send plain text or HTML emails depending on the “TextFormat” you use when creating your message body
  • MailKit has named its SMTP class “SmtpClient”, which is the same name as the framework class. Be careful if you are using Resharper and the like that when you click “Add Reference” you are adding the correct one.
  • You should use SSL whenever it’s available when connecting to the SMTP server

Because we built out our EmailService, EmailMessage and EmailConfiguration classes earlier, they are all ready to be used immediately!

Receiving Email via POP With MailKit

And now the code to receive email via POP.
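This is a sketch using MailKit’s Pop3Client, again driven by the configuration object we built earlier :

using System.Collections.Generic;
using MailKit.Net.Pop3;

public List<EmailMessage> ReceiveEmail(int maxCount = 10)
{
    using (var emailClient = new Pop3Client())
    {
        // Connect and authenticate against the POP server from our configuration.
        emailClient.Connect(_emailConfiguration.PopServer, _emailConfiguration.PopPort, true);
        emailClient.Authenticate(_emailConfiguration.PopUsername, _emailConfiguration.PopPassword);

        var emails = new List<EmailMessage>();
        for (int i = 0; i < emailClient.Count && i < maxCount; i++)
        {
            // GetMessage returns a full MimeMessage; we only copy across the basics.
            var message = emailClient.GetMessage(i);
            emails.Add(new EmailMessage
            {
                Subject = message.Subject,
                Content = message.HtmlBody ?? message.TextBody
            });
        }

        emailClient.Disconnect(true);
        return emails;
    }
}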

Again, all rather straightforward.

While we only retrieve a few basic details about the email message, the actual MailKit email object has a tonne of data you can inspect including headers, CC addresses, etc. Extend as you need to!

Free SMTP Server

It’s worth mentioning that if you are a hobbyist with your own website and just want to send a few emails every now and again under your own domain, a great solution (and the one I use for this very blog) is MailGun. It has a free plan that for most people will be more than enough, plus paid plans for when you really need to start sending a lot of email.

This article is part of a series on the OWASP Top 10 for ASP.net Core. See below for links to other articles in the series.

A1 – SQL Injection
A2 – Broken Authentication and Session Management
A3 – Cross-Site Scripting (XSS)
A4 – Broken Access Control
A5 – Security Misconfiguration (Coming Soon)
A6 – Sensitive Data Exposure (Coming Soon)
A7 – Insufficient Attack Protection (Coming Soon)
A8 – Cross-Site Request Forgery (Coming Soon)
A9 – Using Components with Known Vulnerabilities (Coming Soon)
A10 – Underprotected APIs (Coming Soon)

In previous iterations of this post, I had a big long text wall explanation of what exactly Cross Site Scripting (XSS) was. But after spending hours perfecting it, I think it’s easier to show you a simple screenshot that says it all.

This was simple to do. I have a “search” page for a user, and any query they type I relay back to them in the form of “Search Query {YourQueryHere}”. The code looks like this :
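The relevant part of the view boils down to a single line. The surrounding markup is an assumption; the Html.Raw call is the part that matters, and we will come back to it shortly :

<h4>Search Query : @Html.Raw(Context.Request.Query["query"])</h4>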

So we are taking whatever the user searched for (or put in the query string) and placing it directly on the page, allowing the user to enter script tags, or really anything else they want. This is the heart of what XSS is about : taking unverified user input and displaying it wholesale on a webpage.

For the duration of this post, I will refer back to “code I prepared earlier”. This code is up on Github if you want to take a look and test out some XSS yourself. You can download it here.

What Is XSS?

XSS is when a website enables an attacker to inject client side scripts (typically javascript, although other types of injection are possible) onto a page that is subsequently shown to other users. Often these scripts seek to steal private data (for example cookies or browser storage), redirect a browser, or sometimes even just trick a user into doing an action that they wouldn’t normally do.

XSS is usually broken down into two different types :

Reflected XSS

Reflected XSS is when cross site scripting occurs immediately as a result of input from a user. An example might be when a user searches and that search query is displayed immediately on the page. Typically the danger comes from the ability to send a link to an unsuspecting user and have that user see something completely unexpected.

Stored XSS

Stored XSS is when you are able to save something to a database or backend store and have it relayed to users without having to send them a link. Take the example of a blog that accepts comments on posts : if you are able to store an XSS exploit in a blog comment, then everyone who views that blog post from then on will be affected. Obviously this has the potential to be a much larger exploit than reflected XSS because it doesn’t depend on the user being sent a dodgy link or having to do anything extra on their part.

What Could Someone Do With XSS?

Javascript

The holy grail is of course to be able to inject script tags on a webpage. With this, the world really is the attacker’s oyster. They could do something as simple as redirecting the user to a different page (where they then steal the user’s credentials), they could inject javascript to build a fake login form right there on the page (where they then steal the user’s credentials), or they could even use it to steal the user’s login cookie (where they then steal the user’s credentials). It can be devastating.

While injecting javascript on a page can be nothing short of devastating, protecting your site should be more than just disallowing the word “script” from being submitted anywhere. You can actually do some pretty dangerous stuff without any javascript at all.

CSS

By injecting styles into a page, an attacker could change the entire layout of the page to trick the user into doing something they don’t want to do. A “clever” exploit I saw in the past was an attacker redesigning a page to trick the user into deleting their own account by moving the delete button around and changing the text (all possible now that you can add “content” inside CSS).

With CSS alone, you can pretty much rewrite the entire website. Let’s take a quick look using a site I whipped up earlier. (Again, you can get the source code on Github here). By default it looks a bit like this :


Now let’s try something. Let’s try and inject the following CSS Payload in here :
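This payload is purely illustrative (the .container class is an assumption based on the default Bootstrap template), but it shows the idea :

<style>
    /* Hide the real page content entirely. */
    .container { display: none; }
    /* And replace it with our own message using the CSS "content" property. */
    body:after { content: "This page has moved. Please log in at totally-legit-site.com"; }
</style>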

So the URL will look something like :
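Something like this, with a shortened version of the payload URL-encoded into the query string (the localhost address is an assumption for a local test run) :

http://localhost:5000/?query=%3Cstyle%3E.container%7Bdisplay%3Anone%3B%7D%3C%2Fstyle%3E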

Now when we view this URL :

IFrames

Injecting IFrames is an XSS exploit that can go undetected for quite some time because it can be essentially invisible to end users. IFraming can range from something as “harmless” as someone trying to rack up views on their own ad-filled, pay-per-view site, to something as harmful as IFraming a fake login form into the page.


HTML Encoding User Output

Now, if you’ve been looking at my sample code, you will have noticed something a bit iffy. I’m talking about this : Html.Raw(Context.Request.Query["query"]). And I have to admit, I cheated a little. You see, by default, when ASP.net Core Razor outputs values onto a page, it always encodes them. If we remove this Raw helper and try and inject a script tag, we instead see this on the page :

So why didn’t this actually run the script tag? It looks like it should, right? Let’s take a look at the source code of the page.
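With a simple <script>alert("this is an XSS")</script> payload, the rendered source comes out looking something like this :

Search Query : &lt;script&gt;alert(&quot;this is an XSS&quot;)&lt;/script&gt;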

Look at how our script tag actually got written to the HTML. It’s been escaped for us, meaning the script hasn’t actually been run! Hurrah! So for content that gets written directly to the page, we are somewhat protected by the framework.

How about other places we are displaying this data? Maybe we are building a SPA and not using ASP.net Core Razor at all. Most javascript libraries (even jQuery) can encode data for you, but it pays to check whether this is an automatic or manual process, and anywhere you are outputting user input directly into HTML should be triple-checked for valid encoding.

An interesting argument to note is that I have come across developers who insist on HTML encoding things as they are stored in the database, and then displaying them as is on the webpage (typically when you aren’t using Razor, so you don’t get the auto encoding). This will work, but I think it’s a bad practice to follow. If you only encode data as you store it, you are still open to reflected XSS attacks, because reflected input is never stored anywhere (it’s shown directly back to the user).

URL Encoding User Input

HTML encoding is fine when you are outputting user data directly into HTML, but at times you may need to accept user input and put it into a URL. URLs do not encode the same characters as HTML, so you may find yourself trying to override everything with the Raw helper. Do not do this! .NET Core has you covered with a URL encoder.

To access it, you first need to install the following nuget package from your package manager console in Visual Studio :
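A likely candidate is the System.Text.Encodings.Web package, which contains the UrlEncoder used in the sketch below :

Install-Package System.Text.Encodings.Web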

You can then encode your URLs in a view like so :
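A sketch of encoding a user-supplied value into a link inside a Razor view (the link itself is illustrative) :

@using System.Text.Encodings.Web

@{
    // Encode the raw query string value so it is safe to place inside a URL.
    var encodedQuery = UrlEncoder.Default.Encode(Context.Request.Query["query"]);
}

<a href="/search?query=@encodedQuery">Search again</a>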

Browser Protection

An interesting point to note is that browsers are jumping in to protect users against XSS. With a very simple payload of  <script>alert("this is an XSS")</script> in Chrome, I actually end up with the following :

That said, you can find cheat sheets online that show you values to try in order to bypass these filters. A good writeup by a user trying to get around Chrome’s XSS filter can be found here : https://blog.securitee.org/?p=37. Essentially, it’s always going to be an arms race, and relying on a browser to protect your users would be foolhardy. Not to mention the users who don’t update their browsers anyway.

X-XSS-Protection Headers

This is another way of protecting your users that falls into the basket of “great to have, but encode your output please”. Using the X-XSS-Protection header you are able to direct a browser on how best to handle any XSS exploit it detects. The header has 4 different values you can use.

X-XSS-Protection: 0
Attempts to disable XSS Protection in the browser (Good if you want to try and test things out).

X-XSS-Protection: 1
Enables XSS protection, but if XSS is detected the browser will try and sanitize the output (e.g. encode it or strip the offending characters). This can be dangerous because the resulting HTML could be just as harmful. See here : http://blog.innerht.ml/the-misunderstood-x-xss-protection/.

X-XSS-Protection: 1; mode=block
Enables XSS protection and will block the page loading all together if any XSS exploit is detected.

X-XSS-Protection: 1; report=<reporting-uri>
This is a Chromium-only value that will report back to a URI of your choosing any pages on which an XSS exploit was detected.

Realistically the only option you should be using is mode=block, as this will protect you the best. You can set the header at the code or server level. At the code level, it’s as easy as adding an additional middleware to your pipeline. Something like so :
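A sketch of that middleware, added in the Configure method of startup.cs :

public void Configure(IApplicationBuilder app)
{
    app.Use(async (context, next) =>
    {
        // Add the header before the rest of the pipeline runs so it is present on every response.
        context.Response.Headers.Add("X-XSS-Protection", "1; mode=block");
        await next();
    });

    app.UseMvc();
}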

If you are interested in further reading, we have an entire article dedicated to just the X-XSS-Protection header, and another on 3 Security Headers That Every Site Should Have.

Summary

Cross Site Scripting is one of those exploits that refuses to die, mostly because of people not doing the basics right. As we’ve seen, in ASP.net Core the automatic encoding in Razor is a great out of the box protection, and HTML encoding in general, no matter the framework, will solve a great deal of our problems. Browsers are making big strides in trying to protect people too, but this doesn’t mean developers can become complacent about their role in protecting end users.

In our next topic from the OWASP Top 10 – 2017, we will be tackling Broken Access Control.