This article is part of a series on the OWASP Top 10 for ASP.net Core. See below for links to other articles in the series.

A1 – SQL Injection
A2 – Broken Authentication and Session Management
A3 – Cross-Site Scripting (XSS) (Coming Soon)
A4 – Broken Access Control (Coming Soon)
A5 – Security Misconfiguration (Coming Soon)
A6 – Sensitive Data Exposure (Coming Soon)
A7 – Insufficient Attack Protection (Coming Soon)
A8 – Cross-Site Request Forgery (Coming Soon)
A9 – Using Components with Known Vulnerabilities (Coming Soon)
A10 – Underprotected APIs (Coming Soon)

In our previous article on the OWASP Top 10 we talked about SQL Injection. Where SQL Injection has a pretty definitive explanation and examples, this next one on “Broken Authentication and Session Management” is a bit more open ended. It covers everything from bad password storage (plain text, weak hashing), to exposing a session identifier that can then be stolen (for example a session string in a URL), all the way to simple things such as timing out an authenticated session.

As always, while the topics we talk about here are a good start to protecting your ASP.net Core application, they are by no means the end of the road. Especially when it comes to things like hashing of passwords. It’s a game of always staying up to date with the latest threats and updating your application as you go along.

Let’s jump right in!

Password Hashing

It goes without saying that all passwords stored in a database should be hashed and have an individual salt (more on the individual salt later). Under no circumstances should passwords be stored as plain text. When you store plain text passwords, you not only run the risk of users on your website having their accounts stolen if you get hacked, but because people tend to use the same password on multiple sites, you also run the risk of being the source of pain for a user across every website they use.

Using ASP.net Core Identity

If you use the ASP.net Core Identity framework, out of the box you are going to have secure password hashes and an individual salt used. Identity uses the PBKDF2 hashing function for passwords, and it generates a random salt per user. If you are unsure of what you are doing, use the out of the box functionality! Microsoft actually has pretty good documentation on getting the framework up and running.
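For illustration, here’s a minimal sketch of hashing and verifying a password using Identity’s PasswordHasher directly (the IdentityUser instance and the sample strings are just placeholders):

```csharp
using Microsoft.AspNetCore.Identity;

var hasher = new PasswordHasher<IdentityUser>();
var user = new IdentityUser { UserName = "jane" };

// HashPassword produces a PBKDF2 hash with a random per-user salt baked into the result.
string hashed = hasher.HashPassword(user, "apples4tea");

// VerifyHashedPassword re-hashes the supplied password with the stored salt and compares.
PasswordVerificationResult result = hasher.VerifyHashedPassword(user, hashed, "apples4tea");
```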

Rolling Your Own

While using the out of the box ASP.net Core Identity framework is definitely preferable, there may be times when you just need to roll your own. But “rolling your own” only extends to the C# code that you use to authenticate; under no circumstances should you “invent” a hashing algorithm of your own to store passwords.

OWASP recommends 4 different one-way hashing functions for storing passwords. In order of preference they are Argon2, PBKDF2, scrypt and bcrypt. If you intend to write your own authentication layer you must use one of these.

What Is Salting?

Salting is the act of adding a random string of text to your password before hashing it. In this way, even if the same password is hashed, the resulting hash will be different… Confused? Let’s use an example. I will use the PBKDF2 hashing function for this example.

Let’s say I am using the password apples4tea. When I hash this I get the result of 09ADB1C51A54C33C11CD3AE113D820305EFA53A173C2FF4A3150B5EC934E76FF. Now if a second user signs up to my site and uses the same password, they will have exactly the same hash. You can test this yourself using an online PBKDF2 calculator. Why is this bad? Well, it means any hacker can essentially “pre-compute” what a password hash would be (for example, taking a list of the most popular passwords today), and simply do a string compare in a database.

On top of this, it means that all users who share the same password have the same hash. When one user’s password is “cracked” or even guessed, anyone else who uses that same password has their password leaked also.

Now let’s say I add in a random salt for the user and concatenate it onto the start of the password. The first time round my salt is H786Bnh54A. Hashing the full string of H786Bnh54Aapples4tea gives me the hash of DfsjpycJwtWkOu8UcP8YXC/G09HES8LU+kku0iSllO4=. Another user signs up to the site and is given a randomly generated salt of 76HNhg67Ac. Now we hash their salt with the same password and we end up with a hash of RP62+SmFCJLeQzROTtk5HpMId0zuFtsPeBFuBLpH/Sc=. Same password, and yet a different hash.
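To make that concrete, here’s a minimal sketch of generating a per-user salt and hashing with PBKDF2 via the Microsoft.AspNetCore.Cryptography.KeyDerivation package (the iteration count and sizes here are illustrative, not recommendations):

```csharp
using System;
using System.Security.Cryptography;
using Microsoft.AspNetCore.Cryptography.KeyDerivation;

// Generate a random 128-bit salt for this specific user.
byte[] salt = new byte[128 / 8];
using (var rng = RandomNumberGenerator.Create())
{
    rng.GetBytes(salt);
}

// Hash the password together with the salt; store both the salt and the resulting hash.
string hashed = Convert.ToBase64String(KeyDerivation.Pbkdf2(
    password: "apples4tea",
    salt: salt,
    prf: KeyDerivationPrf.HMACSHA256,
    iterationCount: 10000,
    numBytesRequested: 256 / 8));
```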

Quite famously, Adobe had a giant data breach in 2013 that leaked users’ passwords. The passwords were not salted per user, and worst of all the password “hints” were stored alongside the passwords. That meant that if a hint helped you guess one user’s password, any user using the exact same password would have the same hash in the database! Sophos did a great write-up of the breach here. And of course there is an XKCD of the incident too.

If you use ASP.net Identity, the salt is stored with the password in the same column in the database (because the salt is the same length every time, we can count off the salt characters, with the remainder being the hash). In previous iterations of ASP Membership (in the .NET full framework), the salt was stored in a separate column. Either approach is fine as long as the salt is per user, and not application wide.

Exposing Session Identifiers

In previous versions of ASP.net, you could have a “cookieless” session. You would then see URLs that looked not too dissimilar to the following: http://www.example.com/(S(lit3py55t21z5v55vlm25s55))/orderform.aspx. The session identifier is contained within the URL rather than in a cookie. This caused havoc for multiple reasons. Imagine sending this URL to a friend; now they have access to your session! Even worse, if you click on any outbound links on this page, the referrer header will be set to include your session.

In ASP.net Core, cookieless sessions were never implemented, so you won’t see URLs like this. That doesn’t mean you won’t see people trying to implement something similar in a roll-your-own fashion, however. In simple terms, no URL that is given to another user, via any means, should suddenly be able to impersonate that user.

Sending Data Over Unencrypted Connections

There is actually an entire OWASP Top 10 article coming up about encrypted connections, but it deserves special mention here. Even if your passwords are all hashed with a per-user salt, and you don’t expose session identifiers in the URL, it means nothing if you are susceptible to “man in the middle” attacks. A user connecting to some dodgy airport open wifi could unintentionally broadcast out their clear text password when logging in.

SSL means you are encrypted from the user’s computer to the server, no matter the route the traffic takes, dodgy wifi included. With free SSL providers such as Let’s Encrypt popping up, and even sites like Cloudflare offering free SSL, there is really no reason not to protect your users by taking advantage of it.

Lockouts, Timeouts, And More

On a final note, we’ve talked about ASP.net Core Identity a lot in this article. But again I would like to point out just how much it enforces good habits. Let’s take a look at the setup for Identity that you would typically put in your ConfigureServices method.
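A minimal sketch of that setup, assuming an EF-backed store (ApplicationDbContext and the option values are illustrative):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddIdentity<IdentityUser, IdentityRole>(options =>
    {
        // Password complexity requirements.
        options.Password.RequiredLength = 8;
        options.Password.RequireDigit = true;
        options.Password.RequireNonAlphanumeric = false;

        // Lock an account after repeated failed logins.
        options.Lockout.MaxFailedAccessAttempts = 5;
        options.Lockout.DefaultLockoutTimeSpan = TimeSpan.FromMinutes(15);

        // Require the email address to be confirmed before signing in.
        options.SignIn.RequireConfirmedEmail = true;
    })
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();
}
```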

Look at the options you have here. There are very good password enforcement options (although arguments can be made that you shouldn’t go overboard on requirements like alphanumeric characters, as people tend to just write passwords down on pieces of paper…), good options for locking a user’s account if they continually type in bad passwords, and even the ability to require email confirmation (which, again, is part of the Identity framework).

And in terms of how the cookies are stored on a user’s machine, we have these options:
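A sketch under ASP.net Core 2.0 conventions (the values are illustrative):

```csharp
services.ConfigureApplicationCookie(options =>
{
    // Cookies should always expire; sliding expiration renews the cookie on activity.
    options.ExpireTimeSpan = TimeSpan.FromMinutes(60);
    options.SlidingExpiration = true;

    // HttpOnly cookies cannot be read from JavaScript.
    options.Cookie.HttpOnly = true;
});
```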

Again, all great best practices. A user’s cookie should always have an expiration by default. An example attack from OWASP even explicitly calls out the “public computer” session timeout issue: a user on a public computer closes the browser tab instead of logging out, and an attacker who uses the same browser later is still authenticated as them.

On top of that, we should always use HttpOnly cookies where possible. When a cookie is set to HttpOnly, it is not accessible via JavaScript (and as such, possible XSS vulnerabilities), and can only be accessed when sent as part of a request.

I use these examples because they are great pointers to best practice when it comes to authentication and session management. Even if you decide to roll your own, you should investigate what Identity gives you out of the box so that you can replicate some or all of the functionality, and always keep up to date with the latest improvements to the framework.

Summary

Authentication and session management is such a broad topic, and one that I think is plagued by slow moving legacy applications. Often when I’ve shown new developers things such as session identifiers in the URL, they can’t imagine why someone would ever do that. “Did people actually ever think that was secure?” is an actual quote from a junior developer I’ve worked with. I think that’s actually the essence of the topic: we need to be at the forefront of what’s new and secure when it comes to authentication and sessions, so that we stay ahead of the “Did people actually ever think that was secure?” curve.

In our next article, we will investigate the never ending fight against Cross-Site Scripting (Or XSS). See you then!


OWASP, or the Open Web Application Security Project, is a non-profit organization whose purpose is to promote secure web application development and design. While they run different workshops and events all over the world, you have probably heard of them because of the “OWASP Top Ten” project. Every few years, OWASP publishes a top 10 list of the most critical web security risks. Checking off these 10 items by no means makes your website secure, but it definitely covers your bases for the most common attack vectors on the web.

When I first started programming and heard about OWASP, the hardest thing for me was trying to put it into practical terms. When someone started talking about “CSRF” for example, I wanted to know what that actually looks like in .NET terms, what the basic principles to protect myself from it are, and what (if any) built-in systems .NET has. This 10 part article series will attempt to answer these questions within the realm of ASP.net Core. By no means will you reach the end and suddenly be immune to all web attacks, but it should hopefully help you understand the OWASP Top 10 from an ASP.net Core standpoint.

Another point to note, I’m going to base these top 10 off the OWASP 2017 Release Candidate. These have not yet been accepted as the official top 10 for 2017, and if they change I will be sure to add additional articles to cover everything off.

So without further ado, let’s get started! Our first item on the list is SQL Injection.

How Does SQL Injection Work?

SQL Injection works by modifying an input parameter that is known to be passed into a raw SQL statement, in a way that the SQL statement executed is very different to what is intended. That might sound like a whole lot of mumbo jumbo, so let’s take a working example.

If you would like the entire working code project on your machine for this post, you can grab everything from Github here. You can still follow along without the code as I’ll post code samples and screenshots the entire way through.

First let’s create a database with two tables. The first table, titled “NonsensitiveDataTable”, contains some data that we don’t mind sharing out to the user. The second table, aptly titled “SensitiveDataTable”, contains users’ credit cards and social security numbers. Admittedly a little extreme, but bear with me for a bit. Here is how the tables look in SQL Server.

Now let’s say we have an API endpoint. All it does is take a parameter of an “Id”, and we use this to request data from our NonSensitiveDataTable.
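The original code sample was an image; here is a minimal sketch of what such a vulnerable endpoint might look like (the controller, route, connection string and column mapping are all placeholders), with the id concatenated straight into the SQL:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Microsoft.AspNetCore.Mvc;

public class NonSensitiveController : Controller
{
    private const string ConnectionString = "<your connection string>"; // placeholder

    [HttpGet("api/nonsensitive")]
    public IActionResult Get(string id)
    {
        var results = new List<string>();
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = connection.CreateCommand())
        {
            // DANGER: user input is concatenated directly into the SQL statement.
            command.CommandText = "SELECT * FROM NonSensitiveDataTable WHERE Id = " + id;
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    results.Add(reader[1].ToString()); // assumes column 1 holds the name
            }
        }
        return Ok(results);
    }
}
```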

This is a rather extreme example and coded rather poorly, but it illustrates the point. Hitting this endpoint to get data, we seem to have no issue.

But just looking at the code we can see something suspect. Notably this line here :
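From the sketch above, it’s the string concatenation:

```csharp
command.CommandText = "SELECT * FROM NonSensitiveDataTable WHERE Id = " + id;
```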

We are passing the Id parameter directly into our SQL Statement. People could type anything in there right? Well, we know that the record of “Mark Twain” is returned when we request the ID of 2. But let’s try this :
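Something along these lines (the host and route are placeholders for wherever the endpoint is hosted):

```
http://localhost:5000/api/nonsensitive?id=2 OR id=1
```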

Wow… So why does adding the text “OR id = 1” suddenly change everything? The answer lies in our code above. When we simply pass the id of “2” to our code, it ends up executing this statement:
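```sql
SELECT * FROM NonSensitiveDataTable WHERE Id = 2
```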

But when we pass in our OR statement, it actually ends up reading like so :
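```sql
SELECT * FROM NonSensitiveDataTable WHERE Id = 2 OR id = 1
```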

So here we have managed to modify what SQL statements get executed simply by changing a query parameter. But no big deal right? The user will get the wrong data and that’s the end of that. Well, since we know we can modify the SQL statement, let’s try something funky. Let’s try this URL:
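Again a reconstruction, with a UNION-style payload (the column names are assumptions to make the column counts line up):

```
http://localhost:5000/api/nonsensitive?id=999 UNION ALL SELECT Id, CreditCardNumber FROM SensitiveDataTable
```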

Uh oh… That doesn’t look good. The query that ran ended up looking like this :
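```sql
-- Reconstructed; the exact column list depends on the table schemas.
SELECT * FROM NonSensitiveDataTable WHERE Id = 999 UNION ALL SELECT Id, CreditCardNumber FROM SensitiveDataTable
```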

It says, find me the NonSensitiveDataTable where the ID is 999, and while you are at it, link it up to the data from the SensitiveDataTable. Because there was no record with the ID of 999, we jumped immediately to blurting out the SensitiveDataTable, in this case leaking out credit card info.

Now the first thing you are going to ask is… do I not need to know that the “SensitiveDataTable” exists before I go and run this query? The answer is yes and no. Quite often you will find guesswork can do a tonne for you. If this is an eCommerce site, the chances are they will have a “Customers” table and an “Orders” table etc. SQL Injection isn’t always as easy as copy and pasting in a URL and suddenly having the keys to the kingdom.

However, let’s try one more thing. Let’s try to get the following query going. It will tell us table names that are in the database.
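One way to do it on SQL Server is to query the information_schema views:

```sql
SELECT TABLE_NAME FROM information_schema.tables
```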

There you have it, a little less guesswork. We could actually sit there for days throwing commands in here. You will often find SQL Injection “cheatsheets” out on the web with a whole heap of SQL commands to try out. Often with SQL Injections it’s a case of throwing things against the wall and seeing what sticks.

I’ll leave you with one final query you “could” try and run. Let’s say you get frustrated trying to grab data, and you just want to ruin someone’s day. Imagine running the following command :
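```sql
-- Reconstructed; a stacked query appended through the id parameter.
SELECT * FROM NonSensitiveDataTable WHERE Id = 999; DROP TABLE NonSensitiveDataTable;
```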

Here we have just given up, and we are just going to drop a table. Maybe we even grabbed all the data we could, and we just want to make life miserable for the resident DBA. This would be one way to do it for sure!

Now we’ve seen what an SQL Injection looks like, let’s get on to protecting ourselves!

Sanitize Your Inputs

You will quite often find that the first comment on any article describing a recent SQL Injection attack is to “Always sanitize your inputs” (Quickly followed by “Using parameters in your queries is better!” – but more on that later).  Sanitizing your inputs could mean a couple of things, so let’s run through them.

Casting To A Non String Type

In our example code, because we know our Id is supposed to be an int, let’s just cast it to one.
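A sketch building on the earlier endpoint:

```csharp
// Refuse anything that isn't a plain integer before it gets near the SQL.
if (!int.TryParse(id, out var parsedId))
    return BadRequest();

// Concatenating an int is safe; there is nothing to inject.
command.CommandText = "SELECT * FROM NonSensitiveDataTable WHERE Id = " + parsedId;
```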

Perfect! Now if someone tries to add further info to our query string, it’s not going to be parsed to an int and we will be saved!

An even easier example in ASP.net Core would of course be to just let our routing do the work for us.
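For example (the route is a placeholder):

```csharp
// The {id:int} route constraint means anything that isn't an int never reaches this action.
[HttpGet("api/nonsensitive/{id:int}")]
public IActionResult Get(int id)
{
    // id is already a strongly typed int here.
    return Ok(id);
}
```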

Now we don’t even need to do any manual parsing. ASP.net Core routing will take care of it for us!

But of course this only works if your query actually is against an integer column. If we really do need to pass strings, what then?

Whitelist/Blacklist/Character Replacement

Honestly, I don’t want to go too deep into this one because I think it’s a pretty error-prone way to deal with SQL Injection. But if you must, you can run strings against a whitelist or blacklist of known good values before passing them to SQL. Or you can replace all instances of certain characters (quotes, semicolons etc). But all of these rely on you being one step ahead of an attack, and on pitting your knowledge of the intricacies of the SQL language against someone else’s.

While languages like PHP have inbuilt functions for “escaping” SQL strings, .NET core does not. Further, simply escaping quotes does not protect you from the root cause of SQL Injection anyway.

Sanitizing your inputs requires you to “remember” to do it. It’s not something that becomes a pattern to follow. It works. But it’s not the ideal solution.

Parameterized Queries

Using Parameters in your SQL queries is going to be your go-to when defending against SQL Injection attacks. Even if you are using an ORM of some sort (Talked about a bit below), behind the scenes chances are they will be using parameterized queries.

So how can we use them in our example project? We are going to change things up a bit and instead query on the “Name” field of our NonSensitiveDataTable, just because it affords us a little bit more flexibility.
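Here’s a sketch of the endpoint rewritten with a parameter (again, building on the earlier placeholder controller):

```csharp
[HttpGet("api/nonsensitive")]
public IActionResult GetByName(string name)
{
    var results = new List<string>();
    using (var connection = new SqlConnection(ConnectionString))
    using (var command = connection.CreateCommand())
    {
        // The query text never changes; the value travels separately as a parameter.
        command.CommandText = "SELECT * FROM NonSensitiveDataTable WHERE Name = @name";
        command.Parameters.AddWithValue("@name", name);
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
                results.Add(reader[1].ToString());
        }
    }
    return Ok(results);
}
```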

Can you see where we have added parameters to our query? So how does this work? Well, with the aid of the SQL Profiler, the actual SQL that is being sent looks like this :
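It ends up as a call to sp_executesql (reconstructed from a typical trace; the exact parameter typing may vary):

```sql
exec sp_executesql N'SELECT * FROM NonSensitiveDataTable WHERE Name = @name',
    N'@name nvarchar(4000)',
    @name = N'Mark Twain'
```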

Wow… That’s a heck of a lot different. So what’s going on here? Essentially we are sending the query, but saying “Later on, I will tell you what data to query against”. We pass the exact value we want to query against outside the actual SELECT statement. In this way, our original query is kept intact and it doesn’t matter what a user types into the query string.

The best thing about using Parameters is that they become an easy pattern to follow. You don’t have to “remember” to escape certain strings, or “remember” to use a whitelist. It just works.

Stored Procedures

SQL Stored Procedures are another good way of avoiding SQL Injection attacks. Though stored procedures seem to be out of favour with developers these days, they work similarly to parameterized queries in that you are passing your SELECT statement and your query data in two different “lots” if you will. Let’s quickly create a stored procedure to try out (if you are using the example project from Git, you already have this SP in your database and you don’t need to run the following).
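Something along these lines (the procedure name and parameter are assumptions to match the example):

```sql
CREATE PROCEDURE GetNonSensitiveDataByName
    @Name nvarchar(100)
AS
BEGIN
    SELECT * FROM NonSensitiveDataTable WHERE Name = @Name
END
```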

Let’s create an API endpoint that will run this.
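A sketch of that endpoint (building on the earlier controller; CommandType comes from System.Data):

```csharp
[HttpGet("api/nonsensitive/sp")]
public IActionResult GetViaStoredProcedure(string name)
{
    var results = new List<string>();
    using (var connection = new SqlConnection(ConnectionString))
    using (var command = new SqlCommand("GetNonSensitiveDataByName", connection))
    {
        // Mark the command as a stored procedure call and pass the value as a parameter.
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@Name", name);
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
                results.Add(reader[1].ToString());
        }
    }
    return Ok(results);
}
```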

When we hit this endpoint with the SQL Profiler running, we can see the following getting run.
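Again a reconstruction of the trace:

```sql
exec GetNonSensitiveDataByName @Name = N'Mark Twain'
```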

As you can see, it’s very similar to our parameterized query above, where the actual details of our query are sent separately. While it’s rare to see an entire project built around stored procedures these days, they do protect you against SQL Injection attacks.

Use Of An ORM

Now this next part is an interesting part of mitigating SQL Injection attacks. Because in one sense, it becomes a one stop shop for protection. But the reason I’ve put it last is because I think it’s better to understand what’s going on under the hood of an ORM that protects you, and that’s typically parameterized queries.

If you use something like Entity Framework, when you run a LINQ query to fetch your data, any LINQ “Where” statement will be packaged as a parameterized query and sent to SQL Server. This means you really need to go out of your way to open yourself up to SQL Injection; however it’s not impossible! Almost all ORMs are able to send raw SQL queries if you really want to. Take a look at this article from Microsoft on sending raw SQL through Entity Framework Core here. At the very least, using an ORM makes protection against SQL Injection the “default” if you will, rather than something extra added on top.
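For example (assuming a hypothetical DbContext with a NonSensitiveData DbSet):

```csharp
// The name value is sent to SQL Server as a parameter, not concatenated into the query.
var record = _context.NonSensitiveData
    .Where(x => x.Name == name)
    .FirstOrDefault();
```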

Lowest Possible Permissions

Remember our drop table command? Just in case you forgot, here it is below:
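```sql
SELECT * FROM NonSensitiveDataTable WHERE Id = 999; DROP TABLE NonSensitiveDataTable;
```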

Will our website ever actually need to drop a table? Unlikely. And yet in our scenario we’ve given it the keys to the kingdom to do whatever it likes. Giving fine-grained permissions, or the lowest possible permission level, is our final step to “stopping” SQL Injection attacks. I’ve put “stopping” in quotes because in reality we are still vulnerable to sensitive data breaches, but our attacker at least cannot drop tables (they can however still run delete commands!).

Most SQL systems (MySQL, Postgres, MSSQL) have inbuilt roles that allow simple reads and writes through, but not modifications to the actual table schema. These roles should be used as much as possible to limit any possible attack.
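For example, on SQL Server you might give the web application’s login read/write access only (the user and login names here are hypothetical):

```sql
CREATE USER WebAppUser FOR LOGIN WebAppLogin;

-- Reads and writes are allowed; schema changes (like DROP TABLE) are not.
ALTER ROLE db_datareader ADD MEMBER WebAppUser;
ALTER ROLE db_datawriter ADD MEMBER WebAppUser;
```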

Summary

As we’ve seen, SQL Injections can be pretty brutal about leaking sensitive data, and even destroying it. But we’ve also seen that ASP.net Core already has ways we can protect ourselves. While relying on input parsing/type conversion will get us part of the way there, parameterized queries are really our answer. And best of all, there are very few projects these days where you won’t be using an ORM which gives us protection right out of the box.

In our next article in the series, we will be tackling the pretty broad topic of “Broken Authentication and Session Management”. It’s a pretty open ended topic and is more about “practices” related to authentication and sessions rather than a straight forward “Here’s the issue”. I’ll see you then.

 

Raygun is a super easy to use monitoring tool. If you’ve ever used New Relic, it’s not too dissimilar, but focuses a bit more on crash reporting and error logging. They currently have a 14 day trial to sign up, and best of all, they have an awesome Nuget package for integrating your ASP.net Core app. Super easy!

The Basics

To get up and running will only take a few lines of code!

First install the following Nuget package in your package manager console :
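```
Install-Package Mindscape.Raygun4Net.AspNetCore
```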

Next, go into your appsettings.json and add in a setting called “RaygunSettings” and inside that put “ApiKey”. This should all be at the root level so for example my complete appsettings looks like the following :
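With a placeholder key, that looks like:

```json
{
  "RaygunSettings": {
    "ApiKey": "YOUR_API_KEY_HERE"
  }
}
```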

After this, head over to your startup.cs. In your ConfigureServices method, add a call to “AddRaygun” and pass in your Configuration object.
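A sketch (assuming Startup has the usual Configuration property):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddRaygun(Configuration);
    services.AddMvc();
}
```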

Then in the Configure method, add in the Raygun middleware. Note that this should be early on in the pipeline (If not the very first thing in your pipeline). Remember that Middleware is run in order! If you place the Raygun middleware after the call to UseMvc for example, you will not log anything should an error be thrown inside the Mvc Middleware. It should end up looking a bit like this :
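```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Raygun goes first so it wraps everything that comes after it in the pipeline.
    app.UseRaygun();

    app.UseMvc();
}
```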

Next we head over to our controller and just throw a test exception inside an action to see how we get on.
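```csharp
public IActionResult Index()
{
    throw new Exception("Testing Raygun!");
}
```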

When we load this in our browser and head over to the Raygun dashboard, voilà!

Manually Logging Exceptions

Now the above is great when you are just trying to catch any old unhandled exception, but there may be cases where you want to log an exception in code that you have caught and you don’t want to crash your entire application. For that, Raygun allows you to manually log exceptions.

Firstly, to log any exception you can do the following basic line of code anywhere you want :
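Assuming you have your API key handy and a caught exception in scope, it’s a one-liner:

```csharp
new RaygunClient("YOUR_APP_API_KEY").SendInBackground(exception);
```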

So if you are in a pinch and just want to get things going, there we go! Done!

One important thing to note is the “SendInBackground” call; this should not be used if your code looks something like this:
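```csharp
try
{
    DoSomethingThatThrows(); // hypothetical work
}
catch (Exception exception)
{
    new RaygunClient("YOUR_APP_API_KEY").SendInBackground(exception);
    throw; // rethrowing may kill the process before the background send completes
}
```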

Because the throw will cause the application to crash and possibly close threads before the message is sent. For that you can use the sync version, which is just “Send”. On top of that, if you are using the Raygun middleware anyway, then you shouldn’t be rethrowing, as you will once again log the exception further down the middleware. In simple terms, SendInBackground should only be used for “handled” exceptions.

In any case, the fact we are “new-ing” things up makes our code a little hard to unit test. So let’s create a nice wrapper for ourselves. First, let’s create a “Settings” object that can hold any configuration we might need for the logger. While this is only going to hold an API key at first, it’s an awesome pattern to use as it allows you to easily add new configuration options further down the line without having to mangle the constructor of our logger.
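```csharp
public class RaygunLoggerSettings
{
    public string ApiKey { get; set; }
}
```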

Next create an interface called “IRaygunLogger”. This will be pretty simple at first and will only need to expose a “LogException” method.
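```csharp
public interface IRaygunLogger
{
    void LogException(Exception exception);
}
```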

Now let’s create an implementation of this interface called RaygunLogger. Inside this we will actually new up everything to send the exception to Raygun.
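A minimal sketch of that implementation:

```csharp
public class RaygunLogger : IRaygunLogger
{
    private readonly RaygunLoggerSettings _settings;

    public RaygunLogger(RaygunLoggerSettings settings)
    {
        _settings = settings;
    }

    public void LogException(Exception exception)
    {
        // Fire-and-forget send for handled exceptions.
        new RaygunClient(_settings.ApiKey).SendInBackground(exception);
    }
}
```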

Now the final piece of the puzzle is to register all these classes in our DI container. The RaygunLoggerSettings should be registered as a singleton (as the settings will never change), and we can pull the API key direct from the configuration for this. Our RaygunLogger should be a transient instance. It would look something like this:
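```csharp
services.AddSingleton(new RaygunLoggerSettings
{
    ApiKey = Configuration["RaygunSettings:ApiKey"]
});
services.AddTransient<IRaygunLogger, RaygunLogger>();
```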

Now it’s just about injecting our logger where we need it and logging an exception. Below is a controller that has been set up to log a simple caught exception. Now we can actually run tests through this code, testing how we handle exceptions etc. Awesome!
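```csharp
public class HomeController : Controller
{
    private readonly IRaygunLogger _raygunLogger;

    public HomeController(IRaygunLogger raygunLogger)
    {
        _raygunLogger = raygunLogger;
    }

    public IActionResult Index()
    {
        try
        {
            throw new Exception("Something went wrong!");
        }
        catch (Exception exception)
        {
            // The exception is handled here; we just want it reported.
            _raygunLogger.LogException(exception);
        }

        return View();
    }
}
```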

And when we give it a whirl?!

I can’t think of the last ASP.net project I worked on that didn’t involve Automapper in one way or another. In some ways, it’s become a bit like JSON.net where it’s just a defacto package to install when you start a new project. Automapper has, over time, been moving away from its static class origins towards a more dependency injection type interface. This of course lends itself perfectly to ASP.net Core’s new Service Collection framework!

This post will go over getting things up and running in ASP.net Core, but it does presume you already have quite a bit of knowledge about Automapper already. Let’s jump right in!

Setup

Inside your project, install the following Nuget package from the Package Manager console :
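```
Install-Package AutoMapper.Extensions.Microsoft.DependencyInjection
```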

This will also in turn install the Automapper nuget package if you don’t have it already.

Inside your ConfigureServices method of your startup.cs, add a call to add the AutoMapper required services like so :
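```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Depending on the package version you may also pass assemblies to scan.
    services.AddAutoMapper();

    services.AddMvc();
}
```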

If you want to see what this call does, the actual code is open source and available here. In short, it trawls through your code finding all configuration profiles (See below), and registers the IMapper interface to be used in your classes. It is also rather particular about which scopes it will register them under – which we will also talk about shortly.

Adding Profiles

Automapper Profiles haven’t changed when you move to .NET Core. Take the example profile below, where I have created a class called “DomainProfile” to add mappings, and inside the constructor set up a mapping.
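```csharp
public class DomainProfile : Profile
{
    public DomainProfile()
    {
        // User and UserDto are hypothetical domain/DTO types.
        CreateMap<User, UserDto>();
    }
}
```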

When your application runs, Automapper will go through your code looking for classes that inherit from “Profile”, and will load their configuration. Simple!

Using IMapper

Once you’ve loaded up your profiles, using the IMapper interface is exactly the same as using Automapper with any other DI framework. You inject IMapper into the constructor of your class, and are then able to map objects.
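A sketch (reusing the hypothetical User/UserDto types from above):

```csharp
public class UserService
{
    private readonly IMapper _mapper;

    public UserService(IMapper mapper)
    {
        _mapper = mapper;
    }

    public UserDto GetUser(User user)
    {
        return _mapper.Map<UserDto>(user);
    }
}
```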

Again, it may look like a bit of black magic since you never registered IMapper in the ServiceCollection. But the Nuget package and the call to “AddAutoMapper” in your ConfigureServices takes care of all of this for you.

Scopes

One final thing to end on is how scopes work inside Automapper. Great care has been given so that you can write custom resolvers that rely on things like repositories that may be of transient scope. Confused? Let’s break it down a bit more.

Your configuration (e.g. Automapper Profiles) are singletons. That is, they are only ever loaded once when your project runs. This makes sense since your configurations will not change while the application is running.

The IMapper interface itself is “scoped”. In ASP.net terms, it means that for every individual request, a new IMapper is created but then shared across the entire app for that whole request. So if you use IMapper inside a controller and inside a service for a single request, they will be using the same IMapper instance.

But what is really important is that anything that derives from IValueResolver, ITypeConverter or IMemberValueResolver will be of transient scope. They will also be created using the .NET Core Service Collection DI. Take the following code that might be used to resolve a particular mapping, whereby I want to map across a username based on the source model’s “Id”. To do this I want to access my UserRepository.
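A sketch of such a resolver (UserSource, UserDestination and IUserRepository are hypothetical types):

```csharp
public class UsernameResolver : IValueResolver<UserSource, UserDestination, string>
{
    private readonly IUserRepository _userRepository;

    // Because resolvers are created by the Service Collection, dependencies can be injected.
    public UsernameResolver(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public string Resolve(UserSource source, UserDestination destination, string destMember, ResolutionContext context)
    {
        return _userRepository.GetById(source.Id)?.Username;
    }
}
```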

Because this is instantiated as a transient instance using .NET Core’s Service Collection, my repository will also be resolved correctly and I will be able to do mappings that are more than just mapping across simple values. Super handy!

Have I missed anything? Drop a comment below and let me know!

In a previous post, I talked about getting Cookie Authentication up and running in ASP.net Core 1.X. In ASP.net Core 2.0, there have been a couple of changes to the API that are pretty easy to trip up on. Most of the changes are just simple naming differences, but it can be pretty infuriating following a tutorial where one word trips you up! So let’s go!

Firstly, remember that this tutorial is for adding authentication to your app when you don’t want to use the out of the box “identity” services provided by ASP.net Core. It’s a very simple way to provide authentication cookies based on any logic you want, with any backing database you want.

Setup

If you are creating a new project, ensure that the “Authentication Type” when creating the project in Visual Studio is set to No Authentication. The other authentication types you can pick revolve around the inbuilt identity service, but since we are looking to do something more custom we don’t want this.

With your project up and going. In your nuget package manager, install the following package :
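```
Install-Package Microsoft.AspNetCore.Authentication.Cookies
```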

In the Configure method of your startup.cs, add the following line. Note that it should always come above your call to “UseMvc”, and likely above any other middleware calls that will return a result. Middleware runs in order, so you obviously want authentication to kick in before your MVC pipeline does!
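```csharp
app.UseAuthentication();
```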

In the ConfigureServices method of your startup.cs, you need to add the Cookie Authentication services with their configuration. It should look pretty similar to the following :
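A sketch of the 2.0 registration (the login path is a placeholder):

```csharp
services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.LoginPath = "/Account/Login";
    });
```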

Where the Login Path is the URL to your login page. Remember this page should be excluded from authorization checks!

Disabling Automatic Challenge

Microsoft has decided that any time you access an endpoint that requires authorization and you aren’t logged in, you should be redirected to the login page regardless. In ASP.net Core 1.X, this was not the case; you could set a flag called “AutomaticChallenge” to false. If you do not wish to always redirect the user (e.g. you would prefer to simply return a 401 response code; a Web API using shared Cookie Authentication is a good example where this would be relevant), you can override the redirect logic like so:
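A sketch of that override (within the AddCookie options from above):

```csharp
services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.LoginPath = "/Account/Login";
        options.Events = new CookieAuthenticationEvents
        {
            // Instead of redirecting to the login page, return a plain 401.
            OnRedirectToLogin = context =>
            {
                context.Response.StatusCode = 401;
                return Task.CompletedTask;
            }
        };
    });
```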

This overrides the redirect logic and instead returns a simple 401. There are other events you may wish to override at the same time (For example the UnAuthorized redirect etc).

Registering A User

In exactly the same manner as Cookie Authentication in ASP.net Core 1.X, registering a user will be entirely on you and will live outside any authentication code provided out of the box by Microsoft.

Logging In A User

For logging in a user, let’s create a quick model.
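A hypothetical shape; adjust to your own needs:

```csharp
public class LoginModel
{
    public string Username { get; set; }
    public string Password { get; set; }
}
```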

Now let’s create a controller called “Account” and create a POST method that accepts our login.
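A sketch of what that action might look like (the credential check is obviously a placeholder for your own user store lookup):

```csharp
[HttpPost]
public async Task<IActionResult> Login([FromBody] LoginModel model)
{
    // Placeholder: validate the username/password against your own database.
    if (model.Username != "user" || model.Password != "password")
        return Unauthorized();

    var claims = new List<Claim>
    {
        new Claim(ClaimTypes.Name, model.Username)
    };

    var identity = new ClaimsIdentity(claims, CookieAuthenticationDefaults.AuthenticationScheme);
    var principal = new ClaimsPrincipal(identity);

    await HttpContext.SignInAsync(CookieAuthenticationDefaults.AuthenticationScheme, principal);

    return Ok();
}
```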

If you’ve followed our previous tutorial on Cookie Authentication in ASP.net Core 1.X, then this should look pretty familiar. And that’s because it’s the exact same code with one notable exception. This line here :
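```csharp
await HttpContext.SignInAsync(CookieAuthenticationDefaults.AuthenticationScheme, principal);
```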

In ASP.net Core 1.X is actually
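```csharp
await HttpContext.Authentication.SignInAsync(CookieAuthenticationDefaults.AuthenticationScheme, principal);
```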

So that’s the only difference here.

Authorizing Controllers

This stays the same in ASP.net Core 2.0. You can simply add the “Authorize” attribute onto any controller or action.
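```csharp
[Authorize]
public class SecretController : Controller
{
    public IActionResult Index()
    {
        return Content("Only authenticated users can see this.");
    }
}
```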

Logging Out A User

Logging out a user stays fairly simple with one small change. The line to sign a user out inside an action is now:
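```csharp
await HttpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
```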

Previously in ASP.net Core 1.X it was :
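```csharp
await HttpContext.Authentication.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
```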

With the release of .NET Core 2.0 comes a large “meta package” with the name Microsoft.AspNetCore.All. It’s a sort of god mode package that contains all you need to get up and running on ASP.net Core without having to figure out which nuget package does what. But it does have some massive pitfalls in my opinion.

Why Have A Meta Package At All?

When ASP.net Core is split into multiple smaller packages, the “versions” of these packages start to get out of sync over time. This is especially true if you have ever tried to upgrade a project between .NET Core versions; you end up having to go through all your smaller AspNetCore packages and work out which ones you should be version bumping. Take the following for example:
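An illustrative csproj from a 1.1-era project (package names and versions are representative, not from a specific project):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.3" />
  <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.2" />
  <PackageReference Include="Microsoft.AspNetCore.Authentication.Cookies" Version="1.1.2" />
  <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="1.1.2" />
  <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.1.1" />
</ItemGroup>
```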

Here you have many aspnetcore and EntityFramework packages that all have slightly off versions. In ASP.net Core 2.0, using the “Microsoft.AspNetCore.All” meta package, your package references instead look like this :
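```xml
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
</ItemGroup>
```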

What this should mean is that by updating a single package, you will be able to run on the latest ASP.net Core version. It also makes your csproj a bit easier on the eye with a single package being your “framework”, and any other package references being ones you have added explicitly.

What’s Inside The Microsoft.AspNetCore.All package

The meta package contains :

  • Every single AspNetCore package published by Microsoft
  • Every EntityFrameworkCore package published by Microsoft
  • Any supporting packages required for the framework to run (For example Json.Net)

Now, that probably sounds huge, and you will probably ask yourself: won’t the resulting deployment be out of control? You would essentially be “shipping” the entire framework, right? Well…

Deployment and Runtime Store

Microsoft have gone back to something called the “Runtime Store“. I say the words “gone back” because this functions almost the same as the GAC on .net Full Framework projects. That is, your target machine would have to have the .NET Core runtime installed on it for you to be able to use the aspnetcore meta package. When you deploy, you won’t actually deploy any of the framework packages, your code will instead utilize the ones already on the machine.

And if the machine doesn’t have the runtime installed? Well, that’s where things start to fall down. If you are looking to do self contained deployments (Which was a massive selling point of .NET Core), then you cannot use the meta package. You would need to reference packages manually. I can see how this could turn into a headache down the road because it makes the creation of the project and whether you use the meta package so important. Imagine getting to the point you are ready to deploy, and your dev ops team prefers going down a self contained deployment route. Now you will have to go back and work out which packages you are using from the meta and add them all in manually. Painful!

Getting Started

By default, when you create any project using ASP.net Core 2.0, you will be using the meta package. If you know you won’t be using self contained deployments any time soon, then it’s safe to go ahead and use, and really, it does alleviate some headaches of knowing which packages have what. I can’t tell you how many times I start a new project and I have to go back and manually add in the packages for authentication, entityframework, staticfiles etc. Give it a go and let me know what you think below!

In a previous post, I wrote about how there was no way to send email on .NET Core. In version 1.0 of the framework and 1.6 of the standard, the SMTP client code in .NET had not yet been ported over. That is, until the release of .NET Core 2.0.

With things ported over, the interfaces and classes are virtually identical to ones you might have used in the full framework. Consider the following written in .NET Core 2.0.
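A minimal sketch (the SMTP host, credentials and addresses are placeholders):

```csharp
using System.Net;
using System.Net.Mail;

class Program
{
    static void Main()
    {
        // Hypothetical SMTP host and credentials; substitute your own.
        var client = new SmtpClient("smtp.example.com", 587)
        {
            Credentials = new NetworkCredential("username", "password"),
            EnableSsl = true
        };

        client.Send("from@example.com", "to@example.com", "Hello from .NET Core 2.0", "It works!");
    }
}
```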

For the most part, if you had code that could send email via SMTP in the full framework, it’s likely a matter of a copy and paste job to get it going in .NET Core now!

If you are having issues with this, ensure that you are on the .NET Core 2.0 framework. You can check this by editing your csproj file; it should look like the following:
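```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
</Project>
```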

Where TargetFramework is set to 2.0. Anything lower and you will not have access to the SmtpClient class!

One of the easiest ways to publish your application to Azure App Service is straight from Visual Studio. While not an ideal solution long term or for any team over the size of one, it’s a great way for a solo developer to quickly get something up and running. Most of all, it doesn’t rely on any special configuration, scripts or external services.

Publish via Login

If the Azure account you are publishing to is the same one you use for your Visual Studio account or you have the username/password combination of the Azure account you are publishing to, you can publish direct from Visual Studio.

Inside solution explorer, right click your project and select “Publish”. You want to select “Azure App Service” on the next screen and “Select Existing”. The “Create New” option can be used if you don’t already have an App Service instance already (Rather than creating via the portal) – but we won’t do this this time around.

Click “Publish”. On the next screen you can select your Account, App Service and Deployment Slot to push to. Click “OK” and… that’s it. Really. You should see your project start building, publishing (via Web Deploy), and then Visual Studio will open your browser at the App Service URL.

Publish via Publish Profile

A publish profile is a settings file that holds all the information for deploying your code direct from Visual Studio. The only real difference is that the publish profile can be given to you without you having any Azure login credentials yourself. Other than that, the actual deployment is largely the same.

To get a publish profile from the Azure Portal, head over to the dashboard of your App Service. Along the top you should see an option to “Get Publish Profile”.

With this file you are now able to deploy direct from Visual Studio. It’s important that this file is not just left lying around somewhere. With it, anyone can push code to your app service. So while it’s not quite the keys to the kingdom as much as a user/pass combination is, it’s pretty damn close.

Inside Solution Explorer, Right click your project, select Publish. Select “Import Profile” (You may have to scroll to the right to see this option).

Hit Publish! Immediately your project will build, publish and then open your browser at the App Service URL.

Obvious Issues

I touched on it earlier, but I feel the need to point it out further. Publishing from Visual Studio is not a long term strategy. It cuts out any CI/CD process you have running (if any), it creates possible inconsistencies with what you are deploying (think “Did you get latest before you published?”), and it creates a single point of failure where one developer does all deployments in teams of more than one.

That last point is important, while documentation can get some way around this, a single developer being the “release” guy usually creates some “magic” process where only that person can push code.

Again, with a single developer, especially when it’s just a hobby project, it’s less of an issue.

 

Deploying to the Azure App Service through Github is a simple but effective way to deploy websites that don’t require too much ceremony. Because many projects use Github for hosting their Git repositories, it’s a pretty seamless deployment process that is more around point and clicking in the Azure portal rather than having to write complicated deployment scripts.

It should be noted while this post references Github throughout, the exact same process can be used to setup Bitbucket to automatically deploy on a push.

Setting Up Github Deployment In Azure Portal

Setting up a Github Deployment for an Azure Web App Service is nothing more than a point and click process. Inside your App Service on the Azure Portal, select Deployment Options from under the Deployment category.

Select “Setup” from the menu, and then select your source. In our case either Github or Bitbucket. You should now be able to click “Authorize” and use OAuth to connect your Github/Bitbucket account to Azure. Note that this account will be available for all other Azure App Services, not just this one.

Select your project and branch from the options. Typically your branch can just be master, but if this is a development instance you may want to use your develop branch, or you can even select a particular release branch if you are just using this deployment method as an easy way to push code rather than a continuous deployment strategy.

After connecting, wait 5 minutes for your project to sync up. Note that sometimes you need to go to a different blade in the portal and then head back to see updates (Not the greatest, but it works eventually!)

A great feature is that the deployment engine behind this will only sync files that have changed (since it can see what has changed inside Git). This makes deployments much faster than an FTP transfer or similar, because it only uploads what it needs.

Any pushes to your selected branch will be deployed automatically within seconds, but if you get impatient, you can click the Sync button inside the Deployment Options blade and you should get a deployment going within seconds.

Rollback A Deployment

The Github CD Process allows you to roll back any deployment right from inside the Azure portal. Head to the Deployment Options screen inside your Azure App Service and select the deployment you wish to roll back to.

Select the option to “Redeploy” along the top menu and your previous deployment will be redeployed to production.

Linux App Service

If you are attempting to use Azure App Service with Linux (and now is a good time to be doing so, it’s 50% off!), unfortunately I couldn’t get Continuous Deployment working. Using the exact same Github repository but with a Linux App Service, the deployment failed. I have a sneaky suspicion it’s related to the Docker settings on App Service, but I wasn’t able to get things up and running. Feel free to drop a comment below if you have got it working!

With Azure App Service currently being 50% off for Linux, people are taking advantage and halving their .NET core hosting bill. And why wouldn’t you!? Because Azure App Service is fully PaaS, the underlying OS doesn’t make too much of a difference both from a deployment and management perspective.

Most deployment guides floating around the web center on deploying to a Windows App Service, which really isn’t too different to the Linux version, except for one important step. If you have deployed your ASP.net Core code and are not seeing anything when you visit your website in a browser, read on.

Docker Startup File

Linux App Service actually runs Docker under the hood. To get everything starting correctly you actually need to set a “startup file”. This is essentially the entry point for your app.

Inside the Azure Portal, inside your App Service, under the Settings category, find the option for “Docker Container”. Here is where you can set the runtime stack, but more importantly, your startup details.

For a dotnet project, the startup file line should read :  dotnet /home/site/wwwroot/{yourdllhere}

Where yourdllhere is the DLL of your main web project in your deployment. In practice it should end up looking a bit like this :
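For example, with a hypothetical project DLL name:

```
dotnet /home/site/wwwroot/MyWebProject.dll
```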

Once saved, you should head back to the app service dashboard and restart your service. It does take a while to boot up first time, so wait 5 minutes and give it a spin!