It was only a couple of years ago that I learned about Debugger.Launch(), and since then I've used it on many an occasion and thought "How did I ever live without this?!". It's just such a little miracle tool when working with applications that have complex startup code that can't be debugged easily.

Just the other day, while remembering this beauty of a function, I went back and looked at the documentation to see when it was released. After all, I probably went a good 5 or 6 years developing in .NET without ever using it.

My jaw almost hit the floor!

You're telling me that this has been in the .NET Framework since the dawn of time, and I've only just found out about it?! UGH!

What Is Debugger.Launch?

Let me give you a scenario. You are running an application (such as a web application or Windows Service) that has startup methods you want to debug. Often this will be things like dependency injection setup, early config file reads, or similar. For whatever reason, you can't just hit F5 and start debugging. You need to run the application, then attach a debugger later. For web applications this is sometimes because you are using IIS even in development, and hitting a URL to test your application. And for things like Windows Services, you want to debug while it's actually running as a Windows Service.

Now back in the day, I used to do this :

//Added in the startup code section
Thread.Sleep(10000); //Give myself 10 seconds to attach a debugger

Basically, sleep the application for 10 seconds to allow myself time to attach a debugger. This kind of works, but it's hardly an exact science, is it? If I attach early, I'm left sitting there waiting out the remainder of the sleep time, and if I attach late, I have to restart the entire process.

And that’s where Debugger.Launch() comes in :

//Added in the startup code section
System.Diagnostics.Debugger.Launch(); //Force the attachment of a debugger

You're probably wondering how exactly a debugger gets "forced" to attach. Well, consider the following console application :

using System;

System.Diagnostics.Debugger.Launch();
Console.WriteLine("Debugger is attached!");

Imagine I build this application and run it from the console (i.e. not from inside Visual Studio). I would then see the following popup :

Selecting Visual Studio, it will then open and start debugging my application live! Again, this is invaluable for attaching a debugger at the perfect time in your startup code, and I can't believe I went so long in my career without using it.

How About Debugger.Break()?

I’ve also seen people use Debugger.Break(), and I’ve also used it, but with less success than Debugger.Launch().

The documentation states the following :

If no debugger is attached, users are asked if they want to attach a debugger. If users say yes, the debugger is started. If a debugger is attached, the debugger is signaled with a user breakpoint event, and the debugger suspends execution of the process just as if a debugger breakpoint had been hit.

That first sentence is important, because in my experience the prompt is less reliable than Debugger.Launch(). I generally have much less luck with this prompting the user to attach a debugger. However! I do have luck with it forcing the code to break.

When a debugger is already attached (e.g. you attached a debugger at the right time, or simply pressed F5 in Visual Studio), Debugger.Break() forces execution to stop much like a breakpoint would. So in some ways, it's like a breakpoint that can be shared across developers on different machines, rather than some wiki page saying "Place a breakpoint on line 22 to test startup code".
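As a concrete example, dropping a call like this into your startup code acts as a shared, machine-independent breakpoint (a minimal sketch) :

```csharp
using System;
using System.Diagnostics;

// With a debugger attached, execution pauses here just like a breakpoint.
// Without one, behaviour varies (see the documentation quote above).
Debugger.Break();

Console.WriteLine("Startup continuing...");
```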

It probably doesn’t sound that useful, except for the scenario I’m about to explain…

When Debugger.Launch() Doesn’t Work

In very rare cases, I've been stuck with Debugger.Launch not prompting the user to debug the code. Or, in some cases, I've wanted to debug with an application that isn't presented in the popup. There's actually a simple solution, and it almost goes back to our Thread.Sleep() days.

Our solution looks like :

//Spin our wheels waiting for a debugger to be attached. 
while (!System.Diagnostics.Debugger.IsAttached)
{
    Thread.Sleep(100); //Or await Task.Delay(100) in async code
}

System.Diagnostics.Debugger.Break();
Console.WriteLine("Debugger is attached!");

It works like so :

  • If a debugger is not attached, then simply sleep for 100ms. And continue to do this until a debugger *is* present.
  • Once a debugger is attached, our loop will be broken, and we will continue execution.
  • The next call to Debugger.Break() immediately stops execution, and acts much like a breakpoint, allowing us to start stepping through code if we wish.

Now again, I much prefer to use Debugger.Launch, but sometimes you can’t help but do a hacky loop to get things working.
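As a side note, if your startup code is async, a non-blocking variant of the same loop might look like this (a sketch with Task.Delay actually awaited, rather than blocking the thread) :

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Spin our wheels (without blocking a thread) until a debugger is attached.
while (!Debugger.IsAttached)
{
    await Task.Delay(100);
}

Debugger.Break();
Console.WriteLine("Debugger is attached!");
```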

Another extension of this is to wrap the code in an #if DEBUG directive like so :

#if DEBUG
//Spin our wheels waiting for a debugger to be attached. 
while (!System.Diagnostics.Debugger.IsAttached)
{
    Thread.Sleep(100); //Or await Task.Delay(100) in async code
}

System.Diagnostics.Debugger.Break();
#endif
Console.WriteLine("Debugger is attached!");

This means that should this code make it into production, it doesn't just spin its wheels with no one able to work out why nothing is running. In my opinion, however, any Debugger functions should not make it into checked-in code.

Using these tools, you can now debug code that you once thought was impossible to do.

ENJOY THIS POST?
Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this week's blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

It's somewhat surprising that in the 20 years .NET has been out, there hasn't been an official implementation of a Priority Queue. That hasn't stopped people hacking together their own Priority Queues, and indeed, even Microsoft has had several implementations of priority queues buried internally in the framework, just never exposed to the public. Finally, Microsoft has come to the party and implemented an official Priority Queue in .NET 6. Yes, .NET 6.

If you were coming here because you wanted an implementation for .NET Core, .NET 5, or even .NET 4.6.X, then unfortunately you are out of luck. There are implementations floating around the web, but slowly these will go away with the official .NET Priority Queue coming to the framework.

If you are new to .NET 6 and want to know what you need to get started, check out our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

What Is A Priority Queue?

Before we get started, it’s worth talking about what exactly a Priority Queue is. A Priority Queue is a Queue, where each item holds a “priority” that can be compared against other queue items. When an item is dequeued, the item with the highest priority is popped off the queue, regardless of when it was put on. So if we think of a standard queue as first in, first out (FIFO), and the stack type being last in, first out (LIFO), then a Priority Queue is.. well.. It doesn’t get a nice acronym. It’s more like, whatever in, highest priority out!

Priority can be complex, as we will soon see when we implement custom comparers, but at its simplest it could just be a number where the lower the number (e.g. 0 being the highest), the higher the priority.

Priority Queues have many uses, but are most commonly seen when doing work with “graph traversals” as you are able to quickly identify nodes which have the highest/lowest “cost” etc. If that doesn’t make all that much sense to you, it’s not too important. What’s really good to know is that there is a queue out there that can prioritize items for you!

Priority Queue Basics

Consider the very basic example :

using System;
using System.Collections.Generic;

PriorityQueue<string, int> queue = new PriorityQueue<string, int>();
queue.Enqueue("Item A", 0);
queue.Enqueue("Item B", 60);
queue.Enqueue("Item C", 2);
queue.Enqueue("Item D", 1);

while (queue.TryDequeue(out string item, out int priority))
{
    Console.WriteLine($"Popped Item : {item}. Priority Was : {priority}");
}

The output of this should be relatively easy to predict. If we run it we get :

Popped Item : Item A. Priority Was : 0
Popped Item : Item D. Priority Was : 1
Popped Item : Item C. Priority Was : 2
Popped Item : Item B. Priority Was : 60

The lower the integer, the higher the priority, and we can see our items are always popped based on this priority regardless of the order they were added to the queue. I wish I could extend out this bit of the tutorial but.. It really is that simple!
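One small addition worth knowing: the class also exposes Peek/TryPeek to look at the highest priority item without removing it, and EnqueueRange to add several (element, priority) pairs in one call :

```csharp
using System;
using System.Collections.Generic;

var queue = new PriorityQueue<string, int>();

// EnqueueRange accepts (element, priority) tuples.
queue.EnqueueRange(new[]
{
    ("Item A", 0),
    ("Item B", 60),
    ("Item C", 2)
});

// Peek returns the highest priority item without dequeuing it.
Console.WriteLine(queue.Peek());  // Item A
Console.WriteLine(queue.Count);   // 3
```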

Using Custom Comparers

The above example is relatively easy to comprehend since the priority is nothing but an integer. But what if we have complex logic on how priority should be derived? We could build this logic ourselves and still use an integer priority, or we could use a custom comparer. Let’s do the latter!

Let’s assume that we are building a banking application. This is a fancy bank in the middle of London city, and therefore there is priority serving of anyone with the title of “Sir” in their name. Even if they show up at the back of the queue, they should get served first (Disgusting I know!).

The first thing we need to do is work out a way to compare titles. For that, this piece of code should do the trick :

class TitleComparer : IComparer<string>
{
    public int Compare(string titleA, string titleB)
    {
        var titleAIsFancy = titleA.Equals("sir", StringComparison.InvariantCultureIgnoreCase);
        var titleBIsFancy = titleB.Equals("sir", StringComparison.InvariantCultureIgnoreCase);


        if (titleAIsFancy == titleBIsFancy) //If both are fancy (Or both are not fancy, return 0 as they are equal)
        {
            return 0;
        }
        else if (titleAIsFancy) //Otherwise if A is fancy (And therefore B is not), then return -1
        {
            return -1;
        }
        else //Otherwise it must be that B is fancy (And A is not), so return 1
        {
            return 1;
        }
    }
}

We simply implement IComparer<T>, where T is the type we are comparing. In our case, it's just a simple string. Next, we check whether each of the passed-in strings is the word "sir", and do our ordering based on that. In general, a comparer should return the following :

  • Return 0 if the two items passed in are equal
  • Return -1 if the first item should sort "higher", i.e. have higher priority than the second
  • Return 1 if the second item should sort "higher", i.e. have higher priority than the first

Now when we create our queue, we can simply pass in our new comparer like so :

PriorityQueue<string, string> bankQueue = new PriorityQueue<string, string>(new TitleComparer());
bankQueue.Enqueue("John Jones", "Sir");
bankQueue.Enqueue("Jim Smith", "Mr");
bankQueue.Enqueue("Sam Poll", "Mr");
bankQueue.Enqueue("Edward Jones", "Sir");

Console.WriteLine("Clearing Customers Now");
while (bankQueue.TryDequeue(out string item, out string priority))
{
    Console.WriteLine($"Popped Item : {item}. Priority Was : {priority}");
}

And the output?

Clearing Customers Now
Popped Item : John Jones. Priority Was : Sir
Popped Item : Edward Jones. Priority Was : Sir
Popped Item : Sam Poll. Priority Was : Mr
Popped Item : Jim Smith. Priority Was : Mr

We are now serving all Sirs before everyone else!

When Is Priority Worked Out?

Something I wanted to understand was when is priority worked out? Is it on Enqueue, is it when we Dequeue? Or is it both?

To find out, I edited my custom comparer to do the following :

Console.WriteLine($"Comparing {titleA} and {titleB}");

Then using the same Enqueue/Dequeue above, I ran the code and this is what I saw :

Comparing Mr and Sir
Comparing Mr and Sir
Comparing Sir and Sir
Clearing Customers Now
Comparing Mr and Mr
Comparing Sir and Mr
Popped Item : John Jones. Priority Was : Sir
Comparing Mr and Mr
Popped Item : Edward Jones. Priority Was : Sir
Popped Item : Sam Poll. Priority Was : Mr
Popped Item : Jim Smith. Priority Was : Mr

So interestingly, we can see that when I am enqueueing, there are certainly comparisons, but only against the top of the queue. As an example, we see 3 compares at the top because I added 4 items. That tells me each Enqueue only compares against the very top item; otherwise the rest is likely "heaped".

Next, notice that when I call Dequeue, there is a little bit of comparison too. To be honest, I'm not sure why this is. Specifically, there are two comparisons happening when realistically I assumed there would only be one (to compare the current head of the queue to the next).

Next time an item is popped, again we see a single comparison. And then finally, in the last 2 pops, no comparisons at all.

I would love to explain how all of this works but at this point it’s likely going over my head! That being said, it is interesting to understand that Priority is not *just* worked out on Enqueue, and therefore if your IComparer is slow or heavy, it could be running more times than you think.

That being said, the source code is of course open, so you are more than welcome to make sense of it and leave a comment!

How Did We Get Here?

I just want to give a shout out to the fact that Microsoft does so many things with .NET out in the open. You can see back in 2015 the original proposal for PriorityQueue here : https://github.com/dotnet/runtime/issues/14032. Most importantly, it gives the community an insight into how decisions are made and why. Not only that, but benchmarks are given as to different approaches and a few explanations on why certain things didn’t make it into the first cut of the Priority Queue API. It’s really great stuff!


In the coming months I’ll be covering the new features of .NET 6, including things like MAUI (Cross platform GUI in .NET), PriorityQueue, and so much more. So I thought it would be worth talking a little bit on how to actually get set up to use .NET 6 if you are looking to touch this new tech early on.

In previous versions of .NET/.NET Core, the process for accessing preview versions was somewhat convoluted and involved installing a second "preview" version of Visual Studio to get up and running, but all that's out the window now and things are easier than ever!

Setting Up .NET 6 SDK

To start, head over to the .NET 6 download page and download the latest SDK (not runtime!) for your operating system : https://dotnet.microsoft.com/download/dotnet/6.0

After installing, from a command prompt, you should be able to run "dotnet --info" and see something similar to the below :

.NET SDK (reflecting any global.json):
 Version:   6.0.100-preview.2.21155.3
 Commit:    1a9103db2d

It is important to note that this is essentially telling you that the default version of the .NET SDK is .NET 6 (although, sometimes in Visual Studio, it will ignore preview versions, as we will shortly see). This matters because any projects that don't utilize a global.json will now be using .NET 6 (and a preview version at that). We have a short guide on how global.json can determine which SDK is used right here.
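For example, if you have an existing project that should stay pinned to the .NET 5 SDK, a global.json in its root might look like the below (the version number here is illustrative; use one you actually have installed) :

```json
{
  "sdk": {
    "version": "5.0.201",
    "rollForward": "latestFeature"
  }
}
```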

Setting Up Visual Studio For .NET 6

Now for the most part, developers of .NET will use Visual Studio. And the number one issue I find when people say "the latest version of .NET does not work in Visual Studio" is that they haven't downloaded the latest updates. I don't care if a guide says "You only need version X.Y.Z" and you already have that. Obviously you need Visual Studio 2019, but no matter what anyone tells you about required versions, just be on the latest version.

You can check this by going to Help, then Check For Updates inside Visual Studio. The very latest version at the time of writing is 16.9.1, but again, download whatever version is available to you until it tells you you are up to date.

After installing the latest update, there is still one more feature flag to check. Go Tools -> Options, then select Preview Features as per the screenshot below. Make sure to tick the “Use previews of the .NET Core SDK”. Without this, Visual Studio will use the latest version of the .NET SDK installed, that is *not* a preview version. Obviously once .NET 6 is out of preview, you won’t need to do this, but if you are trying to play with the latest and greatest you will need this feature ticked.

After setting this, make sure to restart Visual Studio manually as it does not automatically do it for you. And you don’t want to be on the receiving end of your tech lead asking if you have “turned it off and on again” do you!

Migrating A Project To .NET 6

For the most part, Visual Studio will likely not create projects in .NET 6 by default when creating simple applications like console applications. This seems to vary, but it certainly doesn't for me. That's fine though; all we have to do is edit our csproj file and change our target framework to net6.0 like so :

<TargetFramework>net6.0</TargetFramework>
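For reference, after the change a minimal console application csproj would look something like this :

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>

</Project>
```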

If you build and you get the following :

The current .NET SDK does not support targeting .NET 6.0.  Either target .NET 5.0 or lower, or use a version of the .NET SDK that supports .NET 6.0

The first things to check are :

  1. Do you have the correct .NET 6 SDK installed? If not, install it.
  2. Is .NET 6 still in preview? Then make sure you have the latest version of Visual Studio *and* have the "Use previews of the .NET Core SDK" option ticked as per above.

And that’s it! You’re now all set up to use the juicy goodness of .NET 6, and all those early access features in preview versions you’ve been waiting for!


It feels like just last month .NET 6 Preview 1 was released…. In fact it was! Such is the cadence of releases with .NET now, they are getting pumped out at blazing speed. It means that each release only has marginal improvements each time, but it also means that you can grab the preview versions and actually start playing with features (and providing feedback) at a pretty crazy pace.

You can grab .NET 6 Preview 2 here : https://dotnet.microsoft.com/download/dotnet/6.0

Now as for what’s new, in short :

  • A key focus of .NET 6 is improving the developer experience by way of speeding up the feedback process of the tools you use every day. In Preview 2, that means showing off vastly improved build times across a range of projects (MVC and Blazor), in some cases cutting the build time by over half! If you are someone like me that runs a project using “dotnet watch”, then it’s great to see improvements in this area.
  • Further improvements to .NET Multi-platform App UI (or MAUI for short), which means single project developer experiences not only across mobile platforms like Android and iOS, but even macOS too!
  • System.Text.Json now has a feature to “IgnoreCycles” which acts similar to the Newtonsoft.Json.ReferenceLoopHandling. This may sound like a small feature, but it’s actually been highly requested for a very long time now! For more information on what this fixes, we have a great article on it here : https://dotnetcoretutorials.com/2020/03/15/fixing-json-self-referencing-loop-exceptions/
  • Implementation of "PriorityQueue". This is massive and a much needed enhancement! It allows developers to create a queue, but give each item in that queue a priority. On Dequeue, items with the highest priority are popped first. This will replace a tonne of custom coded "Queues" that are really just poorly performing lists in the background!

The full release notes as always is here : https://devblogs.microsoft.com/dotnet/announcing-net-6-preview-2/

Overall, those same minor improvements we’ve come to expect each release. But for me, the addition of the PriorityQueue is really great to see. Expect a blog post coming your way soon on the topic!


Normally when loading navigation properties in EF Core, you’re forced to use the “Include” method to specify which navigational properties to pull back with your query. This is a very good practice because it means you are explicitly saying what pieces of data you actually require. In fact, up until EF Core 2.1, there wasn’t even an option to use Lazy Loaded entities (Although if you do want to do that, we have a guide on that here : https://dotnetcoretutorials.com/2019/09/07/lazy-loading-with-ef-core/ ).

Just as an example of how you might use the "Include" method, let's imagine I have two classes. One called "Contact", and one called "ContactEmail".

class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<ContactEmail> ContactEmails { get; set; }
}

class ContactEmail
{
    public int ContactId { get; set; }
    public Contact Contact { get; set; }
    public string Email { get; set; }
}

With EF Core code first, this navigational property would be handled for us based on conventions, no problem there. When querying Contacts, if we wanted to also fetch the ContactEmails at the same time, we would have to do something like so :

_context.Contact.Include(x => x.ContactEmails)
                .FirstOrDefault(x => x.Id == myContactId);

This is called “Eager Loading” because we are eagerly loading the emails, probably so we can return them to the user or use them somewhere else in our code.

Now the problem with this is what if we are sure that *every* time we load Contacts, we want their emails at the same time? We are certain that we will never be getting contacts without also getting their emails essentially. Often this is common on one-to-one navigation properties, but it also makes sense even in this contact example, because maybe everywhere we show a contact, we also show their emails as they are integral pieces of data (Maybe it’s an email management system for example).

AutoInclude Configuration

Up until EF Core 5, you really had no option but to use Includes. That’s changed with a very undocumented feature that has come in handy for me lately!

All we need to do is go to our entity configuration for our contact, and do the following :

builder.Navigation(x => x.ContactEmails).AutoInclude();

To be honest, I've never really used the Navigation configuration builder before, so I didn't even know it existed. And it's important to note that you cannot call AutoInclude() on things like HasOne() or HasMany() configurations; it has to stand on its own like above.

And.. That’s it! Now every time I get Contacts, I also get their ContactEmails without having to use an Include statement.
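For context, here's roughly what the full configuration class might look like (a sketch assuming you use IEntityTypeConfiguration; the class and key names here are just for illustration) :

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

class ContactConfiguration : IEntityTypeConfiguration<Contact>
{
    public void Configure(EntityTypeBuilder<Contact> builder)
    {
        builder.HasKey(x => x.Id);

        // AutoInclude hangs off the Navigation builder, not HasMany/HasOne.
        builder.Navigation(x => x.ContactEmails).AutoInclude();
    }
}
```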

Ignoring AutoInclude

Of course, there are times where you opt into AutoInclude and then the very next day, you want to write a query that doesn’t have includes! Luckily, there is a nice IQueryable extension for that!

_context.Contact.IgnoreAutoIncludes()
    .FirstOrDefault(x => x.Id == myContactId);

Here we can easily opt out so we are never locked into always having to pull back from the database more than we need!


It feels like not long ago we were talking about all the goodies in .NET 5, and here we are already jumping into .NET 6. It actually made me go back and look at the .NET Framework versions on Wikipedia here. Admittedly back then, minor versions of the framework sometimes contained huge changes. For example async/await was added in version 4.5, which these days would suggest a “minor” update, but obviously was pretty huge. But even still, version 1.0 to version 4.8 was 17 years in the making.

The first version of .NET Core was released in 2016, and here we are in 2021, just 5 years later, already seeing preview versions of .NET 6. It really speaks not only to Microsoft's commitment to move fast, but I think to the overall cadence of modern software development. Gone are the days of developers sitting in cubicles, sitting on work for three years until a major release.

You can grab .NET 6 Preview 1 here : https://dotnet.microsoft.com/download/dotnet/6.0

As for what’s new. Well generally the first preview release of a new .NET Version is typically setting the stage for what’s to come and doesn’t necessarily contain a lot of “toys” to play with. With that being said, some of the features that did make it in were :

  • The first iteration of moving Xamarin into .NET to unify the platforms, e.g. being able to build Android/iOS applications in .NET.
  • A first crack at using Blazor for desktop applications (From early reading, this seems very close to how you might use Electron, e.g. It’s still a web control view on desktop).
  • There seems to be talk about better hot reload functionality. I can't find that much information on this. The .NET CLI already has "dotnet watch", but this is more of a complete rebuild rather than a nice iterative hot reload.
  • Improvement to Single File Apps so that they actually execute from that single file rather than extracting into temp directories. This was already the case for single file applications for Linux in .NET 5, but in .NET 6, this functionality has been extended for Windows and Mac.
  • There is now appsettings.json auto complete for commonly used configuration such as logging, host filtering, Kestrel setup etc.
  • WPF is now supported on ARM64.

The full release notes, as always, are here : https://devblogs.microsoft.com/dotnet/announcing-net-6-preview-1/

Overall, probably not a heck of a lot if you are a web developer. One of the major themes of .NET 6 (And even .NET 5), is unifying the various platforms and frameworks that sit under the .NET banner. In .NET 5, it was mostly desktop development, and in .NET 6, it’s a lot of mobile development with Xamarin. It doesn’t mean there won’t be something in future preview versions of course, but for now, get excited about mobile dev!



This post is part of a series on using Auth0 with an ASP.NET Core API, it’s highly recommended you start at part 1, even if you are only looking for something very specific (e.g. you came here from Google). Skipping parts will often lead to frustration as Auth0 is very particular about which settings and configuration pieces you need.

Part 1 – Auth0 Setup
Part 2 – ASP.NET Core Authentication
Part 3 – Swagger Setup


It's very rare to build an API in .NET Core and not use Swagger. After all, it's the easiest self-documenting tool available to developers, and provides a great way to test APIs without using a third party tool such as Postman. Setting up Swagger for general use is not really part of this article series, but we already have a previous article on the subject here : https://dotnetcoretutorials.com/2020/01/31/using-swagger-in-net-core-3/. If you are new to using Swagger, have a read, as this piece of the Auth0 article series will cover setting up Swagger to work with Auth0, but not setting up Swagger itself!

With that out of the way, let’s jump right in.

Adding Auth0 Config To Swagger

In our startup.cs file, and inside the ConfigureServices method, we will have something similar to “AddSwaggerGen”. What we need to do is add a SecurityDefinition to Swagger. What this does is define how our API is authenticated, and how Swagger can authorize itself to make API calls. At a high level, it’s telling Swagger that “Hey, you need a token to call this API, here’s how to get one”.

The full code looks like so :

services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1",
            new OpenApiInfo
            {
                Title = "API",
                Version = "v1",
                Description = "A REST API",
                TermsOfService = new Uri("https://lmgtfy.com/?q=i+like+pie")
            });

    c.AddSecurityDefinition("Bearer", new OpenApiSecurityScheme
    {
        Name = "Authorization",
        In = ParameterLocation.Header,
        Type = SecuritySchemeType.OAuth2,
        Flows = new OpenApiOAuthFlows
        {
            Implicit = new OpenApiOAuthFlow
            {
                Scopes = new Dictionary<string, string>
                {
                    { "openid", "Open Id" }
                },
                AuthorizationUrl = new Uri(Configuration["Authentication:Domain"] + "authorize?audience=" + Configuration["Authentication:Audience"])
            }
        }
    });
});

What we are really adding is that SecurityDefinition. It’s somewhat beyond the scope of this article to really get into the nitty gritty of what each of these properties do, but this is the correct setup for Auth0. Also notice that our AuthorizationUrl is using our previous configuration that we set up to get .NET Core Authentication working.

Now move to the Configure method of your startup.cs. You need to modify your UseSwaggerUI call to look like so :

app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "API");
    c.OAuthClientId(Configuration["Authentication:ClientId"]);
});

Again, this is using a configuration variable that we set up earlier. All going well, if you open Swagger now, you should see a button saying Authorize at the top like so :

Clicking this and authenticating will redirect you back to Swagger, upon which you can make API calls that will send your bearer token.

If you get the following error :

Callback URL mismatch. The provided redirect_uri is not in the list of allowed callback URLs

It's because you need to add your swagger URL (e.g. https://localhost:5001/swagger/oauth2-redirect.html) to the list of Allowed Callback URLs for your Auth0 application.

Now here's where things diverge. If you are using the Authorize attribute on controllers (e.g. you have [Authorize] on top of every controller class), then you are good to go. You should be able to tell because for each controller action inside Swagger, there will be a padlock icon indicating that authentication is required.

If you don’t see this padlock icon, it means that either you don’t have the correct Authorize attribute applied *or* you are using my method of applying Authorize globally. If it’s the former, then apply the Authorize attribute. If it’s the latter, continue reading below!

Adding SecurityRequirementsOperationFilter To Swagger

Swagger identifies which methods require authentication by looking for the [Authorize] attribute on controllers. But of course, if you are applying this globally as a convention like we mentioned earlier, this attribute won’t be there. So instead, we have to give Swagger a hand.
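As a quick refresher, applying Authorize globally typically looks something like the below in ConfigureServices (a sketch using a global filter; your convention may differ) :

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.Authorization;

services.AddControllers(options =>
{
    // Require an authenticated user on every action by default.
    // Individual actions can still opt out with [AllowAnonymous].
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();

    options.Filters.Add(new AuthorizeFilter(policy));
});
```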

Add a class in your API project called “SecurityRequirementsOperationFilter”, and paste the following :

public class SecurityRequirementsOperationFilter : IOperationFilter
{
    /// <summary>
    /// Applies this filter during swagger documentation generation.
    /// </summary>
    /// <param name="operation"></param>
    /// <param name="context"></param>
    public void Apply(OpenApiOperation operation, OperationFilterContext context)
    {
        // check for an 'AllowAnonymous' at controller level, as this overrides any controller-level 'Authorize'
        var anonControllerScope = context
                .MethodInfo
                .DeclaringType
                .GetCustomAttributes(true)
                .OfType<AllowAnonymousAttribute>();

        var anonMethodScope = context
                .MethodInfo
                .GetCustomAttributes(true)
                .OfType<AllowAnonymousAttribute>();

        // only add authorization specification information if there is at least one 'Authorize' in the chain and NO method-level 'AllowAnonymous'
        if (!anonMethodScope.Any() && !anonControllerScope.Any())
        {
            // add generic messages if the controller methods don't already specify the response type
            if (!operation.Responses.ContainsKey("401"))
                operation.Responses.Add("401", new OpenApiResponse { Description = "If Authorization header not present, has no value or no valid jwt bearer token" });

            if (!operation.Responses.ContainsKey("403"))
                operation.Responses.Add("403", new OpenApiResponse { Description = "If user not authorized to perform requested action" });

            var jwtAuthScheme = new OpenApiSecurityScheme
            {
                Reference = new OpenApiReference { Type = ReferenceType.SecurityScheme, Id = "Bearer" }
            };

            operation.Security = new List<OpenApiSecurityRequirement>
            {
                new OpenApiSecurityRequirement
                {
                    [ jwtAuthScheme ] = new List<string>()
                }
            };
        }
    }
}

This looks a bit over the top, but really it’s just telling Swagger that unless it sees an “AllowAnonymous” attribute on an action or a controller, it can assume the endpoint requires authentication. It’s essentially flipping things on their head and saying everything requires authentication unless I say otherwise.

Now back in our ConfigureServices method of our startup.cs, we can go :

services.AddSwaggerGen(c => 
{
    //All the other stuff. 
    c.OperationFilter<SecurityRequirementsOperationFilter>();
});

Which will of course add in our new filter to our swagger docs. This means that now, when we use Swagger, by default, all actions will require a JWT token. Perfect!
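One gotcha: the filter above references a security scheme by the Id “Bearer”, so your AddSwaggerGen call must also register a security definition under that exact name. If you followed the earlier OAuth2 setup in this series you’ll already have one, but as a rough sketch of the shape (the URLs here are placeholders — substitute your own Auth0 domain and audience):

```csharp
services.AddSwaggerGen(c =>
{
    // The name here must match the Id used in the OpenApiReference inside our filter.
    c.AddSecurityDefinition("Bearer", new OpenApiSecurityScheme
    {
        Type = SecuritySchemeType.OAuth2,
        Flows = new OpenApiOAuthFlows
        {
            Implicit = new OpenApiOAuthFlow
            {
                // Placeholder values; use your own Auth0 domain and audience.
                AuthorizationUrl = new Uri("https://mydomain.us.auth0.com/authorize?audience=https://myproduct")
            }
        }
    });

    c.OperationFilter<SecurityRequirementsOperationFilter>();
});
```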

ENJOY THIS POST?
Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this week’s blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.


This post is part of a series on using Auth0 with an ASP.NET Core API, it’s highly recommended you start at part 1, even if you are only looking for something very specific (e.g. you came here from Google). Skipping parts will often lead to frustration as Auth0 is very particular about which settings and configuration pieces you need.

Part 1 – Auth0 Setup
Part 2 – ASP.NET Core Authentication
Part 3 – Swagger Setup


Now that we have our Auth0 tenant all set up, it’s time to actually start authenticating users on our API, and validating their JWT tokens. Let’s go!

Setting Up Auth0 With ASP.NET Core Authentication

The first thing we need to do is install the Microsoft NuGet package that validates JWT tokens for us. So from our Package Manager Console we can run :

Install-Package Microsoft.AspNetCore.Authentication.JwtBearer

Next, head to our startup.cs file, and inside our ConfigureServices method, we will add the following :

services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(options =>
{
    options.Authority = Configuration["Authentication:Domain"];
    options.Audience = Configuration["Authentication:Audience"];
});

This sets up our JWT authentication to be validated against Auth0. When I did all of this for the first time I thought “I must be missing something here…”. But really that’s it. Any JWT token is validated against Auth0 using the configuration we set up earlier. Too easy!

Next, in our Configure method, we need two additional calls in our pipeline :

app.UseAuthentication();
app.UseAuthorization();

Ordering is important! The call to UseAuthentication must happen before the call to UseAuthorization. Authentication is the act of “authenticating” who someone is, and essentially storing a validated identity against that request. Authorization is the act of authorizing that user against a resource. If you have not authenticated (e.g. logged in), then how can you be authorized?

The overall order within this method is important too: both calls should come after UseRouting and before the endpoints are mapped, so requests are authenticated before they ever reach a controller.
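For clarity, here’s a minimal sketch of how the Configure method might look with everything in the right order (the middleware other than the two auth calls is standard template code):

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseHttpsRedirection();

    app.UseRouting();

    // Authentication first: establishes the identity for the request.
    app.UseAuthentication();
    // Authorization second: checks that identity against policies.
    app.UseAuthorization();

    // Endpoints last, so every request is authenticated and authorized
    // before a controller action runs.
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}
```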

Adding Authorize Attribute

To require a logged-in user for a controller, we place the “Authorize” attribute on it like so :

[Authorize]
public class ContactController : ControllerBase
{
}

However, there are a couple of problems with this :

  • You have to go back and add it to every existing controller.
  • If a new controller is added, someone might forget to add the attribute.

That last point is, I think, the most important. What we want to do is reverse the Authorize attribute to be opt-out, not opt-in. By default, everything should be locked down to logged-in users. Luckily there is a way for us to do just that.

In your startup.cs, inside your ConfigureServices method, you should have a call to “AddControllers” or similar like so :

services.AddControllers();

However, you can also use this call to add in filters that are applied globally, without you having to add the attribute manually to each controller. To do that with our Authorize attribute, we do the following :

services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();

    options.Filters.Add(new AuthorizeFilter(policy));
});

Now the AuthorizeFilter is added globally for every controller within our solution!

Of course the next question will be, what if you want a controller to opt out? We can just use the AllowAnonymous attribute like so :

[AllowAnonymous]
public class AnonController : ControllerBase
{
}

Testing ASP.NET Core Authentication

At this point, your API is actually all set up to authenticate against JWT tokens. In the next step, we are going to talk about how to wire up Swagger to allow you to generate valid test tokens within the Swagger interface. But if you can’t wait that long, or you don’t use Swagger, then you can actually generate test tokens right from Auth0 itself.

Inside the Auth0 Dashboard, select “APIs” from the left hand menu, open the settings for your API and go to the “Test” tab. There, the second box actually contains a valid JWT token that you can use for testing. It’s generated each time you load this page, so it’s good to go immediately. Feel free to test your API at this point with the JWT token here, and validate that everything is set up correctly.
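If you’d rather script this check than use a REST client, here’s a quick sketch using HttpClient (the URL, endpoint, and token are placeholders for your own values):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class TokenTest
{
    static async Task Main()
    {
        var client = new HttpClient();

        // Paste the test token from the Auth0 "Test" tab here.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "eyJhbGciOi...");

        // Any endpoint in your API that requires authorization.
        var response = await client.GetAsync("https://localhost:5001/api/contact");

        // 200 means the token validated; 401 means something in your
        // Domain/Audience configuration doesn't line up.
        Console.WriteLine((int)response.StatusCode);
    }
}
```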

Next Steps

Theoretically, our API is now secured using Auth0. But in 99% of my projects, I use Swagger to test against my API. For that, I want to be able to generate a valid Auth0 JWT token to use for testing, without having to log into Auth0 or use Fiddler on my front end application to intercept a valid token. The next part in our series will investigate doing exactly that : https://dotnetcoretutorials.com/2021/02/14/using-auth0-with-an-asp-net-core-api-part-3-swagger/


I’ve recently had to set up a new project using Auth0 as an “Identity As A Service” provider. Essentially, Auth0 provides an authentication service using an OAuth2 flow, meaning I don’t have to store passwords, worry about password resets, or implement my own two factor authentication. Everything about authenticating a user is handled by Auth0, it’s great!

What’s not great is their documentation. I’ve had to use Auth0 (And Azure AD B2C) in a tonne of projects over the years. And every time, I’m reminded that their documentation just plain sucks. At a guess, I think it’s because you only do it once. So if you set up Auth0 for your product, you’re only doing that once and you’ll never have to do it again. So any pains in the documentation you quickly get over. Except if you’re me! Because I work across a whole range of projects on a contract basis, I may do a new Auth0 setup up to 3 – 4 times per year. And every time, it’s painful.

In this series, I’m going to show you how to authenticate your API using Auth0, from setting up your Auth0 tenant all the way to setting up Swagger correctly. It will serve as a great guide if it’s your first time using Auth0, and for those more experienced, it will provide a good run sheet every time you have to set up a new tenant.



Creating An Auth0 API

The first thing we need to do is create a new “API” within the Auth0 dashboard. From Auth0, click the APIs menu item, click “Create API” and fill it in similar to the following :

The Name field can be anything, and is purely used within the portal. This might be useful if you have multiple different APIs that will authenticate differently, but for the most part, you can probably name it your product.

The “Identifier” is a little trickier. It plays a similar role to the above in that it identifies which API is being authenticated for, but again, if you only have one API it’s not too important. I typically use https://myproductname. It does not have to be a URL at all; that’s just my preference.

Leave the signing algorithm as is and hit Create!

Copy the Identifier you used into a notepad for safe keeping as we will need it later.

Creating Your Auth0 Application

Next we need to set up our Auth0 Application. An application within the context of Auth0 can be thought of as a “solution”. Within your solution you may have multiple APIs that can be authenticated for, but overall, they are all under the same “Application”.

By default, Auth0 has an application created for you when you open an account. You can rename this to be the name of your product like so :

Also take note of your “Domain” and “ClientId”. We will need these later so copy and paste them out into your notepad file.

Further down, set your “Application Type” to “Single Page Application”.

On this same settings page for your application, scroll down and find the “Allowed Callback URLs”. This should be set up to allow a call back to your front end (e.g. React, Angular etc). But it should also allow for a Swagger callback. (Confusing, I know). But to put it simply, pop in the URL of your local web application *and* the domain of your API application like so :

Remember to hit “Save Changes” right at the bottom of the page.

Adding Configuration To ASP.NET Core

In our .NET Core solution, open up the appsettings.json file. In there, add a JSON node like so :

"Authentication": {
  "Domain": "https://mydomain.us.auth0.com/",
  "Audience": "https://myproduct",
  "ClientId": "6ASJKHjkhsdf776234"
}

We won’t actually use this configuration anywhere except in our startup method, so for now, don’t worry about creating a C# class to represent this configuration.

Next Steps

So far we’ve set up everything we need on the Auth0 side, and we’ve grabbed all the configuration values and put them into ASP.NET Core. Now, we need to set up everything related to authentication inside our .NET Core App. You can check out the next step in the series here : https://dotnetcoretutorials.com/2021/02/14/using-auth0-with-an-asp-net-core-api-part-2-asp-net-core-authentication/


Some time back, I wrote a post about PostSharp Threading. I was incredibly impressed by the fact that a complicated task such as thread synchronization had been boiled down to just a couple of C# attributes. While writing the post, I also took a look at the other libraries available from PostSharp, and something that caught my eye was the PostSharp Logging framework. Now I’ve seen my fair share of logging frameworks so at first, I wasn’t that jazzed. Generally speaking when I see a new logging library get released, it’s just another way to store text logs and that’s about it. But PostSharp Logging does something entirely new, without completely re-inventing the wheel.

Of course we are going to dig into all the goodness, but at an overview level: PostSharp Logging is more like a mini APM, automatically logging what’s going on inside your application, rather than just giving you some static “Logger.Error(string message)” method to output logs to. And instead of making you configure yet another logging platform with complicated XML files and boilerplate code, it just hooks into whatever logging framework you are already using. Serilog, Log4Net, and even the plain old ASP.NET Core logger factory are supported with very little setup.

Setting Up Logging

I’ve kind of sold the zero setup time a little bit here, so let’s look at what’s actually required.

The first thing we have to do is install the nuget package for our particular logging framework. Now this might get complicated if you are using things like Serilog or Log4Net on top of the .NET Core logger, but for me, I’m just looking to pump all messages to the standard .NET Core output. So all I need to do is install the following two packages :

Install-Package PostSharp.Patterns.Diagnostics
Install-Package PostSharp.Patterns.Diagnostics.Microsoft

Next, I have to do a little bit of work in my program.cs to add the PostSharp logger :

public static void Main(string[] args)
{
    var host = CreateHostBuilder(args).Build();
    var loggerFactory = (ILoggerFactory)host.Services.GetService(typeof(ILoggerFactory));
    LoggingServices.DefaultBackend = new MicrosoftLoggingBackend(loggerFactory);
    host.Run();
}

This might seem a little complicated, but you’re just going to copy and paste this from the PostSharp documentation; there really isn’t much thought involved!

And that’s it! Now we can simply add the [Log] attribute to any method and have it log some pretty juicy stuff. For example, consider the following code :

[Log]
[HttpGet("Hello")]
public async Task<IActionResult> Hello([FromQuery]string name)
{
    if(string.IsNullOrEmpty(name))
    {
        return BadRequest("A name is required");
    }

    return Ok($"Hello {name}!");
}

With nothing but the log attribute, I suddenly see these sorts of messages popping up when I call a URL such as /Hello?name=Bob.

dbug: PostSharpLogging.Controllers.TestController[2]
      TestController.Hello("Bob") | Starting.
dbug: PostSharpLogging.Controllers.TestController[4]
      TestController.Hello("Bob") | Succeeded: returnValue = {OkObjectResult}.

Notice how I now capture the method being executed, the parameters being executed, and what the result was. This can be incredibly important because not only are you capturing what methods are running, but you are capturing the input and output of those methods. This could be invaluable if you’re trying to debug under what circumstances a particular method fails or produces an unexpected response.

Writing Detailed APM Style Logging Messages

Earlier I spoke a little bit about how I thought PostSharp.Logging was more like a mini APM rather than a logging framework. That doesn’t mean it can’t log your standard text messages, but at the same time, it has incredible capability to “time” methods and capture exactly what’s going on in your application with very little set up.

All I need to do is create a file in the root of my project called postsharp.config. In it, I add the following :

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.postsharp.org/1.0/configuration">
  <Logging xmlns="clr-namespace:PostSharp.Patterns.Diagnostics;assembly:PostSharp.Patterns.Diagnostics">
    <Profiles>
      <LoggingProfile Name="Detailed" IncludeSourceLineInfo="True" IncludeExecutionTime="True" IncludeAwaitedTask="True">
      </LoggingProfile>
    </Profiles>
  </Logging>
</Project>

It may look confusing at first, but the PostSharp documentation gives you almost all of this out of the box. So what are we now adding to our logs?

  • Capturing the source line info (e.g. What line number is being executed).
  • Capturing the total execution time for a method.
  • Including awaited tasks (More on this later!). But this means that we can actually see when a task is really awaited which is invaluable to solving deadlock issues.

All of this is combined into a named logging profile called “Detailed”. Named profiles are handy because we can now change all of the logging for our project from this one configuration file, instead of going around and modifying Log attributes one by one.

It does mean that we have to modify our Log attribute to look like this :

[Log("Detailed")] // Pass in our log profile name
[HttpGet("Hello")]
public async Task<IActionResult> Hello([FromQuery]string name)
{
    if(string.IsNullOrEmpty(name))
    {
        return BadRequest("A name is required");
    }

    return Ok($"Hello {name}!");
}

And now if we run things?

dbug: PostSharpLogging.Controllers.TestController[4]
      TestController.Hello("Bob") | Succeeded: returnValue = {OkObjectResult}, 
      executionTime = 0.40 ms, 
      source = {WeatherForecastController.cs: line 18}.

So now not only are we capturing the input and output, but we are also capturing the total execution time of the method as well as the actual line number of the code. If there was a particular input to this method that caused a slow down or a noticeable performance impact, then we would be able to capture that easily. In fact, let’s test that out now!

Capturing Performance Degradations With PostSharp Logging

I want to create an artificial delay in my application to test how PostSharp Logging identifies this. But before I do this, I want to explain a concept called “Wall Time”.

Wall Time is also sometimes called Wall Clock Time, or even just Real World Time. What it means is that if I’m timing the performance of my application, the only real metric I care about is the actual time a user sits there waiting for a response. So it’s the time from a user, say, clicking a button, to actually seeing a response. We call this Wall Time or Wall Clock Time because, if there was a clock on the wall, we could use it to time the response. This can deviate slightly from “CPU Time”, which refers to how much time the CPU actually spent completing your task. The two may differ because the CPU may be juggling work, it may delay your work while it processes someone else’s request, or you may even have an intentional delay in your code.

Confused? Maybe this simplified diagram will help.

Notice how our user in blue sent a request to the CPU, but it was busy servicing our user in red. Once it finished red’s tasks, it then swapped to blue. If you asked the CPU how long it spent working on blue’s task, it would give a very different answer than if you asked the blue user how long they waited. Both timings are important, but it’s an important distinction to make when you are building software for end users.
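The distinction is easy to demonstrate in code. Stopwatch measures wall time, so an awaited Task.Delay shows up in the elapsed time even though the CPU did almost no work:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class WallTimeDemo
{
    static async Task Main()
    {
        var stopwatch = Stopwatch.StartNew();

        // The CPU does essentially no work here, but the caller still waits.
        await Task.Delay(1000);

        stopwatch.Stop();

        // Wall time: roughly 1000 ms, even though CPU time is near zero.
        Console.WriteLine($"Elapsed: {stopwatch.ElapsedMilliseconds} ms");
    }
}
```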

OK, so with that out of the way, why do I bring it up now? Well, there is a very large APM product on the market right now that gives timings in CPU time. While helpful, this is actually incredibly irritating because it doesn’t capture the time a user actually spent waiting. And there is a very easy test for this: use Task.Delay to simulate the CPU not doing work.

Let’s modify our code to look like so :

[Log("Detailed")]
[HttpGet("Hello")]
public async Task<IActionResult> Hello([FromQuery]string name)
{
    if(string.IsNullOrEmpty(name))
    {
        return BadRequest("A name is required");
    }

    if(name == "wade")
    {
        await Task.Delay(1000);
    }

    return Ok($"Hello {name}!");
}

Now if I pass in the name “wade”, I’ll be forced to wait an extra 1000ms before I am given a response. So how does PostSharp log this?

dbug: PostSharpLogging.Controllers.TestController[16]
      TestController.Hello("wade") | Awaiting: asyncCallId = 1, awaitedMethod = Task.Delay
dbug: PostSharpLogging.Controllers.TestController[32]
      TestController.Hello("wade") | Resuming: asyncCallId = 1, awaitedMethod = Task.Delay
dbug: PostSharpLogging.Controllers.TestController[4]
      TestController.Hello("wade") | Succeeded: returnValue = {OkObjectResult}, executionTime = 1038.39 ms

Interesting. The first thing to note is that because I earlier turned on logging for awaited methods, I can now see when a method is actually awaited, and when it resumes. This is really important when working with async/await, because not every await truly suspends the method (but more on that in another post).

Most importantly, look at our execution time! 1038ms. PostSharp is indeed logging the execution time correctly as it pertains to wall time. This is exactly what we want. It may seem like something so simple, but as I’ve said, I know of APM products on the market right now that can’t get this right.

There’s still something more I want to do with this code however. We’re still logging an awful lot when really we just want to capture logging if the performance is degraded. And of course, PostSharp Logging provides us with this. If we modify our logging profile to look like so :

<LoggingProfile Name="Detailed" ExecutionTimeThreshold="200" IncludeSourceLineInfo="True" IncludeExecutionTime="True" IncludeAwaitedTask="True"> 
</LoggingProfile>

We set the ExecutionTimeThreshold to be 200ms. And anything over that we get :

warn: PostSharpLogging.Controllers.TestController[32768]
      TestController.Hello("wade") | Overtime: returnValue = {OkObjectResult}, executionTime = 1012.60 ms, threshold = 200 ms}.

Notice how this is a “Warn” message, not a debug message. Now we can perfectly isolate performance impacts to this particular input, rather than sifting through thousands of logs.

Logging Multiple Methods

Let’s say that you’ve already got a large existing project, but you want to add logging to all controller actions. If we used our code above, we would have to go through copy and pasting our Log attribute everywhere which could be quite the task. And again, if we ever want to remove this logging, we have to go through deleting the attribute.

But PostSharp has us covered with “Multicasting”. Multicasting is the ability to apply the attribute to multiple declarations using a single line of code. And best of all, it allows us to filter where we apply it by using wildcards, regular expressions, or even filtering on some attributes. That means it’s not an all or nothing approach. We can almost fine tune where we log just as well as if we were placing the Log attribute manually on each method.

To get started, create a file called “GlobalLogging.cs” and place it in the root of your project.

Inside, we’re gonna add the following :

using PostSharp.Extensibility;
using PostSharp.Patterns.Diagnostics;

[assembly: Log(AttributePriority = 1, 
    ProfileName = "Detailed",
    AttributeTargetTypes ="MyProjectName.Controllers.*", 
    AttributeTargetMemberAttributes = MulticastAttributes.Public)]

All we are saying is, add the Log attribute, with the ProfileName of “Detailed”, to all target types that are under the controllers namespace. I’m also going to add another filter to say only do this for public methods.

Running my project now, I receive all of the same logging on all of my controller methods, but without having to manually add the Log attribute!

Again, the simplicity of PostSharp stands out. We can add multiple of these global attributes to this file, all with specifically fine tuned wildcards/regexes, and just have it… work. I almost want to write more about all the options you can do with this, but it’s just all so simple and works out of the box, that I’m literally just giving one liners to completely re-invent your logging. It’s really great stuff.

Who Is This Library For?

If you’re working on a software stack that requires you to be constantly managing performance and fine tuning the system, then I think PostSharp Logging is kind of a no brainer. I think the name of “Logging” implies that all it’s really going to do is write text logs for you, but it’s so much more powerful than that.

I’ve used off the shelf APM products that don’t do as good a job of isolating down to method-level logging, and those come with a monthly subscription and a slow, lag-ridden portal to boot. I think the bring-your-existing-logging-framework approach is one of the most powerful aspects of PostSharp: being able to use what you already have, but supercharge those logs along the way.


This is a sponsored post, however all opinions are mine and mine alone.
