When it comes to unit testing, I’m a lover of mocks. I can’t get enough of them. In fact, I wrote an entire “TestingContext” helper to help me “auto mock” my classes, which I think you should definitely check out if you haven’t already! I can’t think of a unit testing project that doesn’t utilize the .NET library Moq in some way.

But! In very rare cases, I use stubs, and in even rarer cases (But still sometimes), I use fakes. Often when you are trying to test “framework” code such as creating a custom ASP.NET Core Filter or Middleware, you have no option but to use stubs and/or fakes.

Now the thing is, I don’t want to get into the war of mocks vs stubs vs fakes, because frankly… Mocks win hands down for me (Although if you google this, you’ll find thousands of people that disagree), but I do want to touch on exactly what the difference between the three is. When I talk to developers who say “I love fakes, I always use fakes”, and I start probing them to elaborate, or maybe check their code, what I find is that they are either using stubs/mocks instead, or maybe a combination of all three. So hopefully this guide can help you understand what each of these “test doubles” actually is, and then you can go onward to your holy war against the one you don’t like with your peers!

What Is A Fake?

A fake is an object used in place of a concrete implementation that has some “smarts” to it. Usually a shortcut implementation that makes it useful across different unit tests, but stops short of being an integration test.

By far the most common example I see of this is in data repositories. Let’s say I have a standard SQL Server repository that looks like so :

public interface IUserRepository
{
    void Insert(object user);
    List<object> GetAllUsers();
}

public class UserRepository : IUserRepository
{
    public List<object> GetAllUsers()
    {
        //Go to the database and fetch all users. 
        throw new NotImplementedException();
    }

    public void Insert(object user)
    {
        //Insert a user into the database here. 
    }
}

You’ll have to use your imagination when it comes to the actual implementation part, but essentially we have two methods that call into a database. We have the Insert which adds users, and the GetAllUsers which returns all users in the database.

When it comes to unit testing a class that may utilize IUserRepository, for example a UserService, we have a bit of a problem. We don’t want our unit tests reaching out to the database, and frankly we don’t really care about the implementation of UserRepository. So we create a fake :

public class FakeUserRepository : IUserRepository
{
    private List<object> _users = new List<object>();

    public List<object> GetAllUsers()
    {
        return _users;
    }

    public void Insert(object user)
    {
        _users.Add(user);
    }
}

In our fake, we actually take the inserted user, and add it to an internal list. When GetAllUsers is called, we return that same list. Now whenever a unit test requires a call to IUserRepository, we can substitute in the FakeUserRepository, and instantly things just “work”.

The main thing here is to understand that the Insert affects the GetAllUsers call. It’s a “real” implementation that actually performs like a repository would, just without an actual database behind the scenes.
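To make that concrete, here’s a minimal sketch of the fake in use. The UserService class and the xUnit test below aren’t from the post – they’re purely illustrative – but they show the fake being handed to the class under test in place of the real repository :

public class UserService
{
    private readonly IUserRepository _userRepository;

    public UserService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public int CountUsers() => _userRepository.GetAllUsers().Count;
}

public class UserServiceTests
{
    [Fact]
    public void CountUsers_AfterInsert_ReturnsInsertedUsers()
    {
        var fakeRepository = new FakeUserRepository();
        var userService = new UserService(fakeRepository);

        fakeRepository.Insert(new object());

        //Because the fake has "smarts", the insert is reflected in the next read.
        Assert.Equal(1, userService.CountUsers());
    }
}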

What Is A Stub?

A stub is an implementation that returns hardcoded responses, but does not have any “smarts” to it. There is no tying together of calls on the object, instead each method just returns a pre-defined canned response.

Let’s look at how we would create a stub for the above :

public class StubOneUserRepository : IUserRepository
{
    public List<object> GetAllUsers()
    {
        return new List<object>();
    }

    public void Insert(object user)
    {
        //Do nothing
    }
}

So it’s somewhat similar to our fake but… Not quite. See how the insert does not affect the GetAllUsers, and the GetAllUsers itself returns a canned response with nothing in it. Anything I do to this object during a test doesn’t change how it functions.

What we generally find is that Stubs are used to fulfil a condition inside the code, but not to test the functionality. For example if my code calls “Insert” on the repository, but I don’t really care about what happens with that data for my particular test, then a stub makes sense rather than the overhead of filling out how a fake would work.

Using a repository makes this example harder than it should be because Repositories invariably should return dynamic data to test various conditions in your code. So let me use another example that would be more likely to require a stub in the real world.

Let’s say I have an interface to tell me if a user is “authenticated” or not. It would look like so :

public interface IUserAuthenticatedCheck
{
    bool IsUserAuthenticated();
}

Now let’s say for my tests, I just always need the user to be authenticated, maybe to fulfil some underlying framework condition. I could use a stub like so :

public class StubUserAuthenticatedCheckTrue : IUserAuthenticatedCheck
{
    public bool IsUserAuthenticated() => true;
}

So no smarts on whether a user should be authenticated, no changing of the value, just a straight “always return true” method. This is what stubs are great at.
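And just to show it in context, here’s a minimal sketch of the stub satisfying a dependency. DashboardService is hypothetical – the point is simply that the stub fills the authentication requirement so the test can get on with whatever it actually cares about :

public class DashboardService
{
    private readonly IUserAuthenticatedCheck _authCheck;

    public DashboardService(IUserAuthenticatedCheck authCheck)
    {
        _authCheck = authCheck;
    }

    public string Greet() => _authCheck.IsUserAuthenticated() ? "Welcome back!" : "Please log in";
}

//In a test, the stub simply satisfies the dependency so the "authenticated" path runs.
var dashboardService = new DashboardService(new StubUserAuthenticatedCheckTrue());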

What Is A Mock?

A mock is a pre-programmed object that can have dynamic responses/behaviour defined as part of the test. They do not need to be concrete in implementation and (generally) don’t need behaviour to be shared amongst tests.

So where would we use Mocks? Well it’s anywhere you want to be relatively dynamic, for that particular test, to satisfy conditions. As an example, let’s say I am writing a test that calls out to the following interface :

public interface IShopService
{
    bool CheckShopIsOpen(int shopId);
}

So all we are doing is checking if a shop is open or closed. The actual implementation class for this may call out to a database, or a webservice/api of some sort, but we don’t want to do that as part of our unit test.

If we used fakes here, we would need to add some dummy property to be able to say whether a shop should be open or closed. Maybe something like this :

public class FakeShopService : IShopService
{
    public bool ShouldShopBeOpen { get; set; }

    public bool CheckShopIsOpen(int shopId)
    {
        return ShouldShopBeOpen;
    }
}

Eh, not great in my eyes. We are adding new properties just to be able to control whether a shop is open or closed for the test.

If we used stubs, we would have to hardcode a response of true/false right into the concrete class. Maybe even something like this :

public class StubShopService : IShopService
{
    private Dictionary<int, bool> _shops = new Dictionary<int, bool>
    {
        { 1, true },
        { 2, false }
    };

    public bool CheckShopIsOpen(int shopId)
    {
        return _shops[shopId];
    }
}

This works with a predefined list of ids, and whether a shop will be open or closed. It’s nice, but if you use this in a test and pass in the id of 1, it’s not immediately clear from the test why you got the response of true right?

So how would you solve this using mocks? You would write something like this right there in your test (I am using the Moq library for this!) :

var _mockShopService = new Mock<IShopService>();
_mockShopService.Setup(x => x.CheckShopIsOpen(1)).Returns(true);

This is so clear when it’s right there in your test : when our mock is used and CheckShopIsOpen is called with an id of 1, we return true. It’s also specific to this test, and doesn’t force us to hardcode anything anywhere, or create concrete classes.

And when we have a test that requires the shop id of 1 to be false..

_mockShopService.Setup(x => x.CheckShopIsOpen(1)).Returns(false);

Pretty easy stuff!
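For a slightly fuller picture, here’s a sketch of how that mock ends up in an actual test. OrderService and the xUnit test are hypothetical, but the Moq calls (Setup, Object, Verify) are exactly what you’d use :

public class OrderService
{
    private readonly IShopService _shopService;

    public OrderService(IShopService shopService)
    {
        _shopService = shopService;
    }

    public bool PlaceOrder(int shopId) => _shopService.CheckShopIsOpen(shopId);
}

[Fact]
public void PlaceOrder_WhenShopIsOpen_Succeeds()
{
    var mockShopService = new Mock<IShopService>();
    mockShopService.Setup(x => x.CheckShopIsOpen(1)).Returns(true);

    //The mock's Object property gives us the IShopService to inject.
    var orderService = new OrderService(mockShopService.Object);

    Assert.True(orderService.PlaceOrder(1));

    //And we can verify the interaction actually happened.
    mockShopService.Verify(x => x.CheckShopIsOpen(1), Times.Once());
}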

When To Use Mocks vs Fakes vs Stubs

I know I said I wasn’t going to do this – I just wanted to explain the differences between the three methods, but.. I feel I have to at least throw my 2 cents in here on when to use each one. And I’ll let the comments on this post fall as they may!

  • Use Fakes when you want a re-usable concrete implementation that works similar to the real implementation and can be shared across tests (e.g. an in-memory database)
  • Use Stubs when you want a hardcoded response/implementation that will be re-used across tests
  • Use Mocks when you need dynamic responses for individual tests, that may not necessarily require re-usability across tests

If I’m being honest.. I use Mocks even for things like Repositories, it just makes tests for me very readable and very easy to follow when I’m explicitly outlining what’s going to happen. But maybe that’s just me!

For now, hopefully you understand a little better what each of these test doubles are used for and maybe a little more of a nudge on when to use each one.


This is a short post, but one I felt compelled to write after I saw some absolutely bonkers ways of validating emails in a .NET Core API. I recently stumbled upon a war between two developers who were duking it out on a pull request/code review. It all centred around the “perfect” regex for validating an email.

And you may be thinking, isn’t there one simple regex that does the trick? Well.. Apparently not. Just check out this rather verbose stackoverflow answer here on the subject : https://stackoverflow.com/a/201378/177516

The answer given has the regex looking a bit like so :

(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9]))\.){3}(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9])|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])

…not the most concise.

Another example might be if we take a look at how Angular validates email. Also with a Regular Expression found here : https://github.com/angular/angular/blob/master/packages/forms/src/validators.ts#L98

And it looks a bit like so :

^(?=.{1,254}$)(?=.{1,64}@)[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$

A little bit different, but still a pretty massive regex pattern. So, given these options (And probably many many more), which should we copy and paste into our validation for our model?

public class CreateAccountViewModel
{
	[RegularExpression("SoMeCrAzYReGeX")]
	public string Email { get; set; }
}

The answer is none of the above. .NET Core (And .NET Framework) have an inbuilt validator for emails like so :

public class CreateAccountViewModel
{
	[EmailAddress]
	public string Email { get; set; }
}

Nice and simple without much fuss. But the question then is, what Regex does .NET Core/.NET 5+ use out of the box? The answer is.. It doesn’t use regex at all!

The logic is actually rather simple :

  • Does the value have an @ symbol?
  • Is the @ symbol in any position but the first or last index of the string

No regex required!
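If you want to picture it in code, a rough sketch of that logic (not the exact framework source, just the idea described above) looks like this :

static bool LooksLikeEmail(string value)
{
    if (string.IsNullOrEmpty(value))
    {
        return false;
    }

    int atIndex = value.IndexOf('@');

    //There must be an @ symbol, and it can't be the first or last character.
    return atIndex > 0 && atIndex < value.Length - 1;
}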

Is this a perfect validator? Probably not, it probably allows through emails that aren’t quite up to spec with the email address RFC, but it does enough to catch the 99.99%. So next time people are arguing over the perfect email regex, maybe the answer is to not use regex at all!


Over the past year or so, I’ve been taking a look at the PostSharp framework. It started with looking at how complex multi-threaded scenarios can be handled with PostSharp Threading, then I took a dive into creating a mini-APM logger with PostSharp Logging. Since doing so, I reached out to people I’ve worked with, both past and present, if they’ve had experience using PostSharp, and the response was overwhelmingly positive. And the feedback kept coming back to the same thing time and time again, that PostSharp just takes care of super common scenarios without having to work from scratch every time.

And that brings me to the PostSharp Caching library. At some point, on every project, a developer is going to implement caching. And as the saying goes…

There are two hard things in Computer Science : cache invalidation, naming things, and off-by-one errors.

PostSharp unfortunately can’t help with the latter two, but it can make the “simple at first, complex over time” act of caching and cache invalidation an absolute breeze.

Caching With PostSharp Attributes

Before jumping to invalidating our cache, we need to actually have items cached to begin with. I’m going to set up a simple console application like so :

static async Task Main(string[] args)
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("John")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
}

static async Task<string> SayHello(string name)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

I’ve added in a Task.Delay in our SayHello method, purely to demonstrate a slow function and make our caching more obvious. It’s meant to simulate maybe a very slow and unavoidable database call – e.g. Something you may want to cache in the first place!

When we run this, it’s pretty obvious what’s going to happen.

Hello Wade! - 1019
Hello Wade! - 2024
Hello John! - 3031
Hello Wade! - 4043

Between each call to SayHello, there’s about a 1 second delay.

Now let’s add PostSharp. First we need to use our Package Manager Console to install the PostSharp Caching library :

Install-Package PostSharp.Patterns.Caching

Then we just have to add two lines :

static async Task Main(string[] args)
{
    //Set a default backend (In our case a memory cache, but more on that later)
    CachingServices.DefaultBackend = new MemoryCachingBackend();

    Stopwatch watch = new Stopwatch();
    watch.Start();
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("John")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
}

//Cache this method
[Cache]
static async Task<string> SayHello(string name)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

We just have to set up what our caching backend will be (In this case, just a memory cache). And then do nothing more but add the [Cache] attribute to any method we wish to cache the result of.

Running this again :

Hello Wade! - 1077
Hello Wade! - 1080
Hello John! - 2102
Hello Wade! - 2102

Perfect! We can see that the first attempt to SayHello with the param “Wade” took 1 second, but each subsequent time was almost instant (From the cache). Interestingly though, we can see that when we pass in a different name, in our case “John”, it took the full second. That’s because PostSharp Caching takes into account the parameters of the method, and creates unique caches based on the input. Pretty impressive stuff, but it doesn’t stop there!

PostSharp Caching Is Highly Configurable

Of course, the above is just a simple example. We can extend this out to include things like absolute expiration in minutes :

[Cache(AbsoluteExpiration = 5)]

Or sliding expiration :

[Cache(SlidingExpiration = 5)]

Or you can even have a hierarchy where you can place a CacheConfiguration attribute on an entire class :

[CacheConfiguration(AbsoluteExpiration = 5)]
public class MyService
{
    [Cache]
    public string HelloWorld()
    {
        return "Hello World!";
    }
}

What I’m trying to get at is that things are highly configurable. You can even configure caching (Including disabling caching completely) at runtime using Caching Profiles like so :

//Make all caching default to 5 minutes. 
CachingServices.Profiles.Default.AbsoluteExpiration = TimeSpan.FromMinutes(5);
//On second thought, let's just disable all caching for the meantime. 
CachingServices.Profiles.Default.IsEnabled = false;

So stopping short of giving you the laundry list of configuration options, I really just want to point out that this thing is super configurable. Every time I thought “Well what about if you want to….” the documentation was there to give me exactly that.

Controlling Cache Keys

As we’ve seen earlier, our cache key is automatically built around the parameters of the method we are trying to cache. It’s actually a little more complicated than that : it uses the enclosing type (e.g. a ToString() on the enclosing class), the method name, and the method parameters. (And even then, it actually uses more, but let’s just stick with that for now!).

Imagine I have code like so :

static async Task Main(string[] args)
{
    //Set a default backend (In our case a memory cache, but more on that later)
    CachingServices.DefaultBackend = new MemoryCachingBackend();

    Stopwatch watch = new Stopwatch();
    watch.Start();
    Console.WriteLine((await SayHello("Wade", Guid.NewGuid())) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade", Guid.NewGuid())) + " - " + watch.ElapsedMilliseconds);
}

[Cache]
static async Task<string> SayHello(string name, Guid random)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

Yes I know it’s a rather obtuse example! But the fact is the random guid being passed into the method is breaking caching. When I run this I get :

Hello Wade! - 1079
Hello Wade! - 2091

But if it’s a parameter that I really don’t care about as part of caching, I want to be able to ignore it. And I can! With another attribute of course :

[Cache]
static async Task<string> SayHello(string name, [NotCacheKey]Guid random)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

And running this I get :

Hello Wade! - 1079
Hello Wade! - 1082

While this is a simple example, I just wanted to bring up the fact that while PostSharp Caching is super easy to implement and can work right out of the box, it’s also crazy extensible and there hasn’t been a scenario yet that I’ve been “stuck” with the library not being able to do what I want.

Invalidating Cache

For all the talk my intro did about invalidating cache being hard.. it’s taken us a while to get to this point in the post. And that’s because PostSharp makes it a breeze!

Your first option is simply annotating the methods that update/insert new values (e.g. where you would normally invalidate cache). It looks a bit like this :

static async Task Main(string[] args)
{
    //Set a default backend (In our case a memory cache, but more on that later)
    CachingServices.DefaultBackend = new MemoryCachingBackend();

    Stopwatch watch = new Stopwatch();
    watch.Start();
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    UpdateHello("Wade");
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);

    Console.WriteLine((await SayHello("John")) + " - " + watch.ElapsedMilliseconds);
    UpdateHello("Wade");
    Console.WriteLine((await SayHello("John")) + " - " + watch.ElapsedMilliseconds);
}

[Cache]
static async Task<string> SayHello(string name)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

[InvalidateCache(nameof(SayHello))]
static void UpdateHello(string name)
{
    //Do something here. 
}

A long example, but I wanted to point something out : when you ask to invalidate the cache, the parameters of your update method are used to match the keys of your cached method. So in this case, because I update the name “Wade” twice, it only ever clears that cache key. It doesn’t simply wipe the entire cache for the entire method.

So our output becomes :

Hello Wade! - 1062
Hello Wade! - 2092
Hello John! - 3111
Hello John! - 3111

But maybe the whole attribute thing isn’t for you. You can actually invalidate cache imperatively also like so :

[Cache]
static async Task<string> SayHello(string name)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

static void UpdateHello(string name)
{
    //Do some work

    //Now invalidate the cache. 
    CachingServices.Invalidation.Invalidate(SayHello, name);
}

What I love about this is that I’m passing a reference to the method name, and the value of the parameter. Even though I’m invalidating a specific cache item, I’m still not having to work out what the actual cache key is. That means should the way in which we build the cache key for SayHello ever change, the invalidation of the cache never changes because it’s an actual strongly typed reference to the method.

Obviously the added benefit of invalidating cache imperatively is that you can both conditionally define logic on when you want to invalidate, and be able to invalidate cache from inside an existing method without adding attributes. That being said, the attributes are really useful if a lot of your logic is CRUD interfaces.
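As a quick sketch of that conditional side of things – the isSignificantChange flag below is purely illustrative – the invalidation call can sit behind whatever logic makes sense for your domain :

static void UpdateHello(string name, bool isSignificantChange)
{
    //Do some work

    //Only invalidate the cached SayHello result when the change actually matters.
    if (isSignificantChange)
    {
        CachingServices.Invalidation.Invalidate(SayHello, name);
    }
}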

Distributed Caching Backends

I don’t want to go on too long about caching backends because, for the most part, you’re going to be using Redis for distributed cache, and a local in-memory cache if you just want something inside your one instance of your application.

But I did want to mention one feature that I’ve had to implement myself many times when using other caching libraries. That is, the combination of using a Redis Server for distributed cache, but *also* a local in memory cache. The reason being, if my application is horizontally scaled, of course I want to use Redis so that everyone can share the same cached entities. However, fetching from Redis incurs a level of overhead (namely network) to go and fetch the cache each time. So what I’ve generally had to implement myself, is keeping a local store of in-memory cache for faster local retrieval, as well as managing the Redis distributed cache for other machines.

But of course, PostSharp caching makes this a breeze :

RedisCachingBackendConfiguration redisCachingConfiguration = new RedisCachingBackendConfiguration();
redisCachingConfiguration.IsLocallyCached = true;

In general, the caching backends used in PostSharp are actually very extensible. While Redis and In-Memory serve me just fine, you could implement PostSharp’s backend interface to add in your own as well (For example SQL). Then you get the power of the attributes and all of the goodness of PostSharp in code, but with your own one-off caching backend.

Who Is This Library For?

When I recommend libraries or products, I like to add a little bit around “Who is this for?”. Because not everything I use works in small start ups, or vice versa, in large enterprises. However, I really believe PostSharp Caching has the ability to fit into almost any product.

Early in my development career, I thought that I could develop everything myself. And all it really meant was taking away time from the features my customers really cared about, and devoting time to re-inventing the wheel on some boilerplate code. When I thought not only about putting a dollar amount on my time, but also about the opportunity cost of not having additional features, it suddenly made so much more sense to leave boilerplate code, like caching, to people that actually dedicate their time to getting it right. PostSharp Caching is one of those products, one that you can just plug in, have it work right away, and save your time for features that actually matter.


This is a sponsored post however all opinions are mine and mine alone. 


I recently ran into a problem where my .NET API was returning an error 415. The full error gives you a hint as to what the actual issue is : “415 Unsupported Media Type”, although this can lead you down a wild goose chase of stackoverflow answers.

In short, the API is expecting a post request with a particular content-type header, but the caller (Or maybe your front end) is using a different media type. There are actually some other gotchas that are incredibly frustrating to figure out in .NET too that can blow this entire thing up without you noticing. But let’s get on to it!

Check Your Front End Caller

The first thing we need to do is understand what our API is expecting. In general, API’s these days are expecting JSON requests. In some cases, they are expecting a classic “form post”. These are not the same thing! But whichever you use, your front end caller (Whether that be a javascript library or another machine), must attach the correct content type when making a request to the API.

For example, if I have a JSON API, and I make the following call from jQuery :

 $.ajax({
  url: '/myapiendpoint',
  type: 'POST'
});

This actually won’t work! Why? Because the default content-type of an Ajax request from jQuery is actually “application/x-www-form-urlencoded”, not “application/json”. This can catch you out if you aren’t familiar with the library and it’s making calls using the default content-type.

But of course, we can go the other way where you copy and paste someone’s helpful code from stackoverflow that forces the content-type to be JSON, but you are actually using form posts :

 $.ajax({
  url: '/myapiendpoint',
  contentType: 'application/json',
  type: 'POST'
});

Don’t think that you are immune to this just because you are using a more modern library. Every HttpClient library for javascript will have some level of default Content Type (Typically application/json), and some way to override it. Often, libraries such as HttpClient in Angular, or Axios, have ways to globally override the content-type and override it per request, so it can take some time working out exactly how the front end is working.

When it comes down to it, you may need to use things like your browser dev tools to explicitly make sure that your front end library is sending the correct content-type. If it is, and you are certain that the issue doesn’t lie there, then we have to move to debugging the back end.
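And if the caller isn’t a browser at all but another machine, the same rule applies. Here’s a minimal sketch using .NET’s HttpClient – the endpoint and payload are just placeholders – showing the content type being set explicitly :

using System.Net.Http;
using System.Text;

using var client = new HttpClient();

//The content type passed here is what the API's 415 check cares about.
var payload = new StringContent("{\"name\":\"Wade\"}", Encoding.UTF8, "application/json");

var response = await client.PostAsync("https://localhost:5001/myapiendpoint", payload);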

Checking The Consumes Attribute

If we are sure that our front end is sending data with a content-type we are expecting, then it must be something to do with our backend. The first thing I always check is if we are using the Consumes attribute. It looks a bit like this :

[Consumes("application/xml")]
public class TestController : ControllerBase
{
}

Now in this example, I’ve placed the attribute on the Controller, but it can also be placed directly on an action, or even added to your application startup to apply globally, so your best bet is usually a “Ctrl + Shift + F” to find all of them.

If you are using this attribute, then make sure it matches what the front end is sending. In 99% of cases, you actually don’t need this attribute except for self documenting purposes, so if you can’t find this in use anywhere, that’s normal. Don’t go adding it if you don’t already have it and are running into this issue, because often that will just complicate matters.

In the above example, I used [Consumes("application/xml")] as an example of what might break your API and return an error 415. If my front end has a content-type of json, and my consumes attribute specifies I’m expecting XML, then it’s pretty clear there’s going to be a conflict of some kind we need to resolve.

Checking FromBody vs FromForm

Still not working? The next thing to check is if you are using FromBody vs FromForm correctly. Take this action for example :

public IActionResult MyAction([FromForm]object myObject)

This endpoint can only be called with form post data. e.g. The content type must be “application/x-www-form-urlencoded”. Why? Because we are using the [FromForm] attribute.

Now if we change it to FromBody like so :

public IActionResult MyAction([FromBody]object myObject)

This can only accept “body” types of JSON, XML etc. e.g. Non form encoded content types. It’s really important to understand this difference because sometimes people change the Consumes attribute, without also changing how the content of the POST is read. This has happened numerous times for me, mostly when changing a JSON endpoint to just take form data because a particular library requires it.

ApiController Attribute

Finally, I want to talk about a particular attribute that might break an otherwise working API. In .NET Core and .NET 5+, there is an attribute you can add to any controller (Or globally) called “ApiController”. It adds certain conventions to your API, most notably it will check ModelState for you and return a nice error 400 when the ModelState is not valid.

However, I have seen API’s act very differently when it comes to model binding because of this attribute. It adds some nice “conventions” for you, such as trying to infer FromBody, FromRoute, FromQuery etc. for you. Generally speaking, I don’t see this breaking API’s, and for the most part, I use it everywhere. But if you are comparing two projects with the exact same controller and action setup, and one works and one doesn’t, it’s worth checking if one implements the ApiController attribute. Again, “Ctrl + Shift + F” is your friend here to find anywhere that it may be getting applied.
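For reference, it’s typically applied like so (the controller below is just an example) :

[ApiController]
[Route("api/[controller]")]
public class ContactsController : ControllerBase
{
    //With [ApiController], invalid models automatically return a 400 before this runs.
    [HttpPost]
    public IActionResult Create([FromBody] object contact) => Ok(contact);
}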


It was only a couple of years ago that I learned about Debugger.Launch(), and since then, I’ve used it on many an occasion and thought “How did I ever live without this!”. It’s just such a little miracle tool when working with applications that have complex startup code that can’t be debugged easily.

Just the other day, while remembering this beauty of a function, I went back and looked at documentation for when this was released. After all, I probably went a good 5 or 6 years developing in .NET without ever using it.

My jaw almost hit the floor!

You’re telling me, that this has been in the .NET Framework since the dawn of time, and I’ve only just found out about it! UGH!

What Is Debugger.Launch?

Let me give a scenario for you. You are running an application (Such as a web application or windows service), that has startup methods you want to debug. Often this will be things like dependency injection setup, early config file reads or similar. For whatever reason, you can’t just hit F5 and start debugging. You need to run the application, then attach a debugger later. For web applications this is sometimes because you are using IIS even in development, and hitting a URL to test your application. And for things like Windows Services, you want to debug when it’s actually running as a Windows Service.

Now back in the day, I used to do this :

//Added in the startup code section
Thread.Sleep(10000); //Give myself 10 seconds to attach a debugger

Basically sleep the application for 10 seconds to allow myself time to attach a debugger. This kind of works. But it’s not exactly a strict science is it? If I attach early, then I’m left sitting there waiting out the remainder of the sleep time, and if I attach late, then I have to restart the entire process.

And that’s where Debugger.Launch() comes in :

//Added in the startup code section
System.Diagnostics.Debugger.Launch(); //Force the attachment of a debugger

You’re probably wondering how exactly a debugger gets “forced” to attach. Well consider the following console application :

using System;

System.Diagnostics.Debugger.Launch();
Console.WriteLine("Debugger is attached!");

Imagine I build this application, and run it from the console (e.g. Not inside Visual Studio). I would then see the following popup :

Selecting Visual Studio, it will then open, and start debugging my application live! Again, this is invaluable for being able to attach a debugger at the perfect time in your start up code and I can’t believe I went so long in my career without using it.

How About Debugger.Break()?

I’ve also seen people use Debugger.Break(), and I’ve also used it, but with less success than Debugger.Launch().

The documentation states the following :

If no debugger is attached, users are asked if they want to attach a debugger. If users say yes, the debugger is started. If a debugger is attached, the debugger is signaled with a user breakpoint event, and the debugger suspends execution of the process just as if a debugger breakpoint had been hit.

But that first sentence is important because I find it less reliable than Launch. I generally have much less luck with this prompting a user to attach a debugger. However! I do have luck with this forcing the code to break.

When a debugger is already attached (e.g. You attached a debugger at the right time or simply pressed F5 in Visual Studio), Debugger.Break forces the code to stop execution much like a breakpoint would. So in some ways, it’s like a breakpoint that can be used across developers on different machines rather than some wiki page saying “Place a breakpoint on line 22 to test startup code”.

It probably doesn’t sound that useful, except for the scenario I’m about to explain…

When Debugger.Launch() Doesn’t Work

In very rare cases, I’ve been stuck with Debugger.Launch not prompting the user to debug the code. Or, in some cases, I’ve wanted to debug the code with an application that isn’t presented within the popup. There’s actually a simple solution, and it almost goes back to our Thread.Sleep() days.

Our solution looks like :

//Spin our wheels waiting for a debugger to be attached. 
while (!System.Diagnostics.Debugger.IsAttached)
{
    Thread.Sleep(100); //Or Task.Delay()
}

System.Diagnostics.Debugger.Break();
Console.WriteLine("Debugger is attached!");

It works like so :

  • If a debugger is not attached, then simply sleep for 100ms. And continue to do this until a debugger *is* present.
  • Once a debugger is attached, our loop will be broken, and we will continue execution.
  • The next call to Debugger.Break() immediately stops execution, and acts much like a breakpoint, allowing us to start stepping through code if we wish.

Now again, I much prefer to use Debugger.Launch, but sometimes you can’t help but do a hacky loop to get things working.

Another extension of this is to wrap the code in an #if DEBUG directive like so :

#if DEBUG
//Spin our wheels waiting for a debugger to be attached. 
while (!System.Diagnostics.Debugger.IsAttached)
{
    Thread.Sleep(100); //Or Task.Delay()
}

System.Diagnostics.Debugger.Break();
#endif
Console.WriteLine("Debugger is attached!");

This means that should this code make it into production, it doesn’t just spin its wheels with no one able to work out why nothing is running. In my opinion however, any Debugger functions should not make it into checked-in code.

Using these tools, you can now debug code that you once thought was impossible to do.


It’s somewhat surprising that in the 20 years .NET (And C#) has been out, there hasn’t been an official implementation of a Priority Queue. It hasn’t stopped people hacking together their own Priority Queues, and indeed, even Microsoft has had several implementations of priority queues buried internally in the framework, but just never exposed to the public. Finally, Microsoft has come to the party and implemented an official Priority Queue in .NET 6. Yes, .NET 6.

If you were coming here because you wanted an implementation for .NET Core, .NET 5, or even .NET 4.6.X, then unfortunately you are out of luck. There are implementations floating around the web, but slowly these will go away with the official .NET Priority Queue coming to the framework.

If you are new to .NET 6 and want to know what you need to get started, check out our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

What Is A Priority Queue?

Before we get started, it’s worth talking about what exactly a Priority Queue is. A Priority Queue is a Queue, where each item holds a “priority” that can be compared against other queue items. When an item is dequeued, the item with the highest priority is popped off the queue, regardless of when it was put on. So if we think of a standard queue as first in, first out (FIFO), and the stack type being last in, first out (LIFO), then a Priority Queue is.. well.. It doesn’t get a nice acronym. It’s more like, whatever in, highest priority out!

Priority can be complex, as we will soon see, because you can implement custom comparers, but at its simplest it could just be a number where the lower the number (e.g. 0 being the highest), the higher the priority.

Priority Queues have many uses, but are most commonly seen when doing work with “graph traversals” as you are able to quickly identify nodes which have the highest/lowest “cost” etc. If that doesn’t make all that much sense to you, it’s not too important. What’s really good to know is that there is a queue out there that can prioritize items for you!

Priority Queue Basics

Consider the very basic example :

using System;
using System.Collections.Generic;

PriorityQueue<string, int> queue = new PriorityQueue<string, int>();
queue.Enqueue("Item A", 0);
queue.Enqueue("Item B", 60);
queue.Enqueue("Item C", 2);
queue.Enqueue("Item D", 1);

while (queue.TryDequeue(out string item, out int priority))
{
    Console.WriteLine($"Popped Item : {item}. Priority Was : {priority}");
}

The output of this should be relatively easy to predict. If we run it we get :

Popped Item : Item A. Priority Was : 0
Popped Item : Item D. Priority Was : 1
Popped Item : Item C. Priority Was : 2
Popped Item : Item B. Priority Was : 60

The lower the integer, the higher the priority, and we can see our items are always popped based on this priority regardless of the order they were added to the queue. I wish I could extend out this bit of the tutorial but.. It really is that simple!
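To tie this back to the graph traversal use case mentioned earlier, here’s a rough sketch of why algorithms like Dijkstra lean on a priority queue – we always want to expand the cheapest node discovered so far. The graph shape below (a dictionary of neighbours and edge costs) is purely for illustration :

using System;
using System.Collections.Generic;

//Each node maps to its neighbours and the cost of the edge to reach them.
var graph = new Dictionary<string, List<(string Node, int Cost)>>
{
    ["A"] = new() { ("B", 5), ("C", 1) },
    ["C"] = new() { ("B", 2) },
    ["B"] = new()
};

var visited = new HashSet<string>();
var toVisit = new PriorityQueue<string, int>();
toVisit.Enqueue("A", 0);

while (toVisit.TryDequeue(out var node, out var costSoFar))
{
    if (!visited.Add(node))
    {
        continue; //We already reached this node via a cheaper path.
    }

    Console.WriteLine($"Visiting {node} at total cost {costSoFar}");

    foreach (var (neighbour, cost) in graph[node])
    {
        toVisit.Enqueue(neighbour, costSoFar + cost);
    }
}

Running this visits A (cost 0), then C (cost 1), then B (cost 3, via C) – the cheaper path to B wins even though it was enqueued later.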

Using Custom Comparers

The above example is relatively easy to comprehend since the priority is nothing but an integer. But what if we have complex logic on how priority should be derived? We could build this logic ourselves and still use an integer priority, or we could use a custom comparer. Let’s do the latter!

Let’s assume that we are building a banking application. This is a fancy bank in the middle of London city, and therefore there is priority serving of anyone with the title of “Sir” in their name. Even if they show up at the back of the queue, they should get served first (Disgusting I know!).

The first thing we need to do is work out a way to compare titles. For that, this piece of code should do the trick :

class TitleComparer : IComparer<string>
{
    public int Compare(string titleA, string titleB)
    {
        var titleAIsFancy = titleA.Equals("sir", StringComparison.InvariantCultureIgnoreCase);
        var titleBIsFancy = titleB.Equals("sir", StringComparison.InvariantCultureIgnoreCase);


        if (titleAIsFancy == titleBIsFancy) //If both are fancy (Or both are not fancy, return 0 as they are equal)
        {
            return 0;
        }
        else if (titleAIsFancy) //Otherwise if A is fancy (And therefore B is not), then return -1
        {
            return -1;
        }
        else //Otherwise it must be that B is fancy (And A is not), so return 1
        {
            return 1;
        }
    }
}

We simply implement IComparer<T>, where T is the type we are comparing. In our case it’s just a simple string. Next, we check whether each of the passed in strings is the word “sir”. Then we do our ordering based on that. In general, a comparer should return the following :

  • Return 0 if the two items passed in are equal
  • Return -1 if the first item should be compared “higher” or have higher priority than the second
  • Return 1 if the second item should be compared “higher” or have higher priority than the first

Now when we create our queue, we can simply pass in our new comparer like so :

PriorityQueue<string, string> bankQueue = new PriorityQueue<string, string>(new TitleComparer());
bankQueue.Enqueue("John Jones", "Sir");
bankQueue.Enqueue("Jim Smith", "Mr");
bankQueue.Enqueue("Sam Poll", "Mr");
bankQueue.Enqueue("Edward Jones", "Sir");

Console.WriteLine("Clearing Customers Now");
while (bankQueue.TryDequeue(out string item, out string priority))
{
    Console.WriteLine($"Popped Item : {item}. Priority Was : {priority}");
}

And the output?

Clearing Customers Now
Popped Item : John Jones. Priority Was : Sir
Popped Item : Edward Jones. Priority Was : Sir
Popped Item : Sam Poll. Priority Was : Mr
Popped Item : Jim Smith. Priority Was : Mr

We are now serving all Sirs before everyone else!

When Is Priority Worked Out?

Something I wanted to understand was when is priority worked out? Is it on Enqueue, is it when we Dequeue? Or is it both?

To find out, I edited my custom comparer to do the following :

Console.WriteLine($"Comparing {titleA} and {titleB}");

Then using the same Enqueue/Dequeue above, I ran the code and this is what I saw :

Comparing Mr and Sir
Comparing Mr and Sir
Comparing Sir and Sir
Clearing Customers Now
Comparing Mr and Mr
Comparing Sir and Mr
Popped Item : John Jones. Priority Was : Sir
Comparing Mr and Mr
Popped Item : Edward Jones. Priority Was : Sir
Popped Item : Sam Poll. Priority Was : Mr
Popped Item : Jim Smith. Priority Was : Mr

So interestingly, we can see that when I am enqueuing, there are certainly comparisons, but only against the first node. So as an example, we see 3 compares at the top. That’s because I added 4 items. That tells me there is only a comparison against the very top item, otherwise it’s likely “heaped”.

Next, notice that when I call Dequeue, there is a little bit of comparison too.. To be honest, I’m not sure why this is. Specifically, there are two comparisons happening when realistically I assumed there would only be one (To compare the current head of the queue to the next).

Next time an item is popped, again we see a single comparison. And then finally, in the last 2 pops, no comparisons at all.

I would love to explain how all of this works but at this point it’s likely going over my head! That being said, it is interesting to understand that Priority is not *just* worked out on Enqueue, and therefore if your IComparer is slow or heavy, it could be running more times than you think.

That being said, the source code is of course open so you are more than welcome to make sense of it and leave a comment!

How Did We Get Here?

I just want to give a shout out to the fact that Microsoft does so many things with .NET out in the open. You can see back in 2015 the original proposal for PriorityQueue here : https://github.com/dotnet/runtime/issues/14032. Most importantly, it gives the community an insight into how decisions are made and why. Not only that, but benchmarks are given as to different approaches and a few explanations on why certain things didn’t make it into the first cut of the Priority Queue API. It’s really great stuff!


Normally when loading navigation properties in EF Core, you’re forced to use the “Include” method to specify which navigational properties to pull back with your query. This is a very good practice because it means you are explicitly saying what pieces of data you actually require. In fact, up until EF Core 2.1, there wasn’t even an option to use Lazy Loaded entities (Although if you do want to do that, we have a guide on that here : https://dotnetcoretutorials.com/2019/09/07/lazy-loading-with-ef-core/ ).

Just as an example of how you might use the “Include” method, let’s imagine I have two classes. One called “Contact”, and one called “ContactEmail”.

class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<ContactEmail> ContactEmails { get; set; }
}

class ContactEmail
{
    public int ContactId { get; set; }
    public Contact Contact { get; set; }
    public string Email { get; set; }
}

With EF Core code first, this navigational property would be handled for us based on conventions, no problem there. When querying Contacts, if we wanted to also fetch the ContactEmails at the same time, we would have to do something like so :

_context.Contact.Include(x => x.ContactEmails)
                .FirstOrDefault(x => x.Id == myContactId)

This is called “Eager Loading” because we are eagerly loading the emails, probably so we can return them to the user or use them somewhere else in our code.

Now the problem with this is what if we are sure that *every* time we load Contacts, we want their emails at the same time? We are certain that we will never be getting contacts without also getting their emails essentially. Often this is common on one-to-one navigation properties, but it also makes sense even in this contact example, because maybe everywhere we show a contact, we also show their emails as they are integral pieces of data (Maybe it’s an email management system for example).

AutoInclude Configuration

Up until EF Core 5, you really had no option but to use Includes. That’s changed with a largely undocumented feature that has come in handy for me lately!

All we need to do is go to our entity configuration for our contact, and do the following :

builder.Navigation(x => x.ContactEmails).AutoInclude();

To be honest, I’ve never really used the Navigation configuration builder, so I didn’t even know it existed. And it’s important to distinguish that you cannot write AutoInclude() on things like HasOne() or HasMany() configurations, it has to stand on its own like above.

And.. That’s it! Now every time I get Contacts, I also get their ContactEmails without having to use an Include statement.
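If you’re using entity configuration classes, a minimal sketch of where that call lives might look like the below – the ContactConfiguration class itself is hypothetical, and the same call can equally go straight into OnModelCreating via modelBuilder.Entity<Contact>() :

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class ContactConfiguration : IEntityTypeConfiguration<Contact>
{
    public void Configure(EntityTypeBuilder<Contact> builder)
    {
        //Every query for Contact will now also load ContactEmails by default.
        builder.Navigation(x => x.ContactEmails).AutoInclude();
    }
}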

Ignoring AutoInclude

Of course, there are times where you opt into AutoInclude and then the very next day, you want to write a query that doesn’t have includes! Luckily, there is a nice IQueryable extension for that!

 _context.Contact.IgnoreAutoIncludes()
    .FirstOrDefault(x => x.Id == myContactId)

Here we can easily opt out so we are never locked into always having to pull back from the database more than we need!


I’ve recently had to set up a new project using Auth0 as an “Identity As A Service” provider. Essentially, Auth0 provides an authentication service using an OAuth2 flow, meaning I don’t have to store passwords, worry about passwords resets, or implement my own two factor authentication. Everything about authenticating a user is handled by Auth0, it’s great!

What’s not great is their documentation. I’ve had to use Auth0 (And Azure AD B2C) in a tonne of projects over the years. And every time, I’m reminded that their documentation just plain sucks. At a guess, I think it’s because you only do it once. So if you set up Auth0 for your product, you’re only doing that once and you’ll never have to do it again. So any pains in the documentation you quickly get over. Except if you’re me! Because I work across a whole range of projects on a contract basis, I may do a new Auth0 setup up to 3 – 4 times per year. And every time, it’s painful.

In this series, I’m going to show you how to authenticate your API using Auth0, from setting up your Auth0 tenant all the way to setting up Swagger correctly. It will serve as a great guide if it’s your first time using Auth0, and for those more experienced, it will provide a good run sheet every time you have to set up a new tenant.


This post is part of a series on using Auth0 with an ASP.NET Core API, it’s highly recommended you start at part 1, even if you are only looking for something very specific (e.g. you came here from Google). Skipping parts will often lead to frustration as Auth0 is very particular about which settings and configuration pieces you need.

Part 1 – Auth0 Setup
Part 2 – ASP.NET Core Authentication
Part 3 – Swagger Setup


Creating An Auth0 API

The first thing we need to do is create a new “API” within the Auth0 dashboard. From Auth0, click the APIs menu item, click “Create API” and fill it in similar to the following :

The Name field can be anything, and is purely used within the portal. This might be useful if you have multiple different API’s that will authenticate differently, but for the most part, you can probably name it your product.

The “Identifier” is a little more tricky. It plays a similar role to the above in that it identifies which API is being authenticated for, but… Again, if you have one API it’s not too important. I typically do https://myproductname. It does not have to be a URL at all however, but this is just my preference.

Leave the signing algorithm as is and hit Create!

Copy the Identifier you used into a notepad for safe keeping as we will need it later.

Creating Your Auth0 Application

Next we need to set up our Auth0 Application. An application within the context of Auth0 can be thought of as a “solution”. Within your solution you may have multiple API’s that can be authenticated for, but overall, they are all under the same “Application”.

By default, Auth0 has an application created for you when you open an account. You can rename this to be the name of your product like so :

Also take note of your “Domain” and “ClientId”. We will need these later so copy and paste them out into your notepad file.

Further down, set your “Application Type” to “Single Page Application”.

On this same settings page for your application, scroll down and find the “Allowed Callback URLs”. This should be set up to allow a callback to your front end (e.g. React, Angular etc). But it should also allow for a Swagger callback. (Confusing, I know). But to put it simply, pop in the URL of your local web application *and* the domain of your API application like so :

Remember to hit “Save Changes” right at the bottom of the page.

Adding Configuration To ASP.NET Core

In our .NET Core solution, open up the appsettings.json file. In there, add a JSON node like so :

"Authentication": {
  "Domain": "https://mydomain.us.auth0.com/",
  "Audience": "https://myproduct",
  "ClientId": "6ASJKHjkhsdf776234"
}

We won’t actually use this configuration anywhere except in our startup method, so for now, don’t worry about creating a C# class to represent this configuration.

Next Steps

So far we’ve set up everything we need on the Auth0 side, and we’ve grabbed all the configuration values and put them into ASP.NET Core. Now, we need to set up everything related to authentication inside our .NET Core App. You can check out the next step in the series here : https://dotnetcoretutorials.com/2021/02/14/using-auth0-with-an-asp-net-core-api-part-2-asp-net-core-authentication/


Some time back, I wrote a post about PostSharp Threading. I was incredibly impressed by the fact that a complicated task such as thread synchronization had been boiled down to just a couple of C# attributes. While writing the post, I also took a look at the other libraries available from PostSharp, and something that caught my eye was the PostSharp Logging framework. Now I’ve seen my fair share of logging frameworks so at first, I wasn’t that jazzed. Generally speaking when I see a new logging library get released, it’s just another way to store text logs and that’s about it. But PostSharp Logging does something entirely new, without completely re-inventing the wheel.

Of course we are going to dig into all the goodness, but at an overview level, PostSharp Logging is more like a mini APM, automatically logging what’s going on inside your application rather than just giving you some static “Logger.Error(string message)” method to output logs to. And instead of making you configure yet another logging platform with complicated XML files and boilerplate code, it just hooks into whatever logging framework you are already using. Serilog, Log4Net, and even just the plain old ASP.NET Core logger factory are supported with very little setup.

Setting Up Logging

I’ve kind of sold the zero setup time a little bit here so let’s look at actually what’s required.

The first thing we have to do is install the nuget package for our particular logging framework. Now this might get complicated if you are using things like Serilog or Log4Net on top of the .NET Core logger, but for me, I’m just looking to pump all messages to the standard .NET Core output. So all I need to do is install the following two packages :

Install-Package PostSharp.Patterns.Diagnostics
Install-Package PostSharp.Patterns.Diagnostics.Microsoft

Next, I have to do a little bit of work in my program.cs to add the PostSharp logger :

public static void Main(string[] args)
{
    var host = CreateHostBuilder(args).Build();
    var loggerFactory = (ILoggerFactory)host.Services.GetService(typeof(ILoggerFactory));
    LoggingServices.DefaultBackend = new MicrosoftLoggingBackend(loggerFactory);
    host.Run();
}

This might seem a little complicated, but actually you’re just going to be copy and pasting this from the documentation from PostSharp, there actually isn’t much thought involved!

And that’s it! Now we can simply add the [Log] attribute to any method and have it log some pretty juicy stuff. For example, consider the following code :

[Log]
[HttpGet("Hello")]
public async Task<IActionResult> Hello([FromQuery]string name)
{
    if(string.IsNullOrEmpty(name))
    {
        return BadRequest("A name is required");
    }

    return Ok($"Hello {name}!");
}

With nothing but the log attribute, I suddenly see these sorts of messages popping up when I call a URL such as /Hello?name=Bob.

dbug: PostSharpLogging.Controllers.TestController[2]
      TestController.Hello("Bob") | Starting.
dbug: PostSharpLogging.Controllers.TestController[4]
      TestController.Hello("Bob") | Succeeded: returnValue = {OkObjectResult}.

Notice how I now capture the method being executed, the parameters being passed in, and what the result was. This can be incredibly important because not only are you capturing what methods are running, but you are capturing the input and output of those methods. This could be invaluable if you’re trying to debug under what circumstances a particular method fails or produces an unexpected response.

Writing Detailed APM Style Logging Messages

Earlier I spoke a little bit about how I thought PostSharp.Logging was more like a mini APM rather than a logging framework. That doesn’t mean it can’t log your standard text messages, but at the same time, it has incredible capability to “time” methods and capture exactly what’s going on in your application with very little set up.

All I need to do is create a file in the root of my project called postsharp.config. In it, I add the following :

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.postsharp.org/1.0/configuration">
  <Logging xmlns="clr-namespace:PostSharp.Patterns.Diagnostics;assembly:PostSharp.Patterns.Diagnostics">
    <Profiles>
      <LoggingProfile Name="Detailed" IncludeSourceLineInfo="True" IncludeExecutionTime="True" IncludeAwaitedTask="True">
      </LoggingProfile>
    </Profiles>
  </Logging>
</Project>

It may look confusing at first, but the PostSharp documentation gives you almost all of this out of the box. So what are we now adding to our logs?

  • Capturing the source line info (e.g. What line number is being executed).
  • Capturing the total execution time for a method.
  • Including awaited tasks (more on this later!), which means we can actually see when a task is truly awaited. This is invaluable when solving deadlock issues.

All of this is combined to create a named logging profile called “Detailed”. Named profiles are handy because we can now change all of the logging for our project from this one configuration file, instead of going around modifying Log attributes one by one.

It does mean that we have to modify our Log attribute to look like this :

[Log("Detailed")] // Pass in our log profile name
[HttpGet("Hello")]
public async Task<IActionResult> Hello([FromQuery]string name)
{
    if(string.IsNullOrEmpty(name))
    {
        return BadRequest("A name is required");
    }

    return Ok($"Hello {name}!");
}

And now if we run things?

dbug: PostSharpLogging.Controllers.TestController[4]
      TestController.Hello("Bob") | Succeeded: returnValue = {OkObjectResult}, 
      executionTime = 0.40 ms, 
      source = {WeatherForecastController.cs: line 18}.

So now not only are we capturing the input and output, but we are also capturing the total execution time of the method as well as the actual line number of the code. If there was a particular input to this method that caused a slow down or a noticeable performance impact, then we would be able to capture that easily. In fact, let’s test that out now!

Capturing Performance Degradations With PostSharp Logging

I want to create an artificial delay in my application to test how PostSharp Logging identifies this. But before I do this, I want to explain a concept called “Wall Time”.

Wall Time is also sometimes called Wall Clock Time, or even just Real World Time. What it means is that if I’m timing the performance of my application, the only real metric I care about is the actual time a user sits there waiting for a response. So it’s the time from a user, say, clicking a button, to actually seeing a response. We call this Wall Time or Wall Clock Time because, if there was a clock on the wall, we could use it to time the response. Now, this can deviate slightly when compared to things such as “CPU Time”. CPU Time refers to how much time the CPU actually spent completing your task. This may differ because the CPU may be juggling work, or it may delay your work because it’s processing someone else’s request, or you may even have an intentional delay in your code.

Confused? Maybe this simplified diagram will help.

Notice how our user in blue sent a request to the CPU, but it was busy servicing our user in red. Once it finished red’s tasks, it then swapped to blue. If you asked the CPU how long it spent working on blue’s task, it would give a very different answer to if you asked the blue user how long they waited. Both timings matter, but it’s an important distinction to make when you are building software for end users.
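If a code example makes that clearer than a diagram, here’s a quick sketch of my own (nothing to do with PostSharp) that measures both figures for the same operation :

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class WallVsCpuTime
{
    static async Task Main()
    {
        var wallClock = Stopwatch.StartNew();
        var cpuBefore = Process.GetCurrentProcess().TotalProcessorTime;

        // Simulate waiting on an external resource - the CPU does almost no work here.
        await Task.Delay(1000);

        var cpuAfter = Process.GetCurrentProcess().TotalProcessorTime;
        wallClock.Stop();

        // Wall time will be roughly 1000ms, while CPU time stays close to zero.
        Console.WriteLine($"Wall time: {wallClock.ElapsedMilliseconds} ms");
        Console.WriteLine($"CPU time: {(cpuAfter - cpuBefore).TotalMilliseconds} ms");
    }
}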

OK, so with that out of the way, why do I bring it up now? Well, there is a very large APM product on the market right now that gives timings in CPU Time. While sometimes helpful, this was actually incredibly irritating because it doesn’t capture the time a user actually spent waiting. And there is a very easy test for this, and that is to use Task.Delay to simulate the CPU not doing work.

Let’s modify our code to look like so :

[Log("Detailed")]
[HttpGet("Hello")]
public async Task<IActionResult> Hello([FromQuery]string name)
{
    if(string.IsNullOrEmpty(name))
    {
        return BadRequest("A name is required");
    }

    if(name == "wade")
    {
        await Task.Delay(1000);
    }

    return Ok($"Hello {name}!");
}

Now if I pass in the name “wade”, I’ll be forced to wait an extra 1000ms before I am given a response. So how does PostSharp log this?

dbug: PostSharpLogging.Controllers.TestController[16]
      TestController.Hello("wade") | Awaiting: asyncCallId = 1, awaitedMethod = Task.Delay
dbug: PostSharpLogging.Controllers.TestController[32]
      TestController.Hello("wade") | Resuming: asyncCallId = 1, awaitedMethod = Task.Delay
dbug: PostSharpLogging.Controllers.TestController[4]
      TestController.Hello("wade") | Succeeded: returnValue = {OkObjectResult}, executionTime = 1038.39 ms

Interesting. The first thing to note is that because I earlier turned on logging for awaited methods, I can now see when a method is actually awaited, and when it’s resumed. This is really important when working with async/await because not every await actually results in the method being truly awaited (but more on that in another post).

Most importantly, look at our execution time! 1038ms. PostSharp is indeed logging the execution time correctly as it pertains to wall time. This is exactly what we want. It may seem like something so simple, but as I’ve said, I know of APM products on the market right now that can’t get this right.

There’s still something more I want to do with this code however. We’re still logging an awful lot when really we just want to capture logging if the performance is degraded. And of course, PostSharp Logging provides us with this. If we modify our logging profile to look like so :

<LoggingProfile Name="Detailed" ExecutionTimeThreshold="200" IncludeSourceLineInfo="True" IncludeExecutionTime="True" IncludeAwaitedTask="True"> 
</LoggingProfile>

We set the ExecutionTimeThreshold to 200ms, and for anything over that we get :

warn: PostSharpLogging.Controllers.TestController[32768]
      TestController.Hello("wade") | Overtime: returnValue = {OkObjectResult}, executionTime = 1012.60 ms, threshold = 200 ms}.

Notice how this is a “Warn” message, not a debug message. Now we can perfectly isolate performance impacts to this particular input, rather than sifting through thousands of logs.

Logging Multiple Methods

Let’s say that you’ve already got a large existing project, but you want to add logging to all controller actions. If we used our code above, we would have to go through copying and pasting our Log attribute everywhere, which could be quite the task. And again, if we ever want to remove this logging, we have to go through deleting the attribute.

But PostSharp has us covered with “Multicasting”. Multicasting is the ability to apply the attribute to multiple declarations using a single line of code. And best of all, it allows us to filter where we apply it by using wildcards, regular expressions, or even filtering on some attributes. That means it’s not an all or nothing approach. We can almost fine tune where we log just as well as if we were placing the Log attribute manually on each method.

To get started, create a file called “GlobalLogging.cs” and place it in the root of your project.

Inside, we’re gonna add the following :

using PostSharp.Extensibility;
using PostSharp.Patterns.Diagnostics;

[assembly: Log(AttributePriority = 1, 
    ProfileName = "Detailed",
    AttributeTargetTypes ="MyProjectName.Controllers.*", 
    AttributeTargetMemberAttributes = MulticastAttributes.Public)]

All we are saying is, add the Log attribute, with the ProfileName of “Detailed”, to all target types that are under the controllers namespace. I’m also going to add another filter to say only do this for public methods.

Running my project now, I receive all of the same logging on all of my controller methods, but without having to manually add the Log attribute!

Again, the simplicity of PostSharp stands out. We can add multiple of these global attributes to this file, each with its own finely tuned wildcards/regexes, and just have it… work. I almost want to write more about all the options available here, but it’s all so simple and works so well out of the box that I’m literally just giving one liners to completely re-invent your logging. It’s really great stuff.
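As a quick illustration of that (the controller name and namespace here are hypothetical), you can stack a second multicast attribute to opt a noisy controller back out of the “Detailed” profile, using AttributeExclude and a higher priority :

using PostSharp.Extensibility;
using PostSharp.Patterns.Diagnostics;

// Apply detailed logging to every public controller method...
[assembly: Log(AttributePriority = 1,
    ProfileName = "Detailed",
    AttributeTargetTypes = "MyProjectName.Controllers.*",
    AttributeTargetMemberAttributes = MulticastAttributes.Public)]

// ...but exclude the (hypothetical) HealthController, which would otherwise flood the logs.
[assembly: Log(AttributeExclude = true,
    AttributePriority = 2,
    AttributeTargetTypes = "MyProjectName.Controllers.HealthController")]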

Who Is This Library For?

If you’re working on a software stack that requires you to be constantly managing performance and fine tuning the system, then I think PostSharp Logging is kind of a no brainer. I think the name of “Logging” implies that all it’s really going to do is write text logs for you, but it’s so much more powerful than that.

I’ve used off-the-shelf APM products that don’t do as good a job of isolating things down to the method level, and those come with a monthly subscription and a slow, lag-ridden portal to boot. I think the bring-your-existing-logging-framework approach is one of the most powerful aspects of PostSharp: being able to use what you already have, but supercharge those logs along the way.


This is a sponsored post, however all opinions are mine and mine alone.


I’ve recently been doing battle trying to get Azure Application Insights playing nice with an Azure Function. Because they are from the same family, I thought there wouldn’t be an issue, but Microsoft’s lack of documentation is really letting the team down here. This will be a short and sweet post that hopefully clears some things up.

Adding Application Insights

So the first thing that is different about using Application Insights with an Azure Function is that you don’t need any additional NuGet packages. Under the hood, the packages that a function relies on out of the box already depend on the Application Insights package. So theoretically, everything is set up for you.

The only thing you actually need to do is set an application setting named “APPINSIGHTS_INSTRUMENTATIONKEY” somewhere in your application.

For a function hosted on Azure, this is easy: you can go to the Configuration tab of your Function App and add your instrumentation key there.
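If you’d rather script this than click through the portal, the same setting can be applied with the Azure CLI (the app and resource group names below are placeholders) :

az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<your-key>"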

Locally, you will be using either local.settings.json or appsettings.json, depending on how your function is set up. Generally, either will work; it mostly depends on how your individual project manages its settings locally.
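For example, in local.settings.json the key just sits alongside your other values, something like this (the surrounding entries are typical defaults and will vary by project) :

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "<your-instrumentation-key>"
  }
}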

Again, you don’t need to do anything to read this key, you just need to have it there and automagically, the function will wire everything up.

Now the other thing to note is that in the Azure Portal, on a Function, you’ll have an option to “Enable Application Insights” if you haven’t already.

But actually, all this does is add the instrumentation key to your app settings, just like we did above. It doesn’t do any fancy behind-the-scenes wiring up; it’s literally just a text field for the same setting.

Configuring Application Insights For Azure Functions

So the next thing I found was that you are supposedly able to edit your function’s host.json file and add in settings for Insights. But what I found is that there are a tonne of settings that aren’t documented (yet?). The official documentation is located here : https://docs.microsoft.com/en-us/azure/azure-functions/functions-host-json. It looks good, but it doesn’t seem to have quite as many options for Application Insights as, say, using it in a regular C# app.

So I actually had to dig into the source code. That took me here : https://github.com/Azure/azure-webjobs-sdk/blob/v3.0.26/src/Microsoft.Azure.WebJobs.Logging.ApplicationInsights/ApplicationInsightsLoggerOptions.cs. These are the actual settings that you can configure, some of which you cannot find documentation for, but you can make some educated guesses about what they do.

For me, I needed this :

"dependencyTrackingOptions": {
    "enableSqlCommandTextInstrumentation" :  true
}

This enables Application Insights to not only capture that a SQL command took place, but capture the actual text of the SQL so that I can debug any slow queries I see happening inside the application.
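For context, that fragment lives under the applicationInsights section of host.json. Assuming the standard Functions host.json layout, the whole file ends up looking roughly like this :

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "dependencyTrackingOptions": {
        "enableSqlCommandTextInstrumentation": true
      }
    }
  }
}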

Again, I couldn’t find any documentation on setting this variable up, except the original source code. Yay open source!

If It Doesn’t Work, Chances Are There Is A Bug

The other thing I noticed about Application Insights in general is that there are a tonne of bugs that hang around for much longer than you might expect. For example, when I first added my App Insights key to my function, I wasn’t collecting any information about SQL queries coming from the app. Asking around, people just assumed maybe you had to add another NuGet package for that, or that I had set something up wrong.

In fact, there is a bug that has been around for 3 – 6 months where certain versions of Entity Framework suddenly don’t work with App Insights. Insights would capture the correct request, but it wouldn’t log any SQL dependency telemetry with any version of EF Core above 3.1.4.

https://stackoverflow.com/questions/63053334/enable-sql-dependency-in-application-insights-on-azure-functions-with-ef-core
https://github.com/microsoft/ApplicationInsights-dotnet/issues/2032
https://github.com/Azure/Azure-Functions/issues/1613

How does this help you? Well, it probably doesn’t, unless you are specifically missing SQL queries from your App Insights. But I just want to point out that by default, out of the box, adding Application Insights to an Azure Function should capture *everything*. You do not have to do anything extra. If you are not capturing something (for example, I saw another bug where it wasn’t capturing HttpClient requests correctly), then almost certainly it will be the mishmash of versions of something you are using causing the problem.
