Around 6-ish years ago, NodeJS really hit its peak in popularity. In part, it was because people had no choice but to learn Javascript because of the popularity of front end JS frameworks at the time, so why not learn a backend that uses Javascript too? But also I think it was because of the simplicity of building APIs in NodeJS.

Remember at the time, you were dealing with .NET Framework’s bloated web template that generated things like an OWIN pipeline, or a global.asax file. You had things like MVC filters, middleware, and usually we were building huge multi tier monolithic applications.

I remember my first exposure to NodeJS was when a company I worked for was trying to build a microservice that could do currency conversions. The overhead of setting up a new .NET Framework API was overwhelming compared to the simplicity of a one file NodeJS application with a single endpoint. It was really a no brainer.

If you’ve followed David Fowler on Twitter at any point in the past couple of years, you’ve probably seen him mention several times that .NET developers have a tendency to be unable to build a truly minimal API. It always has to be a dependency injected, 3-tier, SQL Server backed monolith. And in some ways, I actually agree with him. And that’s why, in .NET 6, we are getting the “minimal API” framework to allow developers to create micro APIs without the overhead of the entire .NET ecosystem weighing them down.

Getting Setup With .NET 6 Preview

At the time of writing, .NET 6 is in preview, and is not currently available in general release. That doesn’t mean it’s hard to set up, it just means that generally you’re not going to have it already installed on your machine if you haven’t already been playing with some of the latest fandangle features.

To get set up using .NET 6, you can go and read our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

Remember, this feature is *only* available in .NET 6. Not .NET 5, not .NET Core 3.1, or any other version you can think of. 6.

Introducing The .NET 6 Minimal API Framework

In .NET 5, top level programs were introduced, which essentially meant you could open a .cs file, write some code, and have it run without namespaces, classes, and all the cruft holding you back. .NET 6 minimal APIs just take that to another level.

With the .NET 6 preview SDK installed, open a command prompt in a folder and type :

dotnet new web -o MinApi

Alternatively, you can open an existing console application, delete everything in the program.cs, and edit your .csproj to look like the following :

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>
</Project>

If you used the command to create your project (And if not, just copy and paste the below), you should end up with a new minimal API that looks similar to the following :

using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/hello", () => "Hello, World!");

app.Run();

This is a fully fledged .NET API, with no DI, no configuration objects, and all in a single file. Does it mean that it has to stay that way? No! But it provides a much lighter weight starting point for any API that needs to just do one single thing.

Of course, you can add additional endpoints, add logic, return complex types (That will be converted to JSON as is standard). There really isn’t much more to say because the idea is that everything is simple and just works out of the box.
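
For example, an additional endpoint that returns an object will be serialized to JSON automatically. This is just a quick sketch, and the route and the anonymous type are purely illustrative :

app.MapGet("/goodbye", () => new { Message = "Goodbye, World!" }); //Returned as JSON by default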

Adding Dependency Injection

Let’s add a small addition to our API. Let’s say that we want to offload some logic to a service, just to keep our APIs nice and clean. Even though this is a minimal API, we can create other files if we want to right?!

Let’s create a file called HelloService.cs and add the following :

public class HelloService
{
    public string SayHello(string name)
    {
        return $"Hello {name}";
    }
}

Next, we actually want to add a NuGet package so we can have the nice DI helpers (Like AddSingleton, AddTransient) that we are used to. To do so, add the Microsoft.Extensions.DependencyInjection package (the same namespace used in the code below), but ensure that the prerelease box is ticked as we need the .NET 6 version of the package, not the .NET 5 version.

Next, let’s head back to our minimal API file and make some changes so it ends up looking like so :

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<HelloService>(new HelloService());

var app = builder.Build();

app.MapGet("/hello", (HttpContext context, HelloService helloService) => helloService.SayHello(context.Request.Query["name"].ToString()));

app.Run();

Here’s what we’ve done :

  • We added our HelloService as a dependency to our service collection (Much like we would with a full .NET API)
  • We modified our API endpoint to inject in our HttpContext and our HelloService
  • We used these to generate a response, which should say “Hello {name}”. Nice!

We can obviously do similar things if we wish to load configuration. Again, you’re not limited by using the minimal API template, it’s simply a way to give you an easier boilerplate for micro APIs that don’t come with a bunch of stuff that you don’t need.
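
For example, a quick sketch of reading a value from configuration (the "MyGreeting" key here is purely hypothetical) looks like this :

var builder = WebApplication.CreateBuilder(args);
var greeting = builder.Configuration["MyGreeting"]; //Reads from appsettings.json, environment variables etc.

var app = builder.Build();

app.MapGet("/greeting", () => greeting ?? "No greeting configured");

app.Run();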

Taking Things Further

It’s very early days yet, and as such, the actual layout and code required to build minimal APIs in .NET 6 is changing between preview releases. As such, be careful reading other tutorials out on the web on the subject, because they either become outdated very quickly *or*, more often than not, they guess at what the end API will look like rather than what is actually in the latest release. I saw this a lot when Records were introduced in C# 9, where people kinda “guessed” how records would work, and not how they actually did upon release.

So with that in mind, keep an eye on the preview release notes from Microsoft. The latest version is here : https://devblogs.microsoft.com/aspnet/asp-net-core-updates-in-net-6-preview-6/ and it includes how to add Swagger to your minimal API. Taking things further, don’t get frustrated if some blog you read shares code, and it doesn’t work, just keep an eye on the release notes and try things as they come out and are available.

Early Adopter Bonus

Depending on when you try this out, you may run into the following errors :

Delegate 'RequestDelegate' does not take 0 arguments

This is because in earlier versions of the minimal framework, you had to cast your delegate to a function like so :

app.MapGet("/", (Func)(() => "Hello World!"));

In the most recent preview version, you no longer have to do this, *but* the tooling has not caught up yet. So building and running via Visual Studio isn’t quite working. To run your application and get past this error, simply use a dotnet build/dotnet run command from a terminal and you should be up and running. This is actually a pretty common scenario where Visual Studio is slightly behind an SDK version, and is just what I like to call an “early adopter bonus”. If you want to play with the latest shiny new things, sometimes there’s a couple of hurdles getting there.


Cyclomatic complexity is a measure of how “complex” a piece of code is by counting the number of unique paths someone could take to reach the end. Another way to think of it is how many unique unit tests would I have to write for a particular method to test all possible outcomes. The number of tests is what we would define as our cyclomatic complexity.

Cyclomatic Complexity Examples

Nothing beats a real world code example. So let’s take a look at a few.

public bool MyMethod(int myValue)
{
    return true;
}

The above piece of code has a cyclomatic complexity of exactly 1. Because there is only one way to pass through the code, top to bottom.

public bool MyMethod(int myValue)
{
    if(myValue % 2 == 0)
    {
        return true;
    }
    else
    {
        return false;
    }
}

Now our code has a cyclomatic complexity of 2. Because either the number we pass in is divisible by 2, and therefore we return true, or it’s not and we return false.

public bool MyMethod(int myValue)
{
    if(myValue < 0)
    {
        return false;
    }

    if(myValue % 2 == 0)
    {
        return true;
    }
    else
    {
        return false;
    }
}

I included the above example intentionally to illustrate that cyclomatic complexity does not necessarily depend on the number of unique outcomes, but instead the paths any particular execution of the code can take. In the above code, the cyclomatic complexity is 3, even though two of the paths return false.

Why Does It Matter?

The most obvious next question is, why does it matter? Or does it even matter at all? And it’s actually somewhat hard to answer. As we will talk about soon, I’m not sure there is a perfect “threshold” where we can say no method should be above this, but at the same time, cyclomatic complexity can be a good sign that things need to be broken up into smaller parts. Certainly it’s a sign that a method may be hard to follow for any new developer reading the code for the first time. It’s also typically a sign that any manual or automated testing may require several different tests to pass through the same method, testing all possible pathways and outcomes.

Furthermore for unit testing, cyclomatic complexity generally leads to more test “set up” to make sure the stars align so you can test a specific outcome. The more set up you have to do for a unit test, often the more brittle a test will become and any small change can make your house of cards fall down, even if your particular test isn’t testing that scenario.

Let me give you an example piece of code :

public Person GetPerson(int personId)
{
    if(getCurrentUserRole() != UserRole.Admin)
    {
        throw new UnauthorizedException();
    }

    var person = personRepository.GetPerson(personId);
    if(person == null)
    {
        throw new NotFoundException();
    }

    return person;
}

If I write a unit test to check that when a person is not found, I throw a NotFoundException(), I first have to ensure that the underlying method to getCurrentUserRole() returns an admin. Even though my unit test is trying to test a specific outcome (NotFoundException), I have to set up my unit test to first handle the pre-conditions. Any pre-conditions added to this method (For example, maybe I also want to ensure that the PersonId is more than 1), then need to be accounted for in my unit test even though again, the unit test itself is not testing the pre-conditions.

While this in and of itself is not an example of cyclomatic complexity (Probably more in the realms of the SOLID Principles), cyclomatic complexity adds to this because inevitably you are testing a single condition at the end of what could be a long train of conditions before it.
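
To make that concrete, here’s a rough sketch of what such a test might look like using Moq. The PersonService class and the IUserRoleService/IPersonRepository interfaces are hypothetical, purely to show how much set up goes into a test that isn’t even about user roles :

[Fact]
public void GetPerson_WhenPersonNotFound_ThrowsNotFoundException()
{
    var roleService = new Mock<IUserRoleService>();
    var personRepository = new Mock<IPersonRepository>();

    //Pre-condition set up that has nothing to do with what we are actually testing.
    roleService.Setup(x => x.GetCurrentUserRole()).Returns(UserRole.Admin);

    //The behaviour we actually care about : the person does not exist.
    personRepository.Setup(x => x.GetPerson(It.IsAny<int>())).Returns((Person)null);

    var service = new PersonService(roleService.Object, personRepository.Object);

    Assert.Throws<NotFoundException>(() => service.GetPerson(123));
}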

Cyclomatic Complexity “Thresholds”

If you’ve ever used a static analysis tool such as SonarQube or NDepend before, you’ve probably seen cyclomatic complexity (Or its sibling, cognitive complexity) thresholds thrown at you. The two common numbers are either 10 or 15. I don’t necessarily think that there is any particular magic number that you should stay under, but around 10 unique pathways through a method should tell you that you do need to break things up if at all possible.

Lowering Cyclomatic Complexity

The most common way I see developers try and break up large methods is actually by creating many private methods or sub routines that then get called, in sequence, from the original method. Arguably, this has not lowered the complexity of the method and if anything, has simply made it harder to understand by any reader.

The most common methods for lowering complexity are :

  • Refactoring or Reordering a series of IF/Else statements to better control flow, exit early, or re-use certain if/else statements (see the sketch after this list).
  • Using design patterns such as the Strategy Pattern to swap implementations on the fly (For example using Dependency Injection) rather than relying on if/else statements
  • Similar to the above, using factory patterns to offload logic for creating objects to a centralized place, and reducing the complexity within individual methods
  • Where it makes sense, creating re-usable helpers and services that encapsulate functionality or logic away from individual private methods.
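
As a quick sketch of that first point, flattening a nested if/else chain into early exits won’t always change the raw number, but it makes each pathway far easier to follow. The Order type here is hypothetical :

//Before : every path is nested inside the previous condition
public string GetShippingLabel(Order order)
{
    if (order != null)
    {
        if (order.IsPaid)
        {
            return $"Ship to {order.Address}";
        }
        else
        {
            return "Awaiting payment";
        }
    }
    else
    {
        return "No order";
    }
}

//After : exit early on the edge cases, leaving the happy path last
public string GetShippingLabel(Order order)
{
    if (order == null)
        return "No order";

    if (!order.IsPaid)
        return "Awaiting payment";

    return $"Ship to {order.Address}";
}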

Again, most of the above is simply moving the logic away from one method into another which in some cases, does not reduce your application complexity at all. But in those cases, it can often improve the readability of your code, and make it simpler to understand for the next developer.


With anonymity becoming more and more important each day on the internet, people are turning to VPNs, gateways and proxies to encrypt and hide their identity on the internet. An extremely common protocol for doing this is the SOCKS proxy protocol.

SOCKS proxies came out (Or at least the protocol was defined) in the ’90s. But would it surprise you to know that .NET has never supported SOCKS proxies? In fact, ever since .NET was open sourced on Github, there has been an open issue for adding SOCKS support, dating back to 2016 : https://github.com/dotnet/runtime/issues/17740

Well, that’s now all in the past. Finally, support for SOCKS proxies in .NET is here! Well, in .NET 6 anyway.

Getting Setup With .NET 6

At the time of writing, .NET 6 is in preview, and is not currently available in general release. That doesn’t mean it’s hard to set up, it just means that generally you’re not going to have it already installed on your machine if you haven’t already been playing with some of the latest fandangle features.

To get set up using .NET 6, you can go and read our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

Remember, this feature is *only* available in .NET 6. Not .NET 5, not .NET Core 3.1, or any other version you can think of. 6.

Basic Usage

The usage for a SOCKS proxy is actually the same as HTTP proxy usage with HttpClient, except that you can now specify the scheme of the address as socks. For example :

var proxy = new WebProxy();
proxy.Address = new Uri("socks5://127.0.0.1:8080");
//proxy.Credentials = new NetworkCredential(); //Used to set Proxy logins. 
var handler = new HttpClientHandler
{
    Proxy = proxy
};
var httpClient = new HttpClient(handler);

What that generally means is that while documentation specifically covering SOCKS in .NET may be slow to appear, if you find any documentation on using proxies with a .NET HttpClient, it should be pretty much identical.

I’d love to write more about this feature but it really is that simple!

Using SOCKS with HttpClient Factories

Since .NET Core 2.1, the usage of HttpClient has been pushed towards using HttpClient factories instead of newing up instances yourself. For more information on HttpClient factories, you can read our helpful guide here : https://dotnetcoretutorials.com/2018/05/03/httpclient-factories-in-net-core-2-1/

Obviously if you are using HttpClient factories, you need a way to specify an HttpClientHandler somewhere in that configuration.

There’s a couple of things to note. First, if you are using named configurations, you can specify a PrimaryHttpMessageHandler yourself that configures your proxy such as :

services.AddHttpClient("proxyClient")
    .ConfigurePrimaryHttpMessageHandler(() => 
    {
        var proxy = new WebProxy();
        proxy.Address = new Uri("socks5://127.0.0.1:8080");
        return new HttpClientHandler
        {
            Proxy = proxy
        };
    });

Again, I want to stress that this is only because I specified a named factory.

This also works if you use a strongly typed factory such as so :

services.AddHttpClient<IMyClient, MyClient>()
    .ConfigurePrimaryHttpMessageHandler(() => 
    {
        var proxy = new WebProxy();
        proxy.Address = new Uri("socks5://127.0.0.1:8080");
        return new HttpClientHandler
        {
            Proxy = proxy
        };
    });

However, if I try and configure the *default* HttpClient factory with no naming, and no types, like so :

services.AddHttpClient()
    .ConfigurePrimaryHttpMessageHandler(() => 
    {
        var proxy = new WebProxy();
        proxy.Address = new Uri("socks5://127.0.0.1:8080");
        return new HttpClientHandler
        {
            Proxy = proxy
        };
    });

I instead get the following error :

'IServiceCollection' does not contain a definition for 'ConfigurePrimaryHttpMessageHandler' and the best extension method overload 'HttpClientBuilderExtensions.ConfigurePrimaryHttpMessageHandler(IHttpClientBuilder, Func<HttpMessageHandler>)' requires a receiver of type 'IHttpClientBuilder'

This is because .NET does not allow you to override the default primary message handler for the default client. If you need to specify a proxy at this level, you need to use named instances.

Again, to make sense of the above you need to read our guide on HttpClient factories, as it really is a new way of using HttpClient that only got introduced in the past couple of years. Our guide can be found here : https://dotnetcoretutorials.com/2018/05/03/httpclient-factories-in-net-core-2-1/


The past few days I’ve been setting up a SonarQube server to do some static analysis of code. For the most part, I was looking for SonarQube to tell us if we had some serious vulnerabilities lurking anywhere deep in our codebases, especially some of the legacy code that was written 10+ years ago. While this sort of static analysis is pretty limited, it can pick up things like SQL Injection, XSS, and various poor security configuration choices that developers sometimes make in the heat of the moment.

And how did we do? Well.. Actually pretty good! We don’t write raw SQL queries, instead preferring Entity Framework and LINQ, which for the most part protects us from SQL injection. And most of our authentication/authorization mechanisms are out of the box .NET/.NET Core components, so if we have issues there… Then the entire .NET ecosystem has bigger problems.

What we did find though was millions of non-critical warnings such as this :

Unused "using" should be removed

I’ll be the first to admit, I’m probably more lenient than most when it comes to warnings such as this. It doesn’t make any difference to the code, and you rarely notice it anyway. Although I have worked with other developers who *always* pull things like this up in code reviews, so each to their own!

My problem was, SonarQube is right, I probably should remove these. But I really didn’t want to manually go and open each file and remove the unused using statements. I started searching around and lo and behold, Visual Studio has a feature inbuilt to do exactly this!

If you right click a solution or project in Visual Studio, you should see an option to “Analyze and Code Cleanup” like so :

I recommend first selecting “Configure Code Cleanup” from this sub menu so that we can configure exactly what we want to clean up :

As you can see from the above screenshot, for me I turned off everything except removing unnecessary usings and sorting them. You can of course go through the other available fixers and add them to your clean up Profile before hitting OK.

Right clicking your Solution/Project, and selecting “Analyze and Code Cleanup” then “Run Code Analysis” will run these fixers over your entire project or solution, instantly letting you pass this pesky rule and cleaning up your code at the same time!

Now I know, this isn’t exactly a big deal removing unused usings. I think for me, it was more the fact I didn’t even know this tool existed right there in vanilla Visual Studio.


When it comes to developers talking about Entity Framework (and I’m including EF Core in this), I find there is very much two camps. There are the developers who find EF intuitive and easy to use, and those who say it is absolute hell to manage. But what I started finding was that it all depended on the “approach” to using Entity Framework. By approach, I mean whether to use Code First, Database First, or Model First.

There is some historical damage mixed in here because even though I include Model First in this list, the support for this approach was dropped some years back. But people who endured the days of using that annoying WYSIWYG editor in Visual Studio to manage their database will, to this day, tell you that EntityFramework is a pile of junk and they will never touch it, even though things like Code First have improved leaps and bounds from where they were.

I’m not going to play off these approaches against each other for the most part, but I want to explain what each one actually means, and why people’s perception of Entity Framework can be very different depending on their usage.

What Is Entity Framework Code First?

If you’ve used EF Core or later versions of Entity Framework, you’ve probably used “Code First”. The “Code First” approach refers to scaffolding out your database using C# classes, and then using Entity Framework to “migrate” your database using commands such as :

Update-Database

or

dotnet ef database update

Because everything is managed from C# code, when you want additional SQL constructs such as indexes, constraints or even just complex foreign keys, you must use either the Data Annotation attributes or the Entity Framework Fluent Configuration. In fact, we have a pretty good guide (If we do say so ourselves) on some really clean ways to approach configuring your database using a Code First approach here : https://dotnetcoretutorials.com/2020/06/27/a-cleaner-way-to-do-entity-configuration-with-ef-core/
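
As a rough sketch of what that looks like (the Post and BlogContext types below are made up purely for illustration), the entire schema, including a unique index, lives in C# :

using Microsoft.EntityFrameworkCore;

public class Post
{
    public int Id { get; set; }
    public string Slug { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Post> Posts { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //Fluent configuration covers anything a data annotation can't express cleanly.
        modelBuilder.Entity<Post>()
            .HasIndex(p => p.Slug)
            .IsUnique();
    }
}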

The main thing to understand about Code First is that because you are scaffolding the database using C# code, it works well for developers, but not for DBAs or Architects. It can be hard to map out a database schema to the nth degree when everything is written in C#. Additionally, if you have DBAs who love things like stored procedures or views, while these are doable in Code First, it can become unwieldy very fast.

The benefits of using a Code First approach are numerous :

  • Your code is always talking to a database schema that it has control over, therefore there is rarely a “mismatch” in column types, table definitions etc.
  • You have an out of the box way to manage database migrations (So no dodgy “run this script before go live” type go live run sheets)
  • Almost anyone can scaffold a database if they know C#, even without having too much knowledge of SQL in general (Although.. this can be a negative)
  • A code first approach works across different databases such as Postgres, MySQL etc

In general, you’re probably going to use a Code First approach for almost every new .NET project these days. Even if the end goal is to manage the database in a different way post go live, it’s such a rapid prototyping tool that scales pretty well, so you’re going to want to be using it.

What Is Entity Framework Database First?

Database First is an interesting approach because it used to be a lot more prevalent. Whether it was called database first before, or just datasets in general if you come from a WinForms background, it simply refers to getting your application to talk to a database that already exists.

I’ll give you an example. Say there is a database for a production system that already has 50+ tables, and each of those tables has your usual columns, foreign keys, and constraints. If we took a code first approach, there are a couple of problems :

  • We have to manually type out each definition of the tables we want to use, and we have to do our best to line them up perfectly.
  • We can’t use code first migrations because the database already exists and is likely to be under management in a different way. While not a problem, one of the huge boons of using Code First is lost on us.

So we *could* still write a code first approach, but it’s somewhat lying to ourselves that the code came first.

So instead, we can use a Database First approach. All this does is create what we would have done manually in code, but it does it by inspecting the database and essentially generating the classes for us. If you used certain ORMs back in the day, this was fairly common: an “auto generated” database ORM that you could re-generate each time the database changed.

To do this, all we have to do is run the following command :

Scaffold-DbContext "MyConnectionStringHere"

And Entity Framework will do the rest for us. We can run this command again over time to pick up the latest changes to our database schema if we need to. Because of this, it is not advisable to ever edit the classes that Entity Framework generates for us as these will be overwritten the next time a scaffolding is run.
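
If you prefer the dotnet CLI over the Package Manager Console, the equivalent command looks roughly like this (swap the provider package if you aren’t on SQL Server) :

dotnet ef dbcontext scaffold "MyConnectionStringHere" Microsoft.EntityFrameworkCore.SqlServer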

The official documentation from Microsoft has moved the “Database First” approach under a page titled “Reverse Engineering” which is pretty apt : https://docs.microsoft.com/en-us/ef/core/managing-schemas/scaffolding

When we go and talk about Model First approaches shortly, it should also be noted that you could use a Database First approach even when you were using a Model as well. All Database First refers to is auto generating your schema in Visual Studio, whether that be C# classes or an EDMX file.

What Is Entity Framework Model First?

If you’ve ever talked to someone who used Entity Framework five-ish years ago, they will talk about the “designer” and the “edmx file” and how it was a slow, laggy, buggy piece of software. And they weren’t wrong.

Here’s what it looked like :

It was essentially a designer for building out a database schema, from inside Visual Studio. The problem was that the designer was very very laggy when you got over a couple of dozen tables. This designer however was all built over this thing called an “edmx” file. An edmx was essentially this huge XML file that described the database schema. It looked a bit like so :

What would often happen is that instead of using the laggy designer to make changes, people would edit this XML directly to make changes, then if they could, open the designer check if it looked right before checking in their change. It was often a joke around the office if you accidentally opened the designer because at best, your Visual Studio could be frozen for a few minutes, and at worst it would be frozen for several minutes then just crash.

The EDMX file wasn’t the only file of the Model First approach. Based on the EDMX, Visual Studio would also auto generate other C# files to tie into your code that probably wouldn’t be too dissimilar from the code files we use when doing a Code First approach, except they were an auto generated mess.

Other than the laggy designer, there were some other issues with a Model First approach :

  • It did not do Database Migrations, you often had to use a third party tool for that.
  • When working with a larger team, because the entire database is in one big EDMX file, merge conflicts were rife and a nightmare to work out
  • Some of the auto generated files wouldn’t happen automatically, you would have to right click your EDMX and then select “Generate” etc. If you forgot this, then things went haywire.

In general, the Model First approach died off some years back and was never added back to EF Core, so it’s unlikely you’ll run into it these days. But it’s still good to know that when people are complaining about how bad Entity Framework is with the “edmx” file, they are talking about a legacy offering, not today’s Code First Entity Framework.

Which Approach To Use?

These days, it’s a fairly simple equation.

  • If you are starting a new project, with a new database, use EF Core Code First
  • If you are starting a new project, but there is an existing database you want to talk to, use EF Core Database First.

When it comes to unit testing, I’m a lover of mocks. I can’t get enough of them. In fact, I wrote an entire “TestingContext” helper to help me “auto mock” my classes, which I think you should definitely check out if you haven’t already! I can’t think of a unit testing project that doesn’t utilize the .NET library Moq in some way.

But! In very rare cases, I use stubs, and in even rarer cases (But still sometimes), I use fakes. Often when you are trying to test “framework” code such as creating a custom ASP.NET Core Filter or Middleware, you have no option but to use stubs and/or fakes.

Now the thing is, I don’t want to get into the war of mocks vs stubs vs fakes, because frankly… Mocks win hands down for me (Although if you google this, you’ll find thousands of people that disagree), but I do want to touch on exactly what the difference between the three is. When I talk to developers who say “I love fakes, I always use fakes”, and then I start probing them to elaborate, or maybe check their code, what I find is that either they are using stubs/mocks instead, or maybe even a combination of all three. And so hopefully this guide can help you understand what each of these “test doubles” actually are, and then you can go onward to your holy war with your peers over the one you don’t like!

What Is A Fake?

A fake is an object used in place of a concrete implementation that has some “smarts” to it. Usually a shortcut implementation that makes it useful across different unit tests, but stops short of being an integration test.

By far the most common example I see of this is in data repositories. Let’s say I have a standard SQL Server repository that looks like so :

public interface IUserRepository
{
    void Insert(object user);
    List<object> GetAllUsers();
}

public class UserRepository : IUserRepository
{
    public List<object> GetAllUsers()
    {
        //Go to the database and fetch all users. 
        throw new NotImplementedException(); //Placeholder so this sketch compiles.
    }

    public void Insert(object user)
    {
        //Insert a user into the database here. 
    }
}

You’ll have to use your imagination when it comes to the actual implementation part, but essentially we have two methods that call into a database. We have the Insert which adds users, and the GetAllUsers which returns all users in the database.

When it comes to unit testing a class that may utilize IUserRepository, for example a UserService, we have a bit of a problem. We don’t want our unit tests reaching out to the database, and frankly we don’t really care about the implementation of UserRepository. So we create a fake :

public class FakeUserRepository : IUserRepository
{
    private List<object> _users = new List<object>();

    public List<object> GetAllUsers()
    {
        return _users;
    }

    public void Insert(object user)
    {
        _users.Add(user);
    }
}

In our fake, we actually take the inserted user, and add it to an internal list. When GetAllUsers is called, we return that same list. Now whenever a unit test requires a call to IUserRepository, we can substitute in the FakeUserRepository, and instantly things just “work”.

The main thing here is to understand that the Insert affects the GetAllUsers call. It’s a “real” implementation that actually performs like a repository would, just without an actual database behind the scenes.
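
As a quick usage sketch (the UserService class and its AddUser method here are hypothetical), the fake simply slots in wherever the real repository would normally be injected :

[Fact]
public void AddUser_AddsUserToRepository()
{
    var fakeRepository = new FakeUserRepository();
    var userService = new UserService(fakeRepository); //Hypothetical class under test.

    userService.AddUser(new { Name = "Wade" });

    //Because the fake has "smarts", the inserted user shows up in GetAllUsers.
    Assert.Single(fakeRepository.GetAllUsers());
}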

What Is A Stub?

A stub is an implementation that returns hardcoded responses, but does not have any “smarts” to it. There is no tying together of calls on the object, instead each method just returns a pre-defined canned response.

Let’s look at how we would create a stub for the above :

public class StubOneUserRepository : IUserRepository
{
    public List<object> GetAllUsers()
    {
        return new List<object>();
    }

    public void Insert(object user)
    {
        //Do nothing
    }
}

So it’s somewhat similar to our fake but… Not quite. See how the insert does not affect the GetAllUsers, and the GetAllUsers itself returns a canned response with nothing in it. Anything I do to this object during a test doesn’t change how it functions.

What we generally find is that Stubs are used to fulfil a condition inside the code, but not to test the functionality. For example if my code calls “Insert” on the repository, but I don’t really care about what happens with that data for my particular test, then a stub makes sense rather than the overhead of filling out how a fake would work.

Using a repository makes this example harder than it should be because Repositories invariably should return dynamic data to test various conditions in your code. So let me use another example that would be more likely to require a stub in the real world.

Let’s say I have an interface to tell me if a user is “authenticated” or not. It would look like so :

public interface IUserAuthenticatedCheck
{
    bool IsUserAuthenticated();
}

Now let’s say for my tests, I just always need the user to be authenticated, maybe to fulfil some underlying framework condition. I could use a stub like so :

public class StubUserAuthenticatedCheckTrue : IUserAuthenticatedCheck
{
    public bool IsUserAuthenticated() => true;
}

So no smarts on whether a user should be authenticated, no changing of the value, just a straight “always return true” method. This is what stubs are great at.

What Is A Mock?

A mock is a pre-programmed object that can have dynamic responses/behaviour defined as part of the test. They do not need to be concrete in implementation and (generally) don’t need behaviour to be shared amongst tests.

So where would we use Mocks? Well it’s anywhere you want to be relatively dynamic, for that particular test, to satisfy conditions. As an example, let’s say I am writing a test that calls out to the following interface :

public interface IShopService
{
    bool CheckShopIsOpen(int shopId);
}

So all we are doing is checking if a shop is open or closed. The actual implementation class for this may call out to a database, or a webservice/api of some sort, but we don’t want to do that as part of our unit test.

If we used fakes here, we would need to add some dummy property to be able to say whether a shop should be open or closed. Maybe something like this :

public class FakeShopService : IShopService
{
    public bool ShouldShopBeOpen { get; set; }

    public bool CheckShopIsOpen(int shopId)
    {
        return ShouldShopBeOpen;
    }
}

Eh, not great in my eyes. We are adding a new property just to be able to control whether a shop is open or closed for the test.

If we used stubs, we would have to hardcode a response of true/false right into the concrete class. Maybe even something like this :

public class StubShopService : IShopService
{
    private Dictionary<int, bool> _shops = new Dictionary<int, bool>
    {
        { 1, true },
        { 2, false }
    };

    public bool CheckShopIsOpen(int shopId)
    {
        return _shops[shopId];
    }
}

This works with a predefined list of ids, and whether a shop will be open or closed. It’s nice, but if you use this in a test and pass in the id of 1, it’s not immediately clear from the test why you got the response of true right?

So how would you solve this using mocks? You would write something like this right there in your test (I am using the Moq library for this!) :

var _mockShopService = new Mock<IShopService>();
_mockShopService.Setup(x => x.CheckShopIsOpen(1)).Returns(true);

This is so clear when it’s right there in your test: when our mock is used and CheckShopIsOpen is called with an id of 1, we return true. It’s also specific to this test, and doesn’t force us to hardcode anything anywhere, or create concrete classes.

And when we have a test that requires the shop id of 1 to be false..

_mockShopService.Setup(x => x.CheckShopIsOpen(1)).Returns(false);

Pretty easy stuff!

When To Use Mocks vs Fakes vs Stubs

I know I said I wasn’t going to do this – I just wanted to explain the differences between the three methods, but.. I feel I have to at least throw my 2 cents in here on when to use each one. And I’ll let the comments on this post fall as they may!

  • Use Fakes when you want a re-usable concrete implementation that works similar to the real implementation with re-usability across tests (e.g. In Memory Database)
  • Use Stubs when you want a hardcoded response/implementation that will be re-used across tests
  • Use Mocks when you need dynamic responses for individual tests, that may not necessarily require re-usability across tests

If I’m being honest.. I use Mocks even for things like Repositories; it just makes tests very readable and very easy to follow when I’m explicitly outlining what’s going to happen. But maybe that’s just me!

For now, hopefully you understand a little better what each of these test doubles are used for and maybe a little more of a nudge on when to use each one.


Ever since I started using constructor dependency injection in my .NET/.NET Core projects, there has been essentially a three step process to adding a new dependency into a class.

  1. Create a private readonly field in my class, with an underscore prefix on the variable name
  2. Edit the constructor of my class to accept the same type, but name the parameter without a prefix
  3. Set the private readonly field to be the passed in parameter in the constructor

In the end, we want something that looks like this :

public class UserService
{
    private readonly IUserRepository _userRepository;

    public UserService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }
}

Well at least, this used to be the process. For a few years now, I’ve been using a nice little “power user” trick in Visual Studio to do the majority of this work for me. It looks a bit like this :

The feature of auto creating variables passed into a constructor is actually turned on by default in Visual Studio; however, the private readonly with an underscore naming convention is not (Which is slightly annoying because that convention is now in Microsoft’s own standards for C# code!).

To add this, we need to do the following in Visual Studio. The exact path to the setting is :

Tools => Options => Text Editor => C# => Code Style => Naming

That should land you on this screen :

The first thing we need to do is click the “Manage naming styles” button, then click the little cross to add. We should fill it out like so :

I would add that in our example, we are doing a camelCase field with an underscore prefix, but if you have your own naming conventions you use, you can also do it here. So if you don’t use the underscore prefix, or you use kebab casing (ew!) or snake casing (double ew!), you can actually set it up here too!

Then on the naming screen, add a specification for Private or Internal, using your _fieldName style. Move this all the way to the top :

And we are done!

Now, simply add parameters to the constructor and move your mouse to the left of the code window to pop the Quick Actions option, and use the “Create and Assign Field” option.

Again, you can actually do this for lots of other types of fields, properties, events etc. And you can customize all of the naming conventions to work how you like.

I can’t tell you how many times I’ve been sharing my screen while writing code, and people have gone “What was that?! How did you do that?!”, and by the same token, how many times I’ve been watching someone else code and felt how tedious it is to slowly add variables one by one and wire them up!


This is a short post, but one I felt compelled to write after I saw some absolutely bonkers ways of validating emails in a .NET Core API. I recently stumbled upon a war between two developers who were duking it out on a pull request/code review. It all centred around the “perfect” regex for validating an email.

And you may be thinking, surely there’s one simple regex everyone agrees on? Well.. Apparently not. Just check out this rather verbose stackoverflow answer here on the subject : https://stackoverflow.com/a/201378/177516

The answer given has the regex looking a bit like so :

(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9]))\.){3}(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9])|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])

…not the most concise.

Another example might be if we take a look at how Angular validates email. Also with a Regular Expression found here : https://github.com/angular/angular/blob/master/packages/forms/src/validators.ts#L98

And it looks a bit like so :

^(?=.{1,254}$)(?=.{1,64}@)[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$

A little bit different, but still a pretty massive regex pattern. So, given these options (And probably many many more), which should we copy and paste into our validation for our model?

public class CreateAccountViewModel
{
	[RegularExpression("SoMeCrAzYReGeX")]
	public string Email { get; set; }
}

The answer is none of the above. .NET Core (And .NET Framework) have an inbuilt validator for emails like so :

public class CreateAccountViewModel
{
	[EmailAddress]
	public string Email { get; set; }
}

Nice and simple without much fuss. But the question then is, what Regex does .NET Core/.NET 5+ use out of the box? The answer is.. It doesn’t use regex at all!

The logic is actually rather simple :

  • Does the value have an @ symbol?
  • Is the @ symbol in any position but the first or last index of the string

No regex required!
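
In other words, something roughly along the lines of the sketch below. This is a simplified illustration of the idea, not the exact framework source :

static bool LooksLikeAnEmail(string value)
{
    //The @ symbol must exist, and it can't be the first or last character.
    int index = value.IndexOf('@');
    return index > 0 && index != value.Length - 1;
}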

Is this a perfect validator? Probably not, it probably allows through emails that aren’t quite up to spec with the email address RFC, but it does enough to catch the 99.99%. So next time people are arguing over the perfect email regex, maybe the answer is to not use regex at all!


Over the past year or so, I’ve been taking a look at the PostSharp framework. It started with looking at how complex multi-threaded scenarios can be handled with PostSharp Threading, then I took a dive into creating a mini-APM logger with PostSharp Logging. Since doing so, I’ve reached out to people I’ve worked with, both past and present, to ask if they’ve had experience using PostSharp, and the response was overwhelmingly positive. And the feedback kept coming back to the same thing time and time again, that PostSharp just takes care of super common scenarios without having to work from scratch every time.

And that brings me to the PostSharp Caching library. At some point, on every project, a developer is going to implement caching. And as the saying goes…

There are two hard things in Computer Science : cache invalidation, naming things, and off-by-one errors.

PostSharp unfortunately can’t help with the latter two, but it can make the “simple at first, complex over time” act of caching and cache invalidation an absolute breeze.

Caching With PostSharp Attributes

Before jumping to invalidating our cache, we need to actually have items cached to begin with. I’m going to set up a simple console application like so :

static async Task Main(string[] args)
{
    Stopwatch watch = new Stopwatch();
    watch.Start();
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("John")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
}

static async Task<string> SayHello(string name)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

I’ve added in a Task.Delay in our SayHello method, purely to demonstrate a slow function and make our caching more obvious. It’s meant to simulate maybe a very slow and unavoidable database call – e.g. Something you may want to cache in the first place!

When we run this, it’s pretty obvious what’s going to happen.

Hello Wade! - 1019
Hello Wade! - 2024
Hello John! - 3031
Hello Wade! - 4043

Between each call to SayHello, there’s about a 1 second delay.

Now let’s add PostSharp. First we need to use our Package Manager Console to install the PostSharp Caching library :

Install-Package PostSharp.Patterns.Caching

Then we just have to add two lines :

static async Task Main(string[] args)
{
    //Set a default backend (In our case a memory cache, but more on that later)
    CachingServices.DefaultBackend = new MemoryCachingBackend();

    Stopwatch watch = new Stopwatch();
    watch.Start();
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("John")) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
}

//Cache this method
[Cache]
static async Task<string> SayHello(string name)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

We just have to set up what our caching backend will be (In this case, just a memory cache). And then do nothing more but add the [Cache] attribute to any method we wish to cache the result of.

Running this again :

Hello Wade! - 1077
Hello Wade! - 1080
Hello John! - 2102
Hello Wade! - 2102

Perfect! We can see that the first attempt to SayHello with the param “Wade” took 1 second, but each subsequent time was almost instant (From the cache). Interestingly though, we can see that when we pass in a different name, in our case “John”, it took the full second. That’s because PostSharp Caching takes into account the parameters of the method, and creates unique caches based on the input. Pretty impressive stuff, but it doesn’t stop there!

PostSharp Caching Is Highly Configurable

Of course, the above is just a simple example. We can extend this out to include things like absolute expiration in minutes :

[Cache(AbsoluteExpiration = 5)]

Or sliding expiration :

[Cache(SlidingExpiration = 5)]

Or you can even have a hierarchy where you can place a CacheConfiguration on an entire class :

[CacheConfiguration(AbsoluteExpiration = 5)]
public class MyService
{
    [Cache]
    public string HelloWorld()
    {
        return "Hello World!";
    }
}

What I’m trying to get at is that things are highly configurable. You can even configure caching (Including disabling caching completely) at runtime using Caching Profiles like so :

//Make all caching default to 5 minutes. 
CachingServices.Profiles.Default.AbsoluteExpiration = TimeSpan.FromMinutes(5);
//On second thought, let's just disable all caching for the meantime. 
CachingServices.Profiles.Default.IsEnabled = false;

So stopping short of giving you the laundry list of configuration options, I really just want to point out that this thing is super configurable. Every time I thought “Well what about if you want to….” the documentation was there to give me exactly that.

Controlling Cache Keys

As we’ve seen earlier, our cache key is automatically built around the parameters of the method we are trying to cache. It’s actually a little more complicated than that: it uses the enclosing type (e.g. a ToString() on the enclosing class), the method name, and the method parameters. (And even then, it actually uses more, but let’s just stick with that for now!).

Imagine I have code like so :

static async Task Main(string[] args)
{
    //Set a default backend (In our case a memory cache, but more on that later)
    CachingServices.DefaultBackend = new MemoryCachingBackend();

    Stopwatch watch = new Stopwatch();
    watch.Start();
    Console.WriteLine((await SayHello("Wade", Guid.NewGuid())) + " - " + watch.ElapsedMilliseconds);
    Console.WriteLine((await SayHello("Wade", Guid.NewGuid())) + " - " + watch.ElapsedMilliseconds);
}

[Cache]
static async Task<string> SayHello(string name, Guid random)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

Yes I know it’s a rather obtuse example! But the fact is the random guid being passed into the method is breaking caching. When I run this I get :

Hello Wade! - 1079
Hello Wade! - 2091

But if it’s a parameter that I really don’t care about as part of caching, I want to be able to ignore it. And I can! With another attribute of course :

[Cache]
static async Task<string> SayHello(string name, [NotCacheKey]Guid random)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

And running this I get :

Hello Wade! - 1079
Hello Wade! - 1082

While this is a simple example, I just wanted to bring up the fact that while PostSharp Caching is super easy to implement and can work right out of the box, it’s also crazy extensible and there hasn’t been a scenario yet that I’ve been “stuck” with the library not being able to do what I want.

Invalidating Cache

For all the talk my intro did about invalidating cache being hard.. it’s taken us a while to get to this point in the post. And that’s because PostSharp makes it a breeze!

Your first option is simply annotating the methods that update/insert new values (e.g. where you would normally invalidate cache). It looks a bit like this :

static async Task Main(string[] args)
{
    //Set a default backend (In our case a memory cache, but more on that later)
    CachingServices.DefaultBackend = new MemoryCachingBackend();

    Stopwatch watch = new Stopwatch();
    watch.Start();
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);
    UpdateHello("Wade");
    Console.WriteLine((await SayHello("Wade")) + " - " + watch.ElapsedMilliseconds);

    Console.WriteLine((await SayHello("John")) + " - " + watch.ElapsedMilliseconds);
    UpdateHello("Wade");
    Console.WriteLine((await SayHello("John")) + " - " + watch.ElapsedMilliseconds);
}

[Cache]
static async Task<string> SayHello(string name)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

[InvalidateCache(nameof(SayHello))]
static void UpdateHello(string name)
{
    //Do something here. 
}

A long example, but I wanted to point something out: when you ask to invalidate the cache, it takes the parameters of your update method and matches them against the keys of your cached method. So in this case, because I update the name “Wade” twice, it only ever clears that cache key. It doesn’t simply wipe the entire cache for the entire method.

So our output becomes :

Hello Wade! - 1062
Hello Wade! - 2092
Hello John! - 3111
Hello John! - 3111

But maybe the whole attribute thing isn’t for you. You can actually invalidate cache imperatively also like so :

[Cache]
static async Task<string> SayHello(string name)
{
    await Task.Delay(1000);
    return $"Hello {name}!";
}

[InvalidateCache(nameof(SayHello))]
static void UpdateHello(string name)
{
    //Do some work

    //Now invalidate the cache. 
    CachingServices.Invalidation.Invalidate(SayHello, name);
}

What I love about this is that I’m passing a reference to the method name, and the value of the parameter. Even though I’m invalidating a specific cache item, I’m still not having to work out what the actual cache key is. That means should the way in which we build the cache key for SayHello ever change, the invalidation of the cache never changes because it’s an actual strongly typed reference to the method.

Obviously the added benefit of invalidating cache imperatively is that you can both conditionally define logic on when you want to invalidate, and be able to invalidate cache from inside an existing method without adding attributes. That being said, the attributes are really useful if a lot of your logic is CRUD interfaces.

Distributed Caching Backends

I don’t want to go on too long about caching backends because, for the most part, you’re going to be using Redis for distributed cache, and a local in-memory cache if you just want something inside your one instance of your application.

But I did want to mention one feature that I’ve had to implement myself many times when using other caching libraries. That is, the combination of using a Redis Server for distributed cache, but *also* a local in memory cache. The reason being, if my application is horizontally scaled, of course I want to use Redis so that everyone can share the same cached entities. However, fetching from Redis incurs a level of overhead (namely network) to go and fetch the cache each time. So what I’ve generally had to implement myself, is keeping a local store of in-memory cache for faster local retrieval, as well as managing the Redis distributed cache for other machines.

But of course, PostSharp caching makes this a breeze :

RedisCachingBackendConfiguration redisCachingConfiguration = new RedisCachingBackendConfiguration();
redisCachingConfiguration.IsLocallyCached = true;

In general, the caching backends used in PostSharp are actually very extensible. While Redis and In-Memory serve me just fine, you could implement PostSharp’s interface to add in your own backend as well (For example SQL). Then you get the power of the attributes and all of the goodness of PostSharp in code, but with your own one off caching backend.

Who Is This Library For?

When I recommend libraries or products, I like to add a little bit around “Who is this for?”. Because not everything I use works in small start ups, or vice versa, in large enterprises. However, I really believe PostSharp Caching has the ability to fit into almost any product.

Early in my development career, I thought that I could develop everything myself. And all it really meant was taking away time from the features my customers really cared about, and devoting time to re-inventing the wheel on some boilerplate code. When I thought not only about putting a dollar amount on my time, but the opportunity cost lost of not having additional features, it suddenly made so much more sense to leave boilerplate code, like caching, to people that actually dedicate their time to getting it right. PostSharp Caching is one of those products, one that you can just plug in, have it work right away, and save your time for features that actually matter.


This is a sponsored post, however all opinions are mine and mine alone.


I recently ran into a problem where my .NET API was returning an error 415. The full error gives you a hint as to what the actual issue is : “415 Unsupported Media Type”, although this can lead you down a wild goose chase of stackoverflow answers.

In short, the API is expecting a post request with a particular content-type header, but the caller (Or maybe your front end) is using a different media type. There are actually some other gotchas that are incredibly frustrating to figure out in .NET too that can blow this entire thing up without you noticing. But let’s get on to it!

Check Your Front End Caller

The first thing we need to do is understand what our API is expecting. In general, APIs these days are expecting JSON requests. In some cases, they are expecting a classic “form post”. These are not the same thing! But whichever you use, your front end caller (Whether that be a javascript library or another machine), must attach the correct content type when making a request to the API.

For example, if I have a JSON API, and I make the following call from jQuery :

 $.ajax({
  url: '/myapiendpoint',
  type: 'POST'
});

This actually won’t work! Why? Because the default content-type of an Ajax request from jQuery is actually “application/x-www-form-urlencoded”, not “application/json”. This can catch you out if you aren’t familiar with the library and it’s making calls using the default content-type.

But of course, we can go the other way where you copy and paste someone’s helpful code from stackoverflow that forces the content-type to be JSON, but you are actually using form posts :

 $.ajax({
  url: '/myapiendpoint',
  contentType: 'application/json',
  type: 'POST'
});

Don’t think that you are immune to this just because you are using a more modern library. Every HttpClient library for javascript will have some level of default Content Type (Typically application/json), and some way to override it. Often, libraries such as HttpClient in Angular, or Axios, have ways to globally override the content-type and override it per request, so it can take some time working out exactly how the front end is working.

When it comes down to it, you may need to use things like your browser dev tools to explicitly make sure that your front end library is sending the correct content-type. If it is, and you are certain that the issue doesn’t lie there, then we have to move to debugging the back end.
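
And if the caller is another .NET application rather than a browser, the same rule applies. Here’s a quick sketch using HttpClient, where the endpoint URL is just a placeholder :

using System.Net.Http;
using System.Text;

var client = new HttpClient();

//StringContent lets us set the content-type explicitly, application/json in this case.
var content = new StringContent("{\"name\":\"Wade\"}", Encoding.UTF8, "application/json");

var response = await client.PostAsync("https://localhost:5001/myapiendpoint", content);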

Checking The Consumes Attribute

If we are sure that our front end is sending data with a content-type we are expecting, then it must be something to do with our backend. The first thing I always check is if we are using the Consumes attribute. They look a bit like this :

[Consumes("application/xml")]
public class TestController : ControllerBase
{
}

Now in this example, I’ve placed the attribute on the Controller, but it can also be placed directly on an action, or even added to your application startup to apply globally, so your best bet is usually a “Ctrl + Shift + F” to find all of them.

If you are using this attribute, then make sure it matches what the front end is sending. In 99% of cases, you actually don’t need this attribute except for self documenting purposes, so if you can’t find this in use anywhere, that’s normal. Don’t go adding it if you don’t already have it and are running into this issue, because often that will just complicate matters.

In the above example, I used [Consumes("application/xml")] as an example of what might break your API and return an error 415. If my front end has a content-type of json, and my consumes specifies I’m expecting XML, then it’s pretty clear there’s going to be a conflict of some kind we need to resolve.

Checking FromBody vs FromForm

Still not working? The next thing to check is if you are using FromBody vs FromForm correctly. Take this action for example :

public IActionResult MyAction([FromForm]object myObject)

This endpoint can only be called with non form post data. e.g. The content type must be “application/x-www-form-urlencoded”. Why? Because we are using the [FromForm] attribute.

Now if we change it to FromBody like so :

public IActionResult MyAction([FromBody]object myObject)

This can only accept “body” types of JSON, XML etc. e.g. Non form encoded content types. It’s really important to understand this difference because sometimes people change the Consumes attribute, without also changing how the content of the POST is read. This has happened numerous times for me, mostly when changing a JSON endpoint to just take form data because a particular library requires it.

ApiController Attribute

Finally, I want to talk about a particular attribute that might break an otherwise working API. In .NET Core and .NET 5+, there is an attribute you can add to any controller (Or globally) called “ApiController”. It adds certain conventions to your API, most notably it will check ModelState for you and return a nice error 400 when the ModelState is not valid.

However, I have seen APIs act very differently when it comes to model binding, because of this attribute. It adds some nice “conventions” for you in that it will try and infer the FromBody, FromRoute, FromQuery etc for you. Generally speaking, I don’t see this breaking APIs, and for the most part, I use it everywhere. But if you are comparing two projects with the exact same controller and action setup, and one works and one doesn’t, it’s worth checking if one implements the ApiController attribute. Again, “Ctrl + Shift + F” is your friend here to find anywhere that it may be getting applied.
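
For reference, here’s a minimal sketch of the attribute in use. With [ApiController] applied, a complex parameter like the hypothetical PersonModel below is inferred as [FromBody] without it being spelled out :

[ApiController]
[Route("api/[controller]")]
public class PeopleController : ControllerBase
{
    //No [FromBody] needed here, the ApiController attribute infers it for complex types.
    [HttpPost]
    public IActionResult Create(PersonModel person) => Ok(person);
}

public class PersonModel
{
    public string Name { get; set; }
}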
