These days, end-to-end browser testing is a pretty standard practice amongst modern development teams. Whether that's with Selenium, WebDriver.IO or Cypress, realistically as long as you are getting tests up and running, you're doing well.

Over the past couple of years, Cypress has become a de facto end-to-end testing framework. I don't think in the last 5 years I've been at a company that hasn't at least given it a try and built out some proof of concepts. And look, I like Cypress, but after some time I started getting irritated with a few caveats (many of which are listed by Cypress themselves here).

Notably :

  • The “same origin” URL limitation (Essentially you must be on the same root domain for the entire test) is infuriating when many web applications run some form of SSO/OAuth, even if using something like Auth0 or Azure AD B2C. So you’re almost dead in the water immediately.
  • Cypress does not handle multiple tabs
  • Cypress cannot run multiple browsers at the same time (So testing some form of two way communication between two browsers is impossible)
  • The "Promise" model and chaining of steps in a test seemed ludicrously unwieldy. And when trying to get more junior team members to write tests, things quickly descended into a "pyramid of doom".

As I'll talk about in a later post, the biggest thing was that we wanted a simple model for users writing tests in Gherkin-style BDD language. We just weren't getting that with Cypress, and while I'm sure people will tell me all the great things Cypress can do, I went out looking for an alternative.

I came across Playwright, a cross-platform, cross-browser automation testing tool that does exactly what it says on the tin with no extras. Given my list of issues above with Cypress, I did have to laugh that this is a very prominent quote on their homepage :

Multiple everything. Test scenarios that span multiple tabs, multiple origins and multiple users. Create scenarios with different contexts for different users and run them against your server, all in one test.

They definitely know which audience they are playing up to.

Playwright has support for tests written in NodeJS, Java, Python and of course, C# .NET. So let’s take a look at the latter and how much work it takes to get up and running.

For an example app, let’s assume that we are going to write a test that has the following test scenario :

Given I am on https://www.google.com
When I type dotnetcoretutorials.com into the search box
And I press the button with the text "Google Search"
Then the first result is domain dotnetcoretutorials.com

Obviously this is a terrible example of a test as the result might not always be the same! But I wanted to just show a little bit of a simple test to get things going.

Let’s get cracking on a C# test to execute this!

Now the thing with Playwright is, it’s actually just a C# library. There isn’t some magical tooling that you have to download or extensions to Visual Studio that you need to get everything working nicely. You can write everything as if you were writing a simple C# unit test.

For this example, let’s just create a simple MSTest project in Visual Studio. You can of course create a test project with NUnit, XUnit or any other testing framework you want and it’s all going to work much the same.
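If you'd rather do this from the command line, something like the following gets you the same starting point (the project name here is just an example) :

dotnet new mstest -n PlaywrightDemo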

Next, let's add the Playwright NuGet package with the following command in our Package Manager Console. Because we are using MSTest, let's add the MSTest-specific NuGet package, as it has a few helpers that speed things up later (realistically, you don't actually need this and can install Microsoft.Playwright instead if you wish).

Install-Package Microsoft.Playwright.MSTest
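One extra note : depending on your version of Playwright, you may also need to download the browser binaries once before your first test run. One way of doing that is running the PowerShell script that the Playwright package drops into your build output after a build (the path below assumes a .NET 6 output folder; adjust for your target framework) :

pwsh bin/Debug/net6.0/playwright.ps1 install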

Now here’s my test. I’m going to dump it all here and then walk through a little bit on how it works.

[TestClass]
public class MyUnitTests : PageTest
{
    [TestMethod]
    public async Task WhenDotNetCoreTutorialsSearchedOnGoogle_FirstResultIsDomainDotNetCoreTutorialsDotCom()
    {
        //Given I am on https://www.google.com
        await Page.GotoAsync("https://www.google.com");
        //When I type dotnetcoretutorials.com into the search box
        await Page.FillAsync("[title='Search']", "dotnetcoretutorials.com");
        //And I press the button with the text "Google Search"
        await Page.ClickAsync("[value='Google Search'] >> nth=1");
        //Then the first result is domain dotnetcoretutorials.com
        var firstResult = await Page.Locator("//cite >> nth=0").InnerTextAsync();
        Assert.AreEqual("https://dotnetcoretutorials.com", firstResult);
    }
}

Here’s some things you may notice!

First, our unit test class inherits from “PageTest” like so :

public class MyUnitTests : PageTest

Why? Well, because the Microsoft.Playwright.MSTest package contains code to set up and tear down browser objects for us (and it also handles concurrent tests very nicely). If we didn't use this package, either because we are using a different test framework or we want more control, the set up code would look something like :

IPage Page;

[TestInitialize]
public async Task TestInitialize()
{
    //Spin up the Playwright driver and launch a visible (non-headless) Chromium browser
    var playwright = await Playwright.CreateAsync();
    var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false
    });
    //Create a fresh page (tab) for our test to drive
    Page = await browser.NewPageAsync();
}

So it’s not the end of the world, but it’s nice that the framework can handle it for us!
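If you do go down the manual route, you'd also want a matching teardown so browsers aren't left running between tests. A rough sketch (assuming you also keep a reference to the browser and Playwright objects created in TestInitialize) might look like :

IBrowser Browser;
IPlaywright PlaywrightDriver;

[TestCleanup]
public async Task TestCleanup()
{
    //Close the browser we launched, then dispose the Playwright driver itself
    await Browser.CloseAsync();
    PlaywrightDriver.Dispose();
}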

Next, what you'll notice is that there are no timeouts *and* all methods are async. By timeouts, I mean the bane of every Selenium developer's existence: "waiting" for things to show up on screen, especially in JavaScript-heavy web apps.

For example, take these two calls one after the other :

//Given I am on https://www.google.com
await Page.GotoAsync("https://www.google.com");
//When I type dotnetcoretutorials.com into the search box
await Page.FillAsync("[title='Search']", "dotnetcoretutorials.com");

In other frameworks we might have to :

  • Add some sort of arbitrary delay after the GoTo call to wait for the page to properly load
  • Write some code to check if a particular element is on screen before continuing (Like a WaitUntil type call)
  • Write some custom code for our Fill method that will poll or retry until we can find that element and type

Instead, Playwright handles all of that under the hood for you. It assumes that when you want to fill a textbox, it's eventually going to show, and so it will wait until it does. The fact that everything is async also means it's non-blocking, which is great if you are using Playwright locally since it's not going to freeze everything on your screen for seconds at a time!

The rest of the test should be pretty self-explanatory; we are using some typical selectors to fill out the Google search and find the top result, and our Assert is the standard one from our test framework (MSTest in this case). Playwright does come packaged with its own assertion framework, but you don't have to use it if you don't want to!
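If you did want to try Playwright's built in assertions instead, a rough equivalent of that last check (the exact matcher names may vary slightly between Playwright versions) would be :

await Expect(Page.Locator("//cite >> nth=0")).ToContainTextAsync("dotnetcoretutorials.com");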

And.. That’s it!

There are some extremely nifty tools that come packaged with Playwright that I'm going to write about in the coming days, including the ability to wire up with SpecFlow for some BDD goodness. What I will say so far is that I like the fact that Playwright has hit the right balance between being an automation test framework *and* being able to do plain browser automation (for example, taking a screenshot of a web page). Cypress clearly leans towards the testing side, and Selenium often doesn't feel like a testing framework as much as a scripting framework that you can jam into your tests. So far, so good!


I was recently asked by another developer about the difference between making a method virtual/override, and simply hiding the method using the new keyword in C#.

I gave him what I thought to be the best answer (For example, you can change the return type when using the “new” keyword), and yet while showing him examples I managed to bamboozle myself into learning something new after all these years.
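Just to illustrate that return type point with a throwaway example (these classes aren't part of the test below) :

class ParentReader
{
    public object GetValue() => new object();
}

class ChildReader : ParentReader
{
    //"new" hides the base method entirely, so we're free to change the return type.
    //An override would be forced to keep returning object.
    public new string GetValue() => "A string instead";
}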

Take the following code for instance, what will it print?

Parent childOverride = new ChildOverride();
childOverride.WhoAmI();
Parent childNew = new ChildNew();
childNew.WhoAmI();

class Parent
{
    public virtual void WhoAmI()
    {
        Console.WriteLine("Parent");
    }
}
class ChildOverride : Parent
{
    public override void WhoAmI()
    {
        Console.WriteLine("ChildOverride");
    }
}
class ChildNew : Parent
{
    public new void WhoAmI()
    {
        Console.WriteLine("ChildNew");
    }
}

At first glance, I assumed it would print the same thing either way. After all, I’m basically newing up the two different types, and in *both* cases I am casting it to the parent.

When casting like this, I like to tell junior developers that an object “Always remembers who it is”. That is, my ChildOverride can be cast to a Parent, or even an object, and it still remembers that it’s a ChildOverride.

So what does the above code actually print out?

ChildOverride
Parent

So our ChildOverride remembered who it was, and therefore which "WhoAmI" method to run. But our ChildNew did not… Kinda.

Why, you might ask? Well, it's actually quite simple if you think about it.

When you use the override keyword, you are overriding the base class method and there is a sort of "linkage" between the two methods. That is, it's known that the child method is an override of the base.

When you use the new keyword, you are saying that the two methods are in no way related. And that your new method *only* exists on the child class, not on the parent. There is no “linkage” between the two.

This is why when you cast to the parent class, the overridden method is known, and the “new” method is not.
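You can see this play out by calling the same ChildNew instance through different reference types :

ChildNew child = new ChildNew();
child.WhoAmI();           //Prints "ChildNew" - we're calling through a ChildNew reference
((Parent)child).WhoAmI(); //Prints "Parent" - the hidden method isn't part of Parent's virtual dispatch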

With all that being said, in many, many years of programming in C# I have seldom used the new keyword to hide methods like this. Not only is there very little reason to do so, but it breaks a core SOLID principle, the Liskov Substitution Principle : https://dotnetcoretutorials.com/2019/10/20/solid-in-c-liskov-principle/


Here’s another one from the vault of “Huh, I guess I never thought I needed that until now”

Recently I was trying to write a unit test that required me to paste in some known JSON to validate against. Sure, I could load the JSON from a file, but I really don’t like File IO in unit tests. What it ended up looking like was something similar to :

var sample = "{\"PropertyA\":\"Value\"}";

Notice those really ugly backslashes in there trying to escape my quotes. I get this a lot when working with JSON or even HTML string literals, and my main method for getting around it is loading the text into Notepad for a quick find and replace.

Well, starting from C# 11, you can now do the following!

var sample = """
{"PropertyA":"Value"}
""";

Notice those (ugly) three quote marks at the start and end. That’s the new syntax for “Raw String Literals”. Essentially allowing you to mix in unescaped characters without having to start backslashing like a madman.

Also supported are multi-line strings, like so :

var sample = """
{
    "PropertyA" : "Value"
}
""";

While this feature is officially coming in C# 11 later this year, you can get a taste for it by adding the following to your csproj file.

<LangVersion>preview</LangVersion>

I would say that editor support is not all that great right now. The very latest Visual Studio 2022 seems to handle it fine, however inside VS Code I did have some issues (but it still compiled just fine).

One final thing to note is the absence of the "tick" ` character. When I first heard about this feature, I just assumed it would use the tick character, as it's pretty synonymous with multi-line raw strings (at least in my mind). So I will include the discussion from Microsoft about whether they should use the tick character or not here : https://github.com/dotnet/csharplang/blob/main/proposals/raw-string-literal.md#alternatives

With the final decision being

In keeping with C# history, I think " should continue to be the string literal delimiter

I'm less sure on that. I can't say that three quote marks make any more sense than a tick, especially when it comes to moving between languages, so… we shall see if this lasts until the official release.


User Secrets (sometimes called Secret Manager) have been in .NET for quite some time now (I think since .NET Core 2.0). And I've always *hated* them. I felt like they encouraged developers to email/Slack/Teams individual passwords or even entire secret files to each other and call it secure. I also didn't really see a reason why developers would have secrets locally that were not shared with the wider team. For that reason, a centralized secret store such as Azure Key Vault always seemed preferable.

But over the past few months, I've grown to see their value… And in reality, I use User Secrets more for "this is how local development works on my machine" rather than actual secrets. Let's take a look at how User Secrets work and how they can be used, and then later on we can talk more about what I've been using them for.

Creating User Secrets via Visual Studio

By far the easiest way to use User Secrets is via Visual Studio. Right click your entry project and select “Manage User Secrets”.

Visual Studio will then work out the rest, installing any packages you require and setting up the secrets file! Easy!

You should be presented with an empty secrets file which we will talk about later.

Even if you use Visual Studio, I highly recommend at least reading the section below on how to do things from the command line. It will explain how things work behind the scenes and will likely answer any questions you have about what Visual Studio is doing under the hood.

Creating User Secrets via Command Line

We can also create User Secrets via the command line! To do so, we need to run the following command in our project folder :

dotnet user-secrets init

The reality is, all this does is generate a GUID and place it into your csproj file. It looks a bit like so :

<UserSecretsId>6272892f-ffcd-4039-b82a-b60874e91fce</UserSecretsId>

If you really wanted, you could generate this GUID yourself and place it here; there is nothing special about it *except* that between projects on your machine, the GUID must be unique. Of course, if you wanted projects to share secrets, you could use the same GUID across projects.

From here, you can now set secrets from the command line. It seems janky, but unfortunately you *must* create a secret via the command line before you can edit the secrets file in Notepad. That's just how it works. So in your project folder, run the following command :

dotnet user-secrets set "MySecret" "12345"

So.. what does this actually do? It's quite simple. On Windows, you will have the following file :

%APPDATA%\Microsoft\UserSecrets\{guid}\secrets.json

And on Linux :

~/.microsoft/usersecrets/{guid}/secrets.json

Opening this file, you’ll see something like :

{
    "MySecret" : "12345"
}

And from this point on you can edit this file in Notepad and forget the command line altogether. In reality, you could even create this file manually and never use the command line to add the initial secret at all. But I just wanted to note that the file *does not* exist until you add your first secret. And, as we will see later, if you have a user secrets GUID in your csproj file but you don't have the corresponding file, you'll actually get errors, which is a bit frustrating.
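As a side note, if you ever want to double check what's currently stored for a project without hunting down the file, the CLI can list it for you :

dotnet user-secrets list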

When you use Visual Studio, it essentially does all of this for you. But I still think it's worth understanding where these secrets get stored, and how it's just a local file on your machine. No magic!

Using User Secrets In .NET Configuration

User Secrets follow the same paradigm as all other configuration in .NET. So whether you are using appsettings.json, Azure Key Vault, environment variables etc., it's all the same, even with User Secrets.

If you installed via the command line, or you just want to make sure you have the right packages, you will need to install the following NuGet package :

Install-Package Microsoft.Extensions.Configuration.UserSecrets

The next step is going to depend on whether you are using .NET 6 minimal APIs or .NET 5 style Startup classes. Either way, by now you probably understand where you add configuration to your project.

For example, in my .NET 6 minimal API I have something that looks like so :

builder.Configuration.AddEnvironmentVariables()
                     .AddKeyVault()
                     .AddUserSecrets(Assembly.GetExecutingAssembly(), true);

Notice I'm passing "true" as the second argument to AddUserSecrets. That's because in .NET 6, User Secrets were made "required" by default, and by passing true we make them optional. This is important because if users have not set up the user secrets file on their machine yet, this whole thing will blow up if not made optional. The exception will be something like :

System.IO.FileNotFoundException: The configuration file 'secrets.json' was not found and is not optional

Now our User Secrets are being loaded into our configuration object. Ideally, we should place User Secrets *last* in our configuration pipeline because it means they will be the last override applied. And… That's it! Pretty simple. But what sort of things will we put in User Secrets?
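Before we get to that, one quick sanity check. Reading a secret back out is identical to reading any other configuration value; a minimal sketch in a .NET 6 minimal API would be :

//Returns "12345" on a machine that has set the secret, otherwise whatever the other configuration sources provide (or null)
var mySecret = builder.Configuration["MySecret"];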

What Are User Secrets Good For?

I think contrary to the name, User Secrets are not good for Secrets at all, but instead user specific configuration. Let me give you an example. On a console application I was working with, all but one developer were using Windows machines. This worked great because we had local file path configuration, and this obviously worked smoothly on Windows. However, the Linux user was having issues. Originally, the developer would download the project, edit the appsettings, and run the project fine. When it came time to check in work, they would have to quickly revert or ignore the changes in appsettings so that they didn’t get pushed up. Of course, this didn’t always happen and while it was typically caught in code review, it did cause another round of branch switching, and changes to be pushed.

Now we take that same example and put User Secrets over the top. Now the Linux developer simply edits their User Secrets to change the file paths to suit their machine. They never touch appsettings.json at all, and everything works just perfectly.

Take another team I work with. They had in the past worked with a shared remote database in Azure for local development. This was causing all sorts of headaches when developers were writing or testing SQL migrations; often their migrations would break other developers. Again, to not break the existing developers' flow, I created User Secrets and showed the team how they could override the default SQL connection string to instead use their local development machine, so we could slowly wean ourselves away from using a shared database.

Another example in a similar vein: the number of times I've had developers install SQL Server on their machine as a named instance such as .\SQLEXPRESS or .\MSSQLSERVER, rather than the default unnamed instance, is remarkable. It happens all the time. Again, while I'm trying to help these developers out, sometimes it's easier to just say: please add a user secret for your specific set up if you need it, and we can resolve the issue later. It almost becomes an unblocking mechanism where developers can actually control their own configuration.

What I don't think User Secrets are good for are actual secrets. For example, while creating an email integration, a developer put a SendGrid API key in their User Secrets. But what happens when they push that code up? Are they just going to email that secret to the developers that need it? It doesn't really make sense. So anything that needs to be shared should not be in User Secrets at all.


Imagine an Ecommerce system that generates a unique order number each time a customer goes to the checkout. How would you generate this unique number?

You could :

  • Use the primary key from the database
  • Select the MAX order number from the database and increment by one
  • Write a custom piece of code that uses a table with a unique constraint to “reserve” numbers
  • Use SQL Sequences to generate unique values

As you’ve probably guessed by the title of this post, I want to touch on that final point because it’s a SQL Server feature that I think has gone a bit under the radar. I’ll be the first to admit that it doesn’t solve all your problems (See limitations at the end of this post), but you should know it exists and what it’s good for. Half the battle when choosing a solution is just knowing what’s out there after all!

SQL Sequences are actually a very simple and effective way to generate unique, incrementing values in a thread-safe manner. That means as your application scales, you don't have to worry about two users clicking the "order" button on your ecommerce site at exactly the same time, and being given the exact same order number.

Getting Started With SQL Sequences

Creating a Sequence in SQL Server is actually very simple.

CREATE SEQUENCE TestSequence
START WITH 1
INCREMENT BY 1

Given this syntax, the sort of options available is probably obvious to you. You can, for example, always increment the sequence by 2 :

CREATE SEQUENCE TestSequence
START WITH 1
INCREMENT BY 2

Or you can even descend instead of ascend :

CREATE SEQUENCE TestSequence
START WITH 0
INCREMENT BY -1

And to get the next value, we just need to run SQL like :

SELECT NEXT VALUE FOR TestSequence

It really is that simple! Not only that, you can view Sequences in SQL Management Studio as well (Including being able to create them, view the next value without actually requesting it etc). Simply look for the Sequences folder under Programmability.

Entity Framework Support

Of course you are probably using Entity Framework with .NET/SQL Server, so what about first class support there? Well.. It is supported but it’s not great.

To recreate our sequence as above, we would override the OnModelCreating method of our DbContext (i.e. where we would put all of our configuration anyway) and add the following :

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.HasSequence("TestSequence", x => x.StartsAt(1).IncrementsBy(1));
}

That creates our sequence, but how about using it? Unfortunately, there isn't really a first-class way to "get" the next value (for example if you needed it in application code). Most of the documentation revolves around using it as a default value for a column, such as :

modelBuilder.Entity<Order>()
    .Property(o => o.OrderNo)
    .HasDefaultValueSql("NEXT VALUE FOR TestSequence");

If you are looking to simply retrieve the next number in the sequence and use it somewhere else in your application, unfortunately you will be writing raw SQL to achieve that. So not ideal.
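If you do need to go down that path, a rough sketch of what that raw SQL call might look like through the connection Entity Framework is already using (the method name here is just illustrative) :

public async Task<long> GetNextSequenceValueAsync(DbContext context)
{
    //Borrow the underlying connection from EF and fire the raw SQL ourselves
    var connection = context.Database.GetDbConnection();
    await context.Database.OpenConnectionAsync();

    using var command = connection.CreateCommand();
    command.CommandText = "SELECT NEXT VALUE FOR TestSequence";
    var result = await command.ExecuteScalarAsync();
    return Convert.ToInt64(result);
}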

With all of that being said however, if you use Entity Framework migrations as the primary way to manage your database, then the ability to at least create sequences via the ModelBuilder is still very very valuable.

Limitations

When it comes to generating unique values for applications, my usage of SQL Sequences has actually been maybe about 50/50. 50% of the time it’s the perfect fit, but 50% of the time there are some heavy “limitations” that get in the way of it being actually useful.

Some of these limitations that I’ve run into include :

When you request a number from a sequence, no matter what happens from that point on, the sequence is incremented. Why does this matter? Imagine you are creating a Customer in the database, and you request a number from the sequence and get "156". When you go to insert that Customer, a database constraint fails and the customer is not inserted. The next Customer will still be given "157", regardless of the previous insert failing. In short, sequences are not part of transactions and do not "lock and release" numbers. This is important because in some cases, you may not wish to have a "gap" in the sequence at all.

Another issue is that sequences cannot be "partitioned" in any way. A good example is a system I was building that required unique numbers *per year*. And each year, the sequence would be reset. Unfortunately, orders could be backdated and therefore simply waiting until Jan 1st and resetting the sequence was not possible. What would be required is a sequence created for say the next 10 years, with each of these managed independently. It's not too much of a headache, but it's still another bit of overhead.

In a similar vein, multi-tenancy can make sequences useless. If you have a single database in an ecommerce SaaS product supporting, say, 100 tenants, you cannot use a single sequence for all of them. You would need to create multiple sequences (one for each tenant), which again, can be a headache.

In short, sequences are good when you need a single incrementing number for your single-tenant database. Anything more than that and you're going to have to go with something custom, or deal with managing several sequences at once and selecting the right one at the right time with business logic.


In a previous post, we talked about how we could soft delete entities by setting up a DateDeleted column (read that post here : https://dotnetcoretutorials.com/2022/03/16/auto-updating-created-updated-and-deleted-timestamps-in-entity-framework/). But if you've ever done this (or used a simple "IsDeleted" flag), you'll know that it becomes a bit of a burden to always have the first line of your query go something like this :

dbSet.Where(x => x.DateDeleted == null);

Essentially, you need to remember to always be filtering out rows which have a DateDeleted. Annoying!

Microsoft have a great way to solve this with what’s called “Global Query Filters”. And the documentation even provides an example for how to ignore soft deletes in your code : https://docs.microsoft.com/en-us/ef/core/querying/filters

The problem with this is that the documentation only gives examples of how to do this for each entity, one at a time. If your database has 30 tables, all with a DateDeleted column, you're going to have to remember to add the configuration each and every time.

In previous versions of Entity Framework, we could get around this by using "Conventions". Conventions were a way to apply configuration to a broad set of entities based on.. well.. conventions. So for example, you could say "If you see an IsDeleted boolean field on an entity, we always want to add a filter for that". Unfortunately, EF Core does not have conventions (but they may land in EF Core 7). So instead, we have to do things in a bit of a rinky dink way.

To do so, we just need to override OnModelCreating with a bit of extra code (of course we can extract this out to helper methods, but for simplicity I'm showing where it goes in our DbContext).

public class MyContext: DbContext
{
	protected override void OnModelCreating(ModelBuilder modelBuilder)
	{
		foreach (var entityType in modelBuilder.Model.GetEntityTypes())
		{ 
			//If the actual entity is an auditable type. 
			if(typeof(Auditable).IsAssignableFrom(entityType.ClrType))
			{
				//This adds (In a reflection type way), a Global Query Filter
				//https://docs.microsoft.com/en-us/ef/core/querying/filters
				//That always excludes deleted items. You can opt out by using dbSet.IgnoreQueryFilters()
				var parameter = Expression.Parameter(entityType.ClrType, "p");
				var deletedCheck = Expression.Lambda(Expression.Equal(Expression.Property(parameter, "DateDeleted"), Expression.Constant(null)), parameter);
				modelBuilder.Entity(entityType.ClrType).HasQueryFilter(deletedCheck);
			}
		}
		
		base.OnModelCreating(modelBuilder);
	}
}

What does this do? When the model is being built, we loop over every entity type Entity Framework knows about, check whether it inherits from our Auditable base class (which carries the DateDeleted property), and if so build up an expression equivalent to p => p.DateDeleted == null and attach it as a Global Query Filter. Any query against those entities will then exclude soft deleted rows by default, and you can still opt out on a per query basis with IgnoreQueryFilters().

Of course, we can use this same loop to add other "Conventions" too. Things like adding an index to the DateDeleted field are possible via the OnModelCreating override.
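For example, still inside the same Auditable check in that loop, a rough sketch of adding such an index would be :

//Also index the DateDeleted column, since the global filter will use it on every query
modelBuilder.Entity(entityType.ClrType).HasIndex("DateDeleted");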

Now, whenever we query the database, Entity Framework will automatically filter our soft deleted entities for us!


In any database schema, it’s extremely common to have the fields “DateCreated, DateUpdated and DateDeleted” on almost every entity. At the very least, they provide helpful debugging information, but further, the DateDeleted affords a way to “soft delete” entities without actually deleting them.

That being said, over the years I've seen some pretty interesting ways in which these have been implemented. The worst, in my view, is writing C# code that manually sets the timestamp whenever an entity is created or updated. While simple, one clumsy developer later and you aren't recording any timestamps at all. It relies entirely on people "remembering" that they have to update the timestamp. Other times, I've seen database triggers used, which.. works.. but then you have another problem in that you're using database triggers!

There’s a fairly simple method I’ve been using for years and it involves utilizing the ability to override the save behaviour of Entity Framework.

Auditable Base Model

The first thing we want to do is actually define a “base model” that all entities can inherit from. In my case, I use a base class called “Auditable” that looks like so :

public abstract class Auditable
{
	public DateTimeOffset DateCreated { get; set; }
	public DateTimeOffset? DateUpdated { get; set; }
	public DateTimeOffset? DateDeleted { get; set; }
}

And a couple of notes here :

  • It’s an abstract class because it should only ever be inherited from
  • We use DateTimeOffset because we will then store the offset along with the timestamp. This is a personal preference but it just removes all ambiguity around "Is this UTC?"
  • DateCreated is not null (Since anything created will have a timestamp), but the other two dates are! Note that if this is an existing database, you will need to allow nullables (And work out a migration strategy) as your existing records will not have a DateCreated.

To use the class, we just need to inherit from it with any Entity Framework model. For example, let’s say we have a Customer object :

public class Customer : Auditable
{
	public int Id { get; set; }
	public string Name { get; set; }
}

So all the class has done is mean we don’t have to copy and paste the same 3 date fields everywhere, and that it’s enforced. Nice and simple!

Overriding Context SaveChanges

The next thing is maybe controversial, and I know there are a few different ways to do this. Essentially we are looking for a way to say to Entity Framework "Hey, if you insert a new record, can you set the DateCreated please?". There are things like Entity Framework hooks and a few NuGet packages that do similar things, but I've found the absolute easiest way is to simply override the save method of your database context.

The full code looks something like :

public class MyContext: DbContext
{
	public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
	{
		var insertedEntries = this.ChangeTracker.Entries()
							   .Where(x => x.State == EntityState.Added)
							   .Select(x => x.Entity);
		foreach(var insertedEntry in insertedEntries)
		{
			var auditableEntity = insertedEntry as Auditable;
			//If the inserted object is an Auditable. 
			if(auditableEntity != null)
			{
				auditableEntity.DateCreated = DateTimeOffset.UtcNow;
			}
		}
		var modifiedEntries = this.ChangeTracker.Entries()
				   .Where(x => x.State == EntityState.Modified)
				   .Select(x => x.Entity);
		foreach (var modifiedEntry in modifiedEntries)
		{
			//If the modified object is an Auditable. 
			var auditableEntity = modifiedEntry as Auditable;
			if (auditableEntity != null)
			{
				auditableEntity.DateUpdated = DateTimeOffset.UtcNow;
			}
		}
		return base.SaveChangesAsync(cancellationToken);
	}
}

Now your context may have additional code, but this is the bare minimum to get things working. What this does is :

  • Gets all entities that are being inserted, checks if they inherit from Auditable, and if so sets the DateCreated.
  • Gets all entities that are being updated, checks if they inherit from Auditable, and if so sets the DateUpdated.
  • Finally, call the base SaveChanges method that actually does the saving.

Using this, we are essentially intercepting when Entity Framework would normally save all changes, and updating all timestamps at once with whatever is in the batch.

Handling Soft Deletes

Deletes are a special case for one big reason. If we actually try and call delete on an entity in Entity Framework, it gets added to the ChangeTracker as… well… a delete. And to unwind this at the point of saving and change it to an update would be complex.

What I tend to do instead is on my BaseRepository (Because.. You’re using one of those right?), I check if an entity is Auditable and if so, do an update instead. The copy and paste from my BaseRepository looks like so :

public async Task<T> Delete(T entity)
{
	//If the type we are trying to delete is auditable, then we don't actually delete it but instead set it to be updated with a delete date. 
	if (typeof(Auditable).IsAssignableFrom(typeof(T)))
	{
		(entity as Auditable).DateDeleted = DateTimeOffset.UtcNow;
		_dbSet.Attach(entity);
		_context.Entry(entity).State = EntityState.Modified;
	}
	else
	{
		_dbSet.Remove(entity);
	}
	return entity;
}

Now your mileage may vary, especially if you are not using the Repository Pattern (Which you should be!). But in short, you must handle soft deletes as updates *instead* of simply calling Remove on the DbSet.

Taking This Further

What’s not shown here is that we can use this same methodology to update many other “automated” fields. We use this same system to track the last user to Create, Update and Delete entities. Once this is up and running, it’s often just a couple more lines to instantly gain traceability across every entity in your database!
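As a rough sketch of where that can go (the "who" fields and how you resolve the current user are entirely up to you; the names below are just illustrative) :

public abstract class Auditable
{
	public DateTimeOffset DateCreated { get; set; }
	public string CreatedBy { get; set; }
	public DateTimeOffset? DateUpdated { get; set; }
	public string UpdatedBy { get; set; }
	public DateTimeOffset? DateDeleted { get; set; }
	public string DeletedBy { get; set; }
}

Then inside the SaveChangesAsync override, you would set CreatedBy/UpdatedBy right alongside the timestamps in exactly the same way.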


Such is life on Twitter: I've been watching from afar as .NET developers argue about a particular upcoming C# 11 feature, Parameter Null Checks. It's actually just a bit of syntactic sugar to make it easier to throw argument null exceptions, but it's caused a bit of a stir for two main reasons.

  1. People don’t like the syntax full stop. Which I understand, but other features such as some of the switch statement pattern matching and tuples look far worse! So in for a penny in for a pound!
  2. It somewhat clashes with another recent C# feature of Nullable Reference Types (We’ll talk more about this later).

The Problem

First let’s look at the problem this is trying to solve.

I may have a very simple method that takes a list of strings (As an example, but it could be any nullable type). I may want to ensure that whatever the method is given is not null. So we would typically write something like :

void MyMethod(List<string> input)
{
    if(input == null)
    {
        throw new ArgumentNullException(nameof(input));
    }
}

Nothing too amazing here. If the list is null, throw an ArgumentNullException!

In .NET 6 (specifically .NET 6, not a specific version of C#), a shorthand was added to save a few lines. So we could now do :

void MyMethod(List<string> input)
{
    ArgumentNullException.ThrowIfNull(input);
}

There is no magic here. It’s just doing the same thing we did before with the null check, but wrapping it all up into a nice helper.

So what's the problem? Well.. There isn't one really. The only real issue is that should you have a method with many parameters, all of them nullable, and you want to throw an ArgumentNullException for each, you might have an additional few lines at the start of your method. I guess that's a problem to be solved, but it isn't too much of a biggie.

Parameter Null Checking In C# 11

I put C# 11 here, but actually you can turn on this feature in C# 10 by adding the following to your csproj file :

<EnablePreviewFeatures>True</EnablePreviewFeatures>

Now we have a bit of sugar around null checks by doing the following :

void MyMethod(List<string> input!!)
{
}

Adding the “!!” operator to a parameter name immediately adds an argument null check to it, skipping the need for the first few lines to be boilerplate null checks.

Just my personal opinion, it’s not… that bad. I think people see the use of symbols, such as ? or ! and they immediately get turned off. When using a symbol like this, especially one that isn’t universal across different languages (such as a ternary ?), it’s not immediately clear what it does. I’ve even seen some suggest just adding another keyword such as :

void MyMethod(notnull List<string> input)
{
}

I don’t think this is really any better to be honest.

Overall, it’s likely to see a little bit of use. But the interesting context of some of the arguments against this is….

Nullable Reference Types

For why I am totally wrong in all of the below, check this great comment from David. It explains why, while the below is true, it’s also not the full story and I am wrong in suggesting that you only need one or the other!

C# 8 introduced the concept of Nullable Reference Types. Before this, all reference types were nullable by default, and so the above checks were essentially required. C# 8 came along and gave a flag to say: if I want something to be nullable, I'll let you know, otherwise treat everything as non-nullable. You can read more about the feature here : https://dotnetcoretutorials.com/2018/12/19/nullable-reference-types-in-c-8/

The interesting point here is that if I switch this flag on (And from .NET 6, it’s switched on by default in new projects), then there is no need for ArgumentNullExceptions because either the parameter is not null by default, or I specify that it can be null (And therefore won’t need the check).
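For reference, that project wide switch lives in your csproj :

<Nullable>enable</Nullable>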

Just as an example, with nullable reference types switched on in code :

#nullable enable
void MyMethod(List<string> input)
{
    //Input cannot be null anyway. So no need for the check. 
}

void MyMethod2(List<string>? input)
{
    //Using ? I've specified it can be null, and if I'm saying it can be null...
    //I won't be throwing exceptions when it is null right? 
}

There are arguments that nullable reference types are a compile-time check whereas throwing an exception is a runtime check. But the reality is they actually solve the same problem just in different ways, and if there is a push to do things one way (nullable reference types), then there's no need for the other.

With all of that being said. Honestly, it’s a nice feature and I’m really not that fussed over it. The extent of my thinking is that it’s a handy little helper. That’s all.


A number of times in recent years, I've had the chance to work in companies that completely design out entire APIs using OpenAPI before writing a single line of code. Essentially writing YAML to say which endpoints will be available, and what each API should accept and return.

There are pros and cons to doing this of course. A big pro is that by putting in upfront time to really think about API structure, we can often uncover issues well before we get halfway through a build. But a con is that after spending a bunch of time defining things like models and endpoints in YAML, we then need to spend days doing nothing but creating C# classes as clones of their YAML counterparts, which can be tiresome and frankly, demoralizing at times.

That’s when I came across Open API Generator : https://openapi-generator.tech/

It’s a tool to take your API definitions, and scaffold out APIs and Clients without you having to lift a finger. It’s surprisingly configurable, but at the same time it isn’t too opinionated and allows you to do just the basics of turning your definition into controllers and models, and nothing more.

Let’s take a look at a few examples!

Installing Open API Generator

If you read the documentation here https://github.com/OpenAPITools/openapi-generator, it would look like installing is a huge ordeal of XML files, Maven and JAR files. But for me, using NPM seemed to be simple enough. Assuming you have NPM installed already (Which you should!), then you can simply run :

npm install @openapitools/openapi-generator-cli -g

And that’s it! Now from a command line you can run things like :

openapi-generator-cli version

Scaffolding An API

For this example, I actually took the PetStore API available here : https://editor.swagger.io/

It’s just a simple YAML definition that has CRUD operations on an example API for a pet store. I took this YAML and stored it as “petstore.yaml” locally. Then I ran the following command in the same folder  :

openapi-generator-cli generate -i petstore.yaml -g aspnetcore -o PetStore.Web --package-name PetStore.Web

Pretty self-explanatory, but one thing I do want to point out is the -g flag. I'm passing in aspnetcore here but in reality, Open API Generator has support to generate APIs for things like PHP, Ruby, Python etc. It's not C# specific at all!

Our project is generated and overall, it looks just like any other API you would build in .NET.

Notice that for each group of APIs in our definition, it's generated a controller, along with the models as well.

The controllers themselves are well decorated, but are otherwise empty. For example here is the AddPet method :

/// <summary>
/// Add a new pet to the store
/// </summary>
/// <param name="body">Pet object that needs to be added to the store</param>
/// <response code="405">Invalid input</response>
[HttpPost]
[Route("/v2/pet")]
[Consumes("application/json", "application/xml")]
[ValidateModelState]
[SwaggerOperation("AddPet")]
public virtual IActionResult AddPet([FromBody]Pet body)
{
    //TODO: Uncomment the next line to return response 405 or use other options such as return this.NotFound(), return this.BadRequest(..), ...
    // return StatusCode(405);
    throw new NotImplementedException();
}

I would note that this is obviously rather verbose (With the comments, Consumes attribute etc), but a lot of that is because that’s what we decorated our OpenAPI definition with, therefore it tries to generate a controller that should act and function identically.

But also notice that it hasn’t generated a service or data layer. It’s just the controller and the very basics of how data gets in and out of the API. It means you can basically scaffold things and away you go.

The models themselves also get generated, but they can be rather verbose. For example, each model gets an override of the ToString method that looks a bit like so :

/// <summary>
/// Returns the string presentation of the object
/// </summary>
/// <returns>String presentation of the object</returns>
public override string ToString()
{
    var sb = new StringBuilder();
    sb.Append("class Pet {\n");
    sb.Append("  Id: ").Append(Id).Append("\n");
    sb.Append("  Category: ").Append(Category).Append("\n");
    sb.Append("  Name: ").Append(Name).Append("\n");
    sb.Append("  PhotoUrls: ").Append(PhotoUrls).Append("\n");
    sb.Append("  Tags: ").Append(Tags).Append("\n");
    sb.Append("  Status: ").Append(Status).Append("\n");
    sb.Append("}\n");
    return sb.ToString();
}

It’s probably overkill, but you can always delete it if you don’t like it.

Obviously there isn’t much more to say about the process. One command and you’ve got yourself a great starting point for an API. I would like to say that you should definitely dig into the docs for the generator as there is actually a tonne of flags to use that likely solve a lot of hangups you might have about what it generates for you. For example there is a flag to use NewtonSoft.JSON instead of System.Text.Json if that is your preference!

I do want to touch on a few pros and cons on using a generator like this though…

The first con is that updates to the original Open API definition really can’t be “re-generated” into the API. There are ways to do it using the tool but in reality, I find it unlikely that you would do it like this. So for the most part, the generation of the API is going to be a one time thing.

Another con is, as I've already pointed out, that the generator has its own style which may or may not suit the way you like to develop software. On larger APIs, fixing some of these quirks of the generator can be annoying. But I would say that for the most part, fixing any small style issues is still likely to take less time than writing the entire API from scratch by hand.

Overall however, the pro of this is that you get a very consistent style. For example, I was helping out a professional services company with some of their code practices recently. What I noticed is that they spun up new APIs every month for different customers, and each API was somewhat beholden to the tech lead's style and preferences. By using an API generator as a starting point, everyone had a consistent baseline for the style we wanted to go for, and the style we should use going forward.

Generating API Clients

I want to quickly touch on another functionality of the Open API Generator, and that is generating clients for an API. For example, if you have a C# service that needs to call out to a web service, how can you quickly whip up a client to interact with that API?

We can use the following command to generate a Client library project :

openapi-generator-cli generate -i petstore.yaml -g csharp -o PetStore.Client --package-name PetStore.Client

This generates a very simple PetApi interface/class that has all of our methods to call the API.

For example, take a look at this simple code :

var petApi  = new PetApi("https://myapi.com");
var myPet = petApi.GetPetById(123);
myPet.Name = "John Smith";
petApi.UpdatePet(myPet);

Unlike the server code we generated, I find that the client itself is often able to be regenerated as many times as needed, and over long periods of time too.

As I mentioned, the client code is very handy when two services need to talk to each other, but I’ve also found it useful for writing large scale integration tests without having to copy and paste large models between projects or be mindful about what has changed in an API, and copy those changes over to my test project.


I cannot tell you how many times I’ve had the following conversation

“Hey I’m getting an error”

“What’s the error?”

“DBUpdateException”

“OK, what’s the message though, that could be anything”

“ahhh.. I didn’t see…..”

Frustratingly, when doing almost anything with Entity Framework, including updates, deletes and inserts, if something goes wrong you'll be left with the generic exception of :

Microsoft.EntityFrameworkCore.DbUpdateException: ‘An error occurred while saving the entity changes. See the inner exception for details.’

It can be extremely annoying if you're wanting to catch a particular database exception (e.g. it's to be expected that duplicates might be inserted) and handle it in a different way than something like being unable to connect to the database at all. Let's work up a quick example to illustrate what I mean.

Let’s assume I have a simple database model like so :

class BlogPost
{
    public int Id { get; set; }
    public string PostName { get; set; }
}

And I have configured my entity to have a unique constraint, meaning that every BlogPost must have a unique name :

modelBuilder.Entity<BlogPost>()
    .HasIndex(x => x.PostName)
    .IsUnique();

If I do something as simple as :

context.Add(new BlogPost
{
    PostName = "Post 1"
});
context.Add(new BlogPost
{
    PostName = "Post 1"
});
context.SaveChanges();

The *full* exception would be along the lines of :

Microsoft.EntityFrameworkCore.DbUpdateException: ‘An error occurred while saving the entity changes. See the inner exception for details.’
Inner Exception
SqlException: Cannot insert duplicate key row in object ‘dbo.BlogPosts’ with unique index ‘IX_BlogPosts_PostName’. The duplicate key value is (Post 1).

Let's say that we want to handle this exception in a very specific way. To do this, we would have to write a bit of a messy try/catch statement :

try
{
    context.SaveChanges();
}
catch (DbUpdateException exception) when (exception?.InnerException?.Message.Contains("Cannot insert duplicate key row in object") ?? false)
{
    //We know that the actual exception was a duplicate key row
}

Very ugly, and there isn't much reusability here. If we want to catch a similar exception elsewhere in our code, we're going to be copying and pasting this long catch statement everywhere.

And that’s where I came across the EntityFrameworkCore.Exceptions library!

Using EntityFrameworkCore.Exceptions

The EntityFrameworkCore.Exceptions library is extremely easy to use and I'm actually somewhat surprised that it hasn't made its way into the core Entity Framework libraries already.

To use it, all we have to do is run the following on our Package Manager Console :

Install-Package EntityFrameworkCore.Exceptions.SqlServer

And note that there are packages for things like Postgres and MySQL if that’s your thing!

Then with a single line in our DbContext we can set up better error handling :

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseExceptionProcessor();
}

If we run our example code from above, instead of our generic DbUpdateException we get :

EntityFramework.Exceptions.Common.UniqueConstraintException: ‘Unique constraint violation’

Meaning we can change our Try/Catch to be :

try
{
    context.SaveChanges();
}
catch (UniqueConstraintException ex)
{
    //We know that the actual exception was a duplicate key row
}

Much cleaner, much tidier, and far more reusable!
