These days, end-to-end browser testing is a pretty standard practice amongst modern development teams. Whether that’s with Selenium, WebdriverIO or Cypress, realistically as long as you are getting the tests up and running, you’re doing well.

Over the past couple of years, Cypress has become a de facto end-to-end testing framework. I don’t think in the last 5 years I’ve been at a company that hasn’t at least given it a try and built out some proof of concepts. And look, I like Cypress, but after some time I started getting irritated with a few caveats (many of which are listed by Cypress themselves here).

Notably :

  • The “same origin” URL limitation (essentially you must stay on the same root domain for the entire test) is infuriating when many web applications run some form of SSO/OAuth, even if using something like Auth0 or Azure AD B2C. So you’re almost dead in the water immediately.
  • Cypress does not handle multiple tabs.
  • Cypress cannot run multiple browsers at the same time (so testing some form of two-way communication between two browsers is impossible).
  • The “Promise” model and chaining of steps in a test seemed ludicrously unwieldy, and when trying to get more junior team members to write tests, things quickly descended into a “pyramid of doom”.

As I’ll talk about later in another post, the biggest thing was that we wanted a simple model for users writing tests in Gherkin type BDD language. We just weren’t getting that with Cypress and while I’m sure people will tell me all the great things Cypress can do, I went out looking for an alternative.

I came across Playwright, a cross-platform, cross-browser automation testing tool that does exactly what it says on the tin with no extras. Given my list of issues above with Cypress, I did have to laugh that this is a very prominent quote on their homepage :

Multiple everything. Test scenarios that span multiple tabs, multiple origins and multiple users. Create scenarios with different contexts for different users and run them against your server, all in one test.

They definitely know which audience they are playing up to.

Playwright has support for tests written in Node.js, Java, Python and, of course, C# .NET. So let’s take a look at the latter and how much work it takes to get up and running.

For an example app, let’s assume that we are going to write a test that has the following test scenario :

Given I am on google.com
When I type "dotnetcoretutorials" into the search box
And I press the button with the text "Google Search"
Then the first result is the domain dotnetcoretutorials.com

Obviously this is a terrible example of a test as the result might not always be the same! But I wanted to just show a little bit of a simple test to get things going.

Let’s get cracking on a C# test to execute this!

Now the thing with Playwright is, it’s actually just a C# library. There isn’t some magical tooling that you have to download or extensions to Visual Studio that you need to get everything working nicely. You can write everything as if you were writing a simple C# unit test.

For this example, let’s just create a simple MSTest project in Visual Studio. You can of course create a test project with NUnit, XUnit or any other testing framework you want and it’s all going to work much the same.

Next, let’s add the Playwright NuGet package with the following command in our Package Manager Console. Because we are using MSTest, let’s add the MSTest-specific NuGet package as it has a few helpers that speed things up later (realistically, you don’t actually need this and can install Microsoft.Playwright if you wish) :

Install-Package Microsoft.Playwright.MSTest

Now here’s my test. I’m going to dump it all here and then walk through a little bit on how it works.

[TestClass]
public class MyUnitTests : PageTest
{
    [TestMethod]
    public async Task WhenDotNetCoreTutorialsSearchedOnGoogle_FirstResultIsDomainDotNetCoreTutorialsDotCom()
    {
        //Given I am on google.com
        await Page.GotoAsync("https://www.google.com");
        //When I type "dotnetcoretutorials" into the search box
        await Page.FillAsync("[title='Search']", "dotnetcoretutorials");
        //And I press the button with the text "Google Search"
        await Page.ClickAsync("[value='Google Search'] >> nth=1");
        //Then the first result is the domain dotnetcoretutorials.com
        var firstResult = await Page.Locator("//cite >> nth=0").InnerTextAsync();
        Assert.AreEqual("https://www.dotnetcoretutorials.com", firstResult);
    }
}

Here’s some things you may notice!

First, our unit test class inherits from “PageTest” like so :

public class MyUnitTests : PageTest

Why? Well, because the Playwright.MSTest package contains code to set up and tear down browser objects for us (and it also handles concurrent tests very nicely). If we didn’t use this package, either because we are using a different test framework or because we want more control, the setup code would look something like :

IPage Page;

[TestInitialize]
public async Task TestInitialize()
{
    var playwright = await Playwright.CreateAsync();
    var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false
    });
    Page = await browser.NewPageAsync();
}

So it’s not the end of the world, but it’s nice that the framework can handle it for us!

Next, what you’ll notice is that there are no timeouts *and* all methods are async. By timeouts, what I mean is the bane of every Selenium developer’s existence : “waiting” for things to show up on screen, especially in JavaScript-heavy web apps.

For example, take these two calls one after the other :

//Given I am on google.com
await Page.GotoAsync("https://www.google.com");
//When I type "dotnetcoretutorials" into the search box
await Page.FillAsync("[title='Search']", "dotnetcoretutorials");

In other frameworks we might have to :

  • Add some sort of arbitrary delay after the GoTo call to wait for the page to properly load
  • Write some code to check if a particular element is on screen before continuing (Like a WaitUntil type call)
  • Write some custom code for our Fill method that will poll or retry until we can find that element and type

Instead, Playwright handles all of that under the hood for you : it assumes that when you want to fill a textbox, the textbox is eventually going to show up, and so it waits until it does. The fact that everything is async also means it’s non-blocking, which is great if you are using Playwright locally since it won’t freeze everything on your screen for seconds at a time!
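If you ever do need an explicit wait (say, for an element Playwright isn’t directly interacting with), there is a first-class API for that too. A minimal sketch, where the `#search` selector is just a hypothetical example:

```csharp
// Wait until the (hypothetical) results container appears before reading from it.
await Page.WaitForSelectorAsync("#search");

// You can also bound the wait if you want to fail fast.
await Page.WaitForSelectorAsync("#search", new PageWaitForSelectorOptions
{
    Timeout = 5000 // milliseconds
});
```

Most of the time though, the implicit waiting built into `FillAsync`, `ClickAsync` and friends means you never touch this.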

The rest of the test should be pretty self explanatory : we are using some typical selectors to fill out the Google search and find the top result, and our Assert is from our own test framework. Playwright does come packaged with its own assertion framework, but you don’t have to use it if you don’t want to!
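For completeness, here is a sketch of what the final check could look like using Playwright’s bundled assertions instead (the `Expect` helper is exposed by the Playwright MSTest base classes). The nice property is that the assertion itself retries until it passes or times out:

```csharp
// Assert that the first <cite> element eventually contains our domain.
await Expect(Page.Locator("//cite >> nth=0")).ToContainTextAsync("dotnetcoretutorials.com");
```

Because it retries, this avoids the race condition of reading the text once and asserting on a possibly-stale value.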

And.. That’s it!

There are some extremely nifty tools that come packaged with Playwright that I’m going to write about in the coming days, including the ability to wire up with Specflow for some BDD goodness. What I will say so far is that I like the fact that Playwright has hit the right balance between being an automation test framework *and* being able to do plain browser automation (For example to take a screenshot of a web page). Cypress clearly leans on the testing side, and Selenium I feel often doesn’t feel like a testing framework as much as it feels like a scripting framework that you can jam into your tests. So far, so good!

Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this weeks blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

Visual Studio 2022 17.2 shipped the other day, and in it was a handy little feature that I can definitely see myself using a lot going forward. That is the IEnumerable Visualizer! But before I dig into what it does (And really it’s quite simple), I wanted to quickly talk about why it was so desperately needed.

Let’s imagine I have a class like so :

class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

And somewhere in my code, I have a List of people with a breakpoint on it. Essentially, I want to quickly check the contents of this list and make sure while debugging that I have the right data in the right place.
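For reference, the list I’m debugging is just something like the following (the data itself is made up for this example):

```csharp
var people = new List<Person>
{
    new Person { Id = 1, FirstName = "John",  LastName = "Smith" },
    new Person { Id = 2, FirstName = "Jane",  LastName = "Doe" },
    new Person { Id = 3, FirstName = "Alice", LastName = "Brown" }
};
```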

Our first port of call might be to simply use the “Hover Results View” method like so :

But… As we can see it doesn’t exactly help us to view the contents easily. We can either then go and open up each item individually, or in some cases we can override the ToString method. Neither of which may be preferable or even possible depending on our situation.

We can of course use the “Immediate Window” to run queries against our list if we know we need to find something in particular. Something like :

? people.Where(x => x.FirstName == "John")

Again, it’s very ad hoc and doesn’t give us a great view of the data itself, just whether something exists or not.

Next, we can use the Autos/Watch/Locals menu which does have some nice pinning features now, but again, is a tree view and so it’s hard to scroll through large pieces of data easily. Especially if we are trying to compare multiple properties at once.

But now (again, this requires Visual Studio 2022 17.2), notice how in the Autos view we have a little icon called “View” right at the top of the list. Click that and…

This is the new IEnumerable visualizer! A nice tabular view of the data, which you can even export to Excel if you really need to. While it’s a simple and fairly barebones addition, it’s something that will see immediate use in letting you debug your collections more easily.


I was recently asked by another developer on the difference between making a method virtual/override, and simply hiding the method using the new keyword in C#.

I gave him what I thought to be the best answer (For example, you can change the return type when using the “new” keyword), and yet while showing him examples I managed to bamboozle myself into learning something new after all these years.

Take the following code for instance, what will it print?

Parent childOverride = new ChildOverride();
childOverride.WhoAmI();

Parent childNew = new ChildNew();
childNew.WhoAmI();

class Parent
{
    public virtual void WhoAmI()
    {
        Console.WriteLine("Parent");
    }
}

class ChildOverride : Parent
{
    public override void WhoAmI()
    {
        Console.WriteLine("ChildOverride");
    }
}

class ChildNew : Parent
{
    public new void WhoAmI()
    {
        Console.WriteLine("ChildNew");
    }
}

At first glance, I assumed it would print the same thing either way. After all, I’m basically newing up the two different types, and in *both* cases I am casting it to the parent.

When casting like this, I like to tell junior developers that an object “Always remembers who it is”. That is, my ChildOverride can be cast to a Parent, or even an object, and it still remembers that it’s a ChildOverride.

So what does the above code actually print out?

ChildOverride
Parent
So our Override method remembered who it was, and therefore it’s “WhoAmI” method. But our ChildNew did not… Kinda.

Why, you might ask? Well, it’s actually quite simple if you think about it.

When you use the override keyword, it’s overriding the base class and there is a sort of “linkage” between the two methods. That is, it’s known that the child class is an override of the base.

When you use the new keyword, you are saying that the two methods are in no way related. And that your new method *only* exists on the child class, not on the parent. There is no “linkage” between the two.

This is why when you cast to the parent class, the overridden method is known, and the “new” method is not.
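To round out the example, the hidden method is still there and perfectly callable; you just have to go through the child type to reach it. Assuming each WhoAmI simply prints its own class name:

```csharp
ChildNew childNew = new ChildNew();
childNew.WhoAmI();          // prints "ChildNew" - resolved via the child type

Parent asParent = childNew;
asParent.WhoAmI();          // prints "Parent" - the parent type has no "linkage" to the hidden method
```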

With all that being said, in many, many years of programming in C# I have seldom used the new keyword to hide methods like this. Not only is there very little reason to do so, but it breaks a core SOLID principle, the Liskov Substitution Principle.


Here’s another one from the vault of “Huh, I guess I never thought I needed that until now”

Recently I was trying to write a unit test that required me to paste in some known JSON to validate against. Sure, I could load the JSON from a file, but I really don’t like File IO in unit tests. What it ended up looking like was something similar to :

var sample = "{\"PropertyA\":\"Value\"}";

Notice those really ugly backslashes in there trying to escape my quotes. I get this a lot when working with JSON or even HTML string literals, and my main method for getting around it is loading the string into notepad for a quick find and replace.

Well, starting from C# 11, you can now do the following!

var sample = """{"PropertyA":"Value"}""";
Notice those (ugly) three quote marks at the start and end. That’s the new syntax for “Raw String Literals”, essentially allowing you to mix in unescaped characters without having to start backslashing like a madman.

Also supported are multi-line strings, like so :

var sample = """
    {
        "PropertyA" : "Value"
    }
    """;

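One nice extra worth knowing: raw string literals compose with interpolation. Since JSON is full of curly braces, C# 11 lets you add extra $ signs so that only doubled braces are treated as interpolation holes. A small sketch:

```csharp
var value = "Value";

// Two $ signs means interpolation holes are written as {{ }},
// so single { and } can appear literally in the JSON.
var sample = $$"""
    {
        "PropertyA" : "{{value}}"
    }
    """;
```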
While this feature is officially coming in C# 11 later this year, you can get a taste of it by adding the following to your csproj file :

<LangVersion>preview</LangVersion>
I would say that editor support is not all that great right now. The very latest Visual Studio 2022 seems to handle it fine; however, inside VS Code I did have some issues (but it still compiled just fine).

One final thing to note is the absence of the “tick” ` character. When I first heard about this feature, I just assumed it would use the tick character as it’s pretty synonymous with multi-line raw strings (at least in my mind). So I will include the discussion from Microsoft about whether they should use the tick character or not here :

With the final decision being

In keeping with C# history, I think " should continue to be the string literal delimiter

I’m less sure on that. I can’t say that three quote marks makes any more sense than a tick, especially when it comes to moving between languages so… We shall see if this lasts until the official release.


User Secrets (sometimes called Secret Manager) in .NET has been around for quite some time now (I think since .NET Core 2.0). And I’ve always *hated* it. I felt like it encouraged developers to email/slack/teams individual passwords or even entire secret files to each other and call it secure. I also didn’t really see a reason why developers would have secrets locally that were not shared with the wider team. For that reason, centralized secret storage such as Azure Key Vault was always preferable.

But over the past few months, I’ve grown to see their value… And in reality, I use User Secrets more for “this is how local development works on my machine” rather than actual secrets. Let’s take a look at how User Secrets work and how they can be used, and then later on we can talk more about what I’ve been using them for.

Creating User Secrets via Visual Studio

By far the easiest way to use User Secrets is via Visual Studio. Right click your entry project and select “Manage User Secrets”.

Visual Studio will then work out the rest, installing any packages you require and setting up the secrets file! Easy!

You should be presented with an empty secrets file which we will talk about later.

Even if you use Visual Studio, I highly recommend at least reading the section below on how to do things from the command line. It explains how things work behind the scenes and will likely answer any questions you have about what Visual Studio is doing under the hood.

Creating User Secrets via Command Line

We can also create User Secrets via the command line! To do so, we need to run the following command in our project folder :

dotnet user-secrets init

The reality is, all this does is generate a guid and place it into your csproj file. It looks a bit like so :

<PropertyGroup>
  <UserSecretsId>00000000-0000-0000-0000-000000000000</UserSecretsId>
</PropertyGroup>
If you really wanted, you could generate this guid yourself and place it here; there is nothing special about it *except* that the guid must be unique between projects on your machine. Of course, if you wanted projects to share secrets, you could use the same guid across projects.

From here, you can now set secrets from the command line. It seems janky, but unfortunately you *must* create a secret via the command line before you can edit the secrets file in a text editor. That’s just how it works. So in your project folder, run the following command :

dotnet user-secrets set "MySecret" "12345"

So.. What does this actually do? It’s quite simple actually. On Windows, you will have the following file :

%APPDATA%\Microsoft\UserSecrets\<user_secrets_id>\secrets.json

And on Linux :

~/.microsoft/usersecrets/<user_secrets_id>/secrets.json

Opening this file, you’ll see something like :

{
    "MySecret" : "12345"
}

And from this point on, you can edit this file in notepad and forget the command line altogether. In reality, you could even create this file manually and never use the command line to add the initial secret. But I wanted to make note that the file *does not* exist until you add your first secret. And, as we will see later, if you have a user secret guid in your csproj file but don’t have the corresponding file, you’ll actually get errors thrown, which is a bit frustrating.

With all of this said, when you use Visual Studio, it essentially does the above for you. But I still think it’s worth understanding where these secrets are stored, and that it’s just a local file on your machine. No magic!

Using User Secrets In .NET Configuration

User Secrets follow the same paradigm as all other configuration in .NET. So if you are using an appsettings.json, Azure Keyvault, Environment Variables etc. It’s all the same, even with User Secrets.

If you installed via the Command Line, or you just want to make sure you have the right packages, you will need to install the following nuget package :

Install-Package Microsoft.Extensions.Configuration.UserSecrets

The next step depends on whether you are using .NET 6 minimal APIs or .NET 5-style Startup classes. Either way, by now you probably understand where you add configuration to your project.

For example, in my .NET 6 minimal API I have something that looks like so :

var builder = WebApplication.CreateBuilder(args);
builder.Configuration
       .AddUserSecrets(Assembly.GetExecutingAssembly(), true);

Notice I’m passing “true” as the second argument to AddUserSecrets. That’s because in .NET 6, User Secrets were made “required” by default, and by passing true we make them optional. This is important because if a user has not set up the user secrets file on their machine yet, this whole thing will blow up if not made optional. The exception will be something like :

System.IO.FileNotFoundException: The configuration file 'secrets.json' was not found and is not optional

Now our User Secrets are being loaded into our configuration object. Ideally, we should place User Secrets *last* in our configuration pipeline, because it means they will be the last overwrite to happen. And… That’s it! Pretty simple. But what sort of things should we put in User Secrets?
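Once loaded, a User Secret is read exactly like any other configuration value. A sketch continuing the “MySecret” example from earlier:

```csharp
using System.Reflection;

var builder = WebApplication.CreateBuilder(args);
builder.Configuration.AddUserSecrets(Assembly.GetExecutingAssembly(), true);

// Returns "12345" on a machine where `dotnet user-secrets set "MySecret" "12345"` was run,
// and null on a machine without that secret.
var mySecret = builder.Configuration["MySecret"];
```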

What Are User Secrets Good For?

I think contrary to the name, User Secrets are not good for Secrets at all, but instead user specific configuration. Let me give you an example. On a console application I was working with, all but one developer were using Windows machines. This worked great because we had local file path configuration, and this obviously worked smoothly on Windows. However, the Linux user was having issues. Originally, the developer would download the project, edit the appsettings, and run the project fine. When it came time to check in work, they would have to quickly revert or ignore the changes in appsettings so that they didn’t get pushed up. Of course, this didn’t always happen and while it was typically caught in code review, it did cause another round of branch switching, and changes to be pushed.

Now we take that same example and put User Secrets over the top. Now the Linux developer simply edits their User Secrets to change the file paths to suit their machine. They never touch appsettings.json at all, and everything works just perfectly.

Take another team I work with. They had in the past worked with a shared remote database in Azure for local development. This was causing all sorts of headaches when developers were writing or testing SQL migrations; often their migrations would break things for other developers. Again, so as not to break the existing developers’ flow, I created User Secrets and showed the team how they could override the default SQL connection string to instead point at their local development machine, so we could slowly wean ourselves off the shared database.
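As a sketch of what that override looks like (the key name “DefaultConnection” and the connection string are just assumptions for this example), the developer’s secrets.json simply mirrors the structure of appsettings.json:

```json
{
    "ConnectionStrings": {
        "DefaultConnection": "Server=localhost;Database=MyAppDev;Trusted_Connection=True;"
    }
}
```

Because User Secrets load after appsettings.json, this value wins on that developer’s machine and nothing is ever checked in.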

Another example in a similar vein : the number of times I’ve had developers install SQL Server on their machine as \SQLEXPRESS or another named instance rather than the default instance. It happens all the time. Again, while I’m trying to help these developers out, sometimes it’s easier to just say “please add a user secret for your specific setup if you need it, and we can resolve the issue later”. It almost becomes an unblocking mechanism whereby developers can control their own configuration.

What I don’t think User Secrets are good for is actual secrets. For example, while creating an email integration, a developer put a SendGrid API key in their User Secrets. But what happens when they push that code up? Are they just going to email that secret to the developers who need it? It doesn’t really make sense. Anything that needs to be shared should not be in User Secrets at all.


Imagine an Ecommerce system that generates a unique order number each time a customer goes to the checkout. How would you generate this unique number?

You could :

  • Use the primary key from the database
  • Select the MAX order number from the database and increment by one
  • Write a custom piece of code that uses a table with a unique constraint to “reserve” numbers
  • Use SQL Sequences to generate unique values

As you’ve probably guessed by the title of this post, I want to touch on that final point because it’s a SQL Server feature that I think has gone a bit under the radar. I’ll be the first to admit that it doesn’t solve all your problems (See limitations at the end of this post), but you should know it exists and what it’s good for. Half the battle when choosing a solution is just knowing what’s out there after all!

SQL Sequences are actually a very simple and effective way to generate unique incrementing values in a thread-safe way. That means as your application scales, you don’t have to worry about two users clicking the “order” button on your ecommerce site at exactly the same time and being given the exact same order number.

Getting Started With SQL Sequences

Creating a Sequence in SQL Server is actually very simple :

CREATE SEQUENCE TestSequence
    START WITH 1
    INCREMENT BY 1;
Given this syntax, the sort of options available is probably obvious to you. You can, for example, always increment the sequence by 2 :

CREATE SEQUENCE TestSequence
    START WITH 1
    INCREMENT BY 2;
Or you can even descend instead of ascend :

CREATE SEQUENCE TestSequence
    START WITH 1000
    INCREMENT BY -1;
And to get the next value, we just need to run SQL like :

SELECT NEXT VALUE FOR TestSequence;
It really is that simple! Not only that, you can view Sequences in SQL Server Management Studio as well (including being able to create them, view the next value without actually requesting it, etc.). Simply look for the Sequences folder under Programmability.

Entity Framework Support

Of course, you are probably using Entity Framework with .NET/SQL Server, so what about first-class support there? Well… It is supported, but it’s not great.

To recreate our sequence as above, we would override the OnModelCreating method of our DbContext (i.e. where we would put all of our configuration anyway) and add the following :

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.HasSequence("TestSequence", x => x.StartsAt(1).IncrementsBy(1));
}

That creates our sequence, but how about using it? Unfortunately, there isn’t really a first-class way to “get” the next value (for example, if you needed it in application code). Most of the documentation revolves around using it as the default value for a column, such as :

modelBuilder.Entity<Order>()
    .Property(o => o.OrderNo)
    .HasDefaultValueSql("NEXT VALUE FOR TestSequence");

If you are looking to simply retrieve the next number in the sequence and use it somewhere else in your application, unfortunately you will be writing raw SQL to achieve that. So not ideal.
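As a rough sketch of what that raw SQL call could look like from EF Core, using the underlying ADO.NET connection (TestSequence being the sequence created above; a default SQL Server sequence is a bigint, hence the cast to long):

```csharp
var connection = context.Database.GetDbConnection();
await connection.OpenAsync();

using var command = connection.CreateCommand();
command.CommandText = "SELECT NEXT VALUE FOR TestSequence";

// The scalar comes back as object; cast to the sequence's underlying type.
var nextValue = (long)await command.ExecuteScalarAsync();
```

It works, but it does feel like a step down from the rest of the fluent EF experience.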

With all of that being said however, if you use Entity Framework migrations as the primary way to manage your database, then the ability to at least create sequences via the ModelBuilder is still very very valuable.

Limitations Of SQL Sequences
When it comes to generating unique values for applications, my usage of SQL Sequences has been about 50/50 : 50% of the time it’s the perfect fit, but the other 50% of the time there are some heavy “limitations” that get in the way of it being actually useful.

Some of these limitations that I’ve run into include :

When you request a number from a sequence, no matter what happens from that point on, the sequence is incremented. Why does this matter? Imagine you are creating a Customer in the database, and you request a number from the sequence and get “156”. When you go to insert that Customer, a database constraint fails and the customer is not inserted. The next Customer will still be inserted as “157”, regardless of the previous insert failing. In short, sequences are not part of transactions and do not “lock and release” numbers. This is important because in some cases you may not wish to have a “gap” in the sequence at all.

Another issue is that sequences cannot be “partitioned” in any way. A good example is a system I was building required unique numbers *per year*. And each year, the sequence would be reset. Unfortunately, orders could be backdated and therefore simply waiting until Jan 1st and resetting the sequence was not possible. What would be required is a sequence created for say the next 10 years, and for each of these be managed independently. It’s not too much of a headache, but it’s still another bit of overhead.

In a similar vein, multi-tenancy can make sequences useless. If you have a single database in an ecommerce SaaS product supporting, say, 100 tenants, you cannot use a single sequence for all of them. You would need to create multiple sequences (one for each tenant), which again can be a headache.

In short, sequences are good when you need a single number incrementing for your one tenant database. Anything more than that and you’re going to have to go with something custom or deal with managing several sequences at once, and selecting the right one at the right time with business logic.


In a previous post, we talked about how we could soft delete entities by setting up a DateDeleted column (read that post here). But if you’ve ever done this (or used a simple “IsDeleted” flag), you’ll know that it becomes a bit of a burden to always have the first line of your query go something like this :

dbSet.Where(x => x.DateDeleted == null);

Essentially, you need to remember to always be filtering out rows which have a DateDeleted. Annoying!

Microsoft has a great way to solve this with what are called “Global Query Filters”, and the documentation even provides an example of how to ignore soft deletes in your code.

The problem with this is that the documentation only shows how to do it for each entity, one at a time. If your database has 30 tables, all with a DateDeleted column, you’re going to have to remember to add the configuration each and every time.

In previous versions of Entity Framework, we could get around this by using “Conventions”. Conventions were a way to apply configuration to a broad set of entities based on… well… conventions. So for example, you could say “if you see an IsDeleted boolean field on an entity, we always want to add a filter for that”. Unfortunately, EF Core does not have conventions yet (they may land in EF Core 7). So instead, we have to do things in a bit of a rinky-dink way.

To do so, we just need to override OnModelCreating with a bit of extra code (of course we can extract this out to helper methods, but for simplicity I’m showing where it goes in our DbContext).

public class MyContext : DbContext
{
	protected override void OnModelCreating(ModelBuilder modelBuilder)
	{
		foreach (var entityType in modelBuilder.Model.GetEntityTypes())
		{
			//If the actual entity is an auditable type. 
			if (typeof(Auditable).IsAssignableFrom(entityType.ClrType))
			{
				//This adds (In a reflection type way), a Global Query Filter
				//That always excludes deleted items. You can opt out by using dbSet.IgnoreQueryFilters()
				var parameter = Expression.Parameter(entityType.ClrType, "p");
				var deletedCheck = Expression.Lambda(
					Expression.Equal(Expression.Property(parameter, "DateDeleted"), Expression.Constant(null)),
					parameter);
				modelBuilder.Entity(entityType.ClrType).HasQueryFilter(deletedCheck);
			}
		}
	}
}

What does this do? For every entity type that inherits from Auditable, it builds an expression equivalent to p => p.DateDeleted == null and registers it as a global query filter, so soft deleted rows are excluded from every query by default.

Of course, we can use this same loop to add other “Conventions” too. Things like adding an Index to the DateDeleted field is possible via the OnModelCreating override.

Now, whenever we query the database, Entity Framework will automatically filter our soft deleted entities for us!
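And for the occasions where you genuinely do want the deleted rows back (an admin screen, for example), there is a per-query escape hatch. A sketch, assuming a Customers DbSet on the context:

```csharp
// Normal query - soft deleted customers are filtered out automatically.
var active = await context.Customers.ToListAsync();

// Opt out of all global query filters for this one query.
var everyone = await context.Customers
    .IgnoreQueryFilters()
    .ToListAsync();
```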


In any database schema, it’s extremely common to have the fields “DateCreated, DateUpdated and DateDeleted” on almost every entity. At the very least, they provide helpful debugging information, but further, the DateDeleted affords a way to “soft delete” entities without actually deleting them.

That being said, over the years I’ve seen some pretty interesting ways in which these have been implemented. The worst, in my view, is writing C# code that manually sets the timestamp on create or update. While simple, one clumsy developer later and you aren’t recording any timestamps at all. It relies entirely on developers “remembering” to update the timestamp. Other times I’ve seen database triggers used, which… works… but then you have another problem in that you’re using database triggers!

There’s a fairly simple method I’ve been using for years and it involves utilizing the ability to override the save behaviour of Entity Framework.

Auditable Base Model

The first thing we want to do is actually define a “base model” that all entities can inherit from. In my case, I use a base class called “Auditable” that looks like so :

public abstract class Auditable
{
	public DateTimeOffset DateCreated { get; set; }
	public DateTimeOffset? DateUpdated { get; set; }
	public DateTimeOffset? DateDeleted { get; set; }
}

And a couple of notes here :

  • It’s an abstract class because it should only ever be inherited from
  • We use DateTimeOffset because we will then store the timezone along with the timestamp. This is a personal preference but it just removes all ambiguity around “Is this UTC?”
  • DateCreated is not null (Since anything created will have a timestamp), but the other two dates are! Note that if this is an existing database, you will need to allow nullables (And work out a migration strategy) as your existing records will not have a DateCreated.

To use the class, we just need to inherit from it with any Entity Framework model. For example, let’s say we have a Customer object :

public class Customer : Auditable
{
	public int Id { get; set; }
	public string Name { get; set; }
}

So all the class does is mean we don’t have to copy and paste the same three date fields everywhere, and that their presence is enforced. Nice and simple!

Overriding Context SaveChanges

The next thing is maybe controversial, and I know there’s a few different ways to do this. Essentially we are looking for a way to say to Entity Framework “Hey, if you insert a new record, can you set the DateCreated please?”. There’s things like Entity Framework hooks and a few nuget packages that do similar things, but I’ve found the absolute easiest way is to simply override the save method of your database context.

The full code looks something like :

public class MyContext : DbContext
{
	public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
	{
		var insertedEntries = this.ChangeTracker.Entries()
							   .Where(x => x.State == EntityState.Added)
							   .Select(x => x.Entity);

		foreach (var insertedEntry in insertedEntries)
		{
			var auditableEntity = insertedEntry as Auditable;
			//If the inserted object is an Auditable. 
			if (auditableEntity != null)
			{
				auditableEntity.DateCreated = DateTimeOffset.UtcNow;
			}
		}

		var modifiedEntries = this.ChangeTracker.Entries()
				   .Where(x => x.State == EntityState.Modified)
				   .Select(x => x.Entity);

		foreach (var modifiedEntry in modifiedEntries)
		{
			//If the modified object is an Auditable. 
			var auditableEntity = modifiedEntry as Auditable;
			if (auditableEntity != null)
			{
				auditableEntity.DateUpdated = DateTimeOffset.UtcNow;
			}
		}

		return base.SaveChangesAsync(cancellationToken);
	}
}
Now your context may have additional code, but this is the bare minimum to get things working. What this does is :

  • Gets all entities that are being inserted, checks if they inherit from Auditable, and if so sets the DateCreated.
  • Gets all entities that are being updated, checks if they inherit from Auditable, and if so sets the DateUpdated.
  • Finally, call the base SaveChanges method that actually does the saving.

Using this, we are essentially intercepting when Entity Framework would normally save all changes, and updating all timestamps at once with whatever is in the batch.
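Since the override just pattern-matches each tracked entity against Auditable, the core of it can be exercised without a database at all. Here's a self-contained sketch (re-declaring the Auditable and Customer classes from above so it runs on its own) showing the cast-and-set step in isolation:

```csharp
using System;

// The same check the SaveChangesAsync override performs per tracked entry:
// if the entity inherits from Auditable, stamp it; otherwise leave it alone.
object entity = new Customer { Id = 1, Name = "Jane" };

if (entity is Auditable auditable)
{
    auditable.DateCreated = DateTimeOffset.UtcNow;
}

Console.WriteLine(((Customer)entity).DateCreated != default); // prints "True"

public abstract class Auditable
{
    public DateTimeOffset DateCreated { get; set; }
    public DateTimeOffset? DateUpdated { get; set; }
    public DateTimeOffset? DateDeleted { get; set; }
}

public class Customer : Auditable
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

Anything that doesn't inherit from Auditable simply fails the type check and passes through untouched, which is why the override is safe to apply across the whole change tracker.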

Handling Soft Deletes

Deletes are a special case for one big reason. If we actually try and call delete on an entity in Entity Framework, it gets added to the ChangeTracker as… well… a delete. And to unwind this at the point of saving and change it to an update would be complex.

What I tend to do instead is on my BaseRepository (Because.. You’re using one of those right?), I check if an entity is Auditable and if so, do an update instead. The copy and paste from my BaseRepository looks like so :

public async Task<T> Delete(T entity)
{
	//If the type we are trying to delete is auditable, then we don't actually delete it, 
	//but instead set it to be updated with a delete date. 
	if (typeof(Auditable).IsAssignableFrom(typeof(T)))
	{
		(entity as Auditable).DateDeleted = DateTimeOffset.UtcNow;
		_context.Entry(entity).State = EntityState.Modified;
	}

	return entity;
}

Now your mileage may vary, especially if you are not using the Repository Pattern (Which you should be!). But in short, you must handle soft deletes as updates *instead* of simply calling Remove on the DbSet.
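One thing worth calling out in the snippet above: typeof(Auditable).IsAssignableFrom(typeof(T)) reads in the opposite direction to the is operator, so it's easy to write backwards. A quick sanity check of the direction, using framework exception types as stand-ins for a base class and a derived entity:

```csharp
using System;

// Base.IsAssignableFrom(Derived) is true; the reverse is false.
// ArgumentNullException derives from Exception here, standing in for
// Customer deriving from Auditable.
Console.WriteLine(typeof(Exception).IsAssignableFrom(typeof(ArgumentNullException))); // prints "True"
Console.WriteLine(typeof(ArgumentNullException).IsAssignableFrom(typeof(Exception))); // prints "False"
```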

Taking This Further

What’s not shown here is that we can use this same methodology to update many other “automated” fields. We use this same system to track the last user to Create, Update and Delete entities. Once this is up and running, it’s often just a couple more lines to instantly gain traceability across every entity in your database!

Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this week’s blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

This past week, .NET 7 Preview 1 was released! By extension, this also means that Entity Framework 7 and ASP.NET Core 7 preview versions shipped at the same time.

So what’s new? In all honesty not a heck of a lot that will blow your mind! As with most Preview 1 releases, it’s mostly about getting that first version bump out of the way and any major blockers from the previous release sorted. So with that in mind, skimming the release notes I can see :

  • Progress continues on MAUI (The multi platform UI components for .NET), but we are still not at an RC (Although RC should be shipping with .NET 7)
  • Entity Framework changes are almost entirely bugs from the previous release
  • There is a slight push (And I’ve also seen this on Twitter), to merge in concepts from Orleans, or more broadly, having .NET 7 focus on quality of life improvements that lend itself to microservices or independent distributed applications (Expect to hear more about this as we get closer to .NET 7 release)
  • Further support for nullable reference types in various .NET libraries
  • Further support for file uploads and streams when building APIs using the Minimal API framework
  • Support for nullable reference types in MVC Views/Razor Pages
  • Performance improvements for header parsing in web applications

So nothing too groundbreaking here. Importantly, .NET 7 is labelled as a “Current” release which means it only receives 18 months of support. This is normal, as Microsoft tend to alternate releases between Long Term Support (LTS) and Current.

You can download .NET 7 Preview 1 here :

And you will require Visual Studio 2022 *Preview*!


Such is life on Twitter, I’ve been watching from afar .NET developers argue about a particular upcoming C# 11 feature, Parameter Null Checks. It’s actually just a bit of syntactic sugar to make it easier to throw argument null exceptions, but it’s caused a bit of a stir for two main reasons.

  1. People don’t like the syntax full stop. Which I understand, but other features such as some of the switch statement pattern matching and tuples look far worse! So in for a penny in for a pound!
  2. It somewhat clashes with another recent C# feature of Nullable Reference Types (We’ll talk more about this later).

The Problem

First let’s look at the problem this is trying to solve.

I may have a very simple method that takes a list of strings (As an example, but it could be any nullable type). I may want to ensure that whatever the method is given is not null. So we would typically write something like :

void MyMethod(List<string> input)
{
    if (input == null)
    {
        throw new ArgumentNullException(nameof(input));
    }
}

Nothing too amazing here. If the list is null, throw an ArgumentNullException!

In .NET 6 (Specifically .NET 6, not a specific version of C#), a shorthand was added to save a few lines. So we could now do :

void MyMethod(List<string> input)
{
    ArgumentNullException.ThrowIfNull(input);
}

There is no magic here. It’s just doing the same thing we did before with the null check, but wrapping it all up into a nice helper.
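Here's a minimal runnable example of that helper, ArgumentNullException.ThrowIfNull (available from .NET 6 onwards). It even captures the argument expression for you, so the thrown exception names the offending parameter:

```csharp
using System;
using System.Collections.Generic;

void MyMethod(List<string> input)
{
    // One line replaces the manual if/throw boilerplate.
    ArgumentNullException.ThrowIfNull(input);
    Console.WriteLine($"Got {input.Count} items");
}

MyMethod(new List<string> { "a", "b" }); // prints "Got 2 items"

try
{
    MyMethod(null);
}
catch (ArgumentNullException ex)
{
    Console.WriteLine(ex.ParamName); // prints "input"
}
```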

So what’s the problem? Well.. There isn’t one really. The only real issue is that should you have a method with many parameters, all of them nullable, and yet you want to throw an ArgumentNullException for each, you might have an additional few lines at the start of your method. I guess that’s a problem to be solved, but it isn’t too much of a biggie.

Parameter Null Checking In C# 11

I put C# 11 here, but actually you can turn on this feature in C# 10 by adding the following to your csproj file :

<LangVersion>preview</LangVersion>
Now we have a bit of sugar around null checking by doing the following :

void MyMethod(List<string> input!!)
{
}

Adding the “!!” operator to a parameter name immediately adds an argument null check to it, skipping the need for the first few lines to be boilerplate null checks.

Just my personal opinion, it’s not… that bad. I think people see the use of symbols, such as ? or ! and they immediately get turned off. When using a symbol like this, especially one that isn’t universal across different languages (such as a ternary ?), it’s not immediately clear what it does. I’ve even seen some suggest just adding another keyword such as :

void MyMethod(notnull List<string> input)

I don’t think this is really any better to be honest.

Overall, it’s likely to see a little bit of use. But the interesting context of some of the arguments against this is….

Nullable Reference Types

For why I am totally wrong in all of the below, check this great comment from David. It explains why, while the below is true, it’s also not the full story and I am wrong in suggesting that you only need one or the other!

C# 8 introduced the concept of Nullable Reference Types. Before this, all reference types were nullable by default, and so the above checks were essentially required. C# 8 came along and gave a flag to say, if I want something to be nullable, I’ll let you know, otherwise treat everything as non nullable. You can read more about the feature here :

The interesting point here is that if I switch this flag on (And from .NET 6, it’s switched on by default in new projects), then there is no need for ArgumentNullExceptions because either the parameter is not null by default, or I specify that it can be null (And therefore won’t need the check).

Just as an example, with nullable reference types switched on :

#nullable enable

void MyMethod(List<string> input)
{
    //Input cannot be null anyway. So no need for the check. 
}

void MyMethod2(List<string>? input)
{
    //Using ? I've specified it can be null, and if I'm saying it can be null...
    //I won't be throwing exceptions when it is null right? 
}

There are arguments that nullable reference types are a compile time check whereas throwing an exception is a runtime check. But the reality is they actually solve the same problem just in different ways, and if there is a push to do things one way (nullable reference types), then there’s no need for the other.
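To make the compile time vs runtime distinction concrete: with nullable reference types enabled, the compiler trusts the annotation and emits no actual check. A null can still sneak through at runtime (here forced with the null-forgiving ! operator), and when it does you get a less helpful NullReferenceException rather than an ArgumentNullException naming the parameter:

```csharp
#nullable enable
using System;
using System.Collections.Generic;

int CountItems(List<string> input)
{
    // The compiler assumes input is non-null; no runtime guard is generated.
    return input.Count;
}

try
{
    // null! silences the compile time warning, simulating a caller that
    // ignores (or never sees) the nullable annotations.
    CountItems(null!);
}
catch (NullReferenceException)
{
    Console.WriteLine("Still blew up at runtime, despite the annotation");
}
```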

With all of that being said. Honestly, it’s a nice feature and I’m really not that fussed over it. The extent of my thinking is that it’s a handy little helper. That’s all.
