An under-the-radar feature introduced in SQL Server 2016 (and also available in Azure SQL) is temporal tables. Temporal tables allow you to keep a “history” of all data within a SQL table, in a separate (but connected) history table. In very basic terms, every time data in a SQL table is updated, a copy of the original state of the data is cloned into the history table.

The use cases of this are pretty obvious, but include :

  • Ability to query what the state of data was at a specific time
  • Ability to select multiple sets of data between two time periods
  • Viewing how data changes over time (For example, feeding into an analytics or machine learning model)
  • An off the shelf, easy to use, auditing solution for tracking what changed when
  • And finally, a somewhat basic, but still practical disaster recovery scenario for applications going haywire

A big reason for me doing this post is that EF Core 6 has just been released, and includes built-in support for temporal tables. While this post will just be a quick intro to how temporal tables work, in the future I’ll be giving a brief intro on getting set up with Entity Framework too!

Getting Started

When creating a new table, it’s almost trivial to add in temporal tables. If I was to create a Person table with two columns, a first and last name, it would look something like so :

CREATE TABLE Person
(
    Id INT IDENTITY(1,1) PRIMARY KEY,
    FirstName NVARCHAR(250) NOT NULL,
    LastName NVARCHAR(250) NOT NULL,
    -- The below is how we turn on Temporal. 
    [ValidFrom] datetime2 (0) GENERATED ALWAYS AS ROW START NOT NULL,
    [ValidTo] datetime2 (0) GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PersonHistory))

There are a couple of things to note here. The first is the last three lines inside the CREATE TABLE statement. We need to add the ValidFrom and ValidTo columns and the PERIOD line for everything to work nicely.

Second, it’s very important to note the HISTORY_TABLE statement. When I first started with temporal tables I assumed that there would be a naming convention along the lines of {{TableName}}History. But in fact, if you don’t specify what the history table should be called, you just end up with a semi-randomly generated name that doesn’t look great.

With this statement run, we end up with a table within a table when looking via SQL Management Studio. Something like so :

I will note that you can turn on temporal tables on an existing table too with an ALTER TABLE statement which is great for projects already on the go.
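As a rough sketch (assuming an existing Person table that doesn’t have the period columns yet — the constraint names and the UTC default are my own choices), that alter looks something like :

```sql
-- Add the period columns, with defaults so existing rows get values.
ALTER TABLE Person
ADD ValidFrom DATETIME2(0) GENERATED ALWAYS AS ROW START NOT NULL
        CONSTRAINT DF_Person_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo DATETIME2(0) GENERATED ALWAYS AS ROW END NOT NULL
        CONSTRAINT DF_Person_ValidTo DEFAULT CONVERT(DATETIME2(0), '9999-12-31 23:59:59'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

-- Then switch system versioning on, naming the history table explicitly.
ALTER TABLE Person
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PersonHistory));
```

The important parts are the two GENERATED ALWAYS columns, the PERIOD, and the SET (SYSTEM_VERSIONING = ON) step.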

But here’s the most amazing part about all of this. Nothing about how you use a SQL table changes. For example, inserting a Person record is the same old insert statement as always :

INSERT INTO Person (FirstName, LastName)
VALUES ('Wade', 'Smith')

Our SQL statements for the most part don’t even have to know this is a temporal table at all. And that’s important because if we have an existing project, we aren’t going to run into consistency issues when trying to turn temporal tables on.

With the above insert statement, we end up with a record that looks like so :

The ValidFrom is the datetime we inserted, and obviously the ValidTo is set to maximum because for this particular record, it is valid for all of time (That will become important shortly).

Our PersonHistory table at this point is still empty. But let’s change that! Let’s do an Update statement like so :

UPDATE Person
SET LastName = 'G'
WHERE FirstName = 'Wade'

If we check our Person table, it looks largely the same as before, our ValidFrom date has shifted forward and Wade’s last name is G. But if we check our PersonHistory table :

We now have a record in here that tells us that between our two datetimes, the record with ID 1 had a last name of Smith.

Again, our calling code that updates our Person record doesn’t even have to know that temporal tables are turned on, and everything just works like clockwork, encapsulated within SQL Server itself.

Modifying Table Structure

I wanted to point out another real convenience with temporal tables that you might not get if you decided to roll your own history table. After a table creation, what happens if you wanted to add a column to the table?

For example, let’s take our Person table and add a DateOfBirth column.
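The alter itself is just a plain column addition (the nullable DATETIME2 type is my assumption, so that existing rows stay valid) :

```sql
ALTER TABLE Person
ADD DateOfBirth DATETIME2 NULL;
```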


You’ll notice that I am only altering the Person table, and not touching the PersonHistory table. That’s because temporal tables automatically apply ALTER TABLE statements to the underlying history table as well. So if I run the above, my history table also receives the update :

This is a huge feature because it means your two tables never get out of sync, and yet, it’s all abstracted away for you and you’ll never have to think about it!

Querying Temporal Tables

Of course, what happens if we actually want to query the history of our Person record? If we were rolling our own, we might have to do a union of our current Person table, and our PersonHistory table. But with temporal tables, it’s a single select statement and SQL Server will work out under the hood which table the data should come from.

Confused? Maybe these examples will help :

SELECT *
FROM Person 
FOR SYSTEM_TIME AS OF '2021-12-10 23:19:25'
WHERE Id = 1

I run the above statement to ask for the state of the Person record, with Id 1, at exactly a particular time. The code executes, and in my case, it pulls the record from the History table.

But let’s say I run it again with a different time :

SELECT *
FROM Person 
FOR SYSTEM_TIME AS OF '2022-01-01'
WHERE Id = 1

Here I’ve used a date in the future, just to illustrate a point, but in this case I know it will pull the record from the Person table because it will be the very latest.

What I’m trying to demonstrate is that there is no switching between tables to try and work out which version was correct at the right time. SQL Server does it all for you!

Better yet, you’ll probably end up showing an audit history page somewhere on your web app if using temporal tables. For that we can use the BETWEEN statement like so :

SELECT *
FROM Person 
FOR SYSTEM_TIME BETWEEN '2021-01-01' AND '2022-01-01'
WHERE Id = 1

This then fetches all audit history *and* the current record if applicable between those time periods. Again, all hidden away under the hood for you and exposed as a very simple SYSTEM_TIME query statement.
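And if you want the full history *plus* the current row without working out a date range at all, there is also an ALL sub-clause :

```sql
SELECT *
FROM Person
FOR SYSTEM_TIME ALL
WHERE Id = 1
ORDER BY ValidFrom
```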

Size Considerations

While all of this sounds amazing, there is one little caveat to a lot of this. And that’s data size footprint.

In general, you’ll have to think about how much data you are storing if your system generates many updates across a table. Due to the nature of temporal tables storing a copy of the data, many small updates could explode the size of your database. However, in a somewhat ironic twist, tables that receive many updates may be good candidates for temporal tables anyway, for auditing history.

Another thing to think about is use of blob data types (text, nvarchar(max)), and even things such as nvarchar vs varchar. Considerations around these data types upfront could save a lot of data space in the long run when it’s duplicated across many historic rows.

There is no one-size-fits-all approach, but it is something to keep in mind!
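One mitigation worth a mention : on Azure SQL and SQL Server 2017 onwards, you can put a retention policy on the history table so old versions are cleaned up automatically. A sketch, assuming the Person table from earlier :

```sql
-- Keep only the last 6 months of history; older rows are cleaned up in the background.
ALTER TABLE Person
SET (SYSTEM_VERSIONING = ON (HISTORY_RETENTION_PERIOD = 6 MONTHS));
```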

Temporal Tables vs Event Sourcing

Let’s just get this out of the way : temporal tables and event sourcing are not drop-in replacements for each other, nor are they really competing technologies.

A temporal table is somewhat rudimentary. It takes a copy of your data and stores it elsewhere on every update/delete operation. If we ask for a row at a specific point in time, we will receive what the data looked like at that point. And if we give a timeframe, we will be returned several copies of that data.

Event sourcing is more of a series of deltas that describe how the data was changed. The hint is in the name (event), and it functions much the same as receiving events on a queue. Given a point in time, event sourcing can recreate the data by applying deltas up to that point, and given a timeframe, instead of receiving several copies of the data, we instead receive the deltas that were applied.

I think temporal tables work best when a simple copy of the data will do. For pure viewing purposes, maybe as a data administrator looking at how data looked at a certain point in time for application debugging and the like. Whereas event sourcing really is about structuring your application in an event-driven way. It’s not a simple “switch” that you flick on to suddenly make your application work via event sourcing.

Temporal Tables vs Roll Your Own

Of course, history tables are a pretty common practice already. So why use Temporal Tables if you’ve already got your own framework set up?

I think it really comes down to ease of use and a real “switch and forget” mentality with temporal tables. Your application logic does not have to change at all, nor do you have to deal with messy triggers. It’s almost an audit-in-a-box solution with very little overhead to set up and maintain. If you are thinking of adding an audit trail/historic log to an application, temporal tables will likely be the solution 99% of the time.

Entity Framework Support

As mentioned earlier, EF Core 6.0 shipped with temporal table support. That includes code first migrations for turning on temporal tables, and LINQ query extensions to make querying temporal tables a breeze. In the next post, we’ll dive head first into how that works!

Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this weeks blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

I’ve run into this issue not only when migrating legacy projects to use async/await in C# .NET, but even just day to day on greenfield projects. The issue I’m talking about involves code that looks like so :

static async Task Main(string[] args)
{
    MyAsyncMethod(); // Oops I forgot to await this!
}

static async Task MyAsyncMethod()
{
    await Task.Yield();
}

It can actually be much harder to diagnose than you may think. Due to the way async/await works in C#, an un-awaited method may not *always* cause a problem. If the async method happens to complete before the caller finishes, then your code will actually work much the same as you expect. I have had this happen often in development scenarios, only for things to break in test. And the excuse of “but it worked on my machine” just doesn’t cut it anymore!

In recent versions of .NET and Visual Studio, there is now a warning that will show to tell you your async method is not awaited. It gives off the trademark green squiggle :

And you’ll receive a build warning with the text :

CS4014 Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the ‘await’ operator to the result of the call.

The problem with this is that the warning isn’t always immediately noticeable. On top of this, a junior developer may not take heed of the warning anyway.

What I prefer to do is add a property to my csproj that looks like so :

<PropertyGroup>
    <WarningsAsErrors>CS4014</WarningsAsErrors>
</PropertyGroup>
This means that every call to an async method that is not awaited will actually stop the build entirely.

Disabling Errors By Line

But what if it’s one of those rare times you actually do want to fire and forget (typically in desktop or console applications), but now you’ve just set everything up to blow up? Worse still, the error will show if you are inside an async method calling a method that returns a Task, even if the called method is not itself async.

But we can disable this on a line by line basis like so :

static async Task Main(string[] args)
{
    #pragma warning disable CS4014 
    MyAsyncMethod(); // I don't want to await this for whatever reason, it's not even async!
    #pragma warning restore CS4014
}

static Task MyAsyncMethod()
{
    return Task.CompletedTask;
}

Non-Awaited Tasks With Results

Finally, the one thing I have not found a way around is like so :

static async Task Main(string[] args)
{
    var result = MyAsyncMethodWithResult();
    var newResult = result + 10; // Error because result is actually a Task<int>, not an integer.
}

static async Task<int> MyAsyncMethodWithResult()
{
    await Task.Yield();
    return 0;
}

This code will actually blow up. The reason being that we expect the value of result to be an integer, but in this case, because we did not await the method, it’s a Task<int>. But what if we pass the result to a method that doesn’t care about the type, like so :

static async Task Main(string[] args)
{
    var result = MyAsyncMethodWithResult();
    DoSomethingWithAnObject(result);
}

static async Task<int> MyAsyncMethodWithResult()
{
    await Task.Yield();
    return 0;
}

static void DoSomethingWithAnObject(object myObj) { }

This will not cause any compiler warnings or errors (But it will cause runtime errors depending on what DoSomethingWithAnObject does with the value).

Essentially, I found that the warning/error for non-awaited tasks is not shown if you assign the value to a variable. This is even the case with Tasks that don’t return a result, like so :

static async Task Main(string[] args)
{
    var result = MyAsyncMethod(); // No error
}

static async Task MyAsyncMethod()
{
    await Task.Yield();
}

I have searched high and low for a solution for this but most of the time it leads me to stack overflow answers that go along the lines of “Well, if you assigned the value you MIGHT actually want the Task as a fire and forget”. Which I agree with, but 9 times out of 10, is not going to be the case.

That being said, turning the compiler warnings to errors will catch most of the errors in your code, and the type check system should catch 99% of the rest. For everything else… “Well it worked on my machine”.


I was recently helping another developer understand the various “OnDelete” behaviors of Entity Framework Core. That is, when a parent entity in a parent/child relationship is deleted, what should happen to the child?

I thought this was actually all fairly straight forward. The way I understood things was :

DeleteBehavior.Cascade – Delete the child when the parent is deleted (e.g. Cascading deletes)
DeleteBehavior.SetNull – Set the FK on the child to just be null (So allow orphans)
DeleteBehavior.Restrict – Don’t allow the parent to be deleted at all

I’m pretty sure if I asked 100 .NET developers what these meant, there is a fairly high chance that all of them would answer the same way. But in reality, DeleteBehavior.Restrict is actually dependent on what you’ve done in that DbContext up until the delete… Let me explain.

Setting Up

Let’s imagine that I have two models in my database, they look like so :

class BlogPost
{
    public int Id { get; set; }
    public string PostName { get; set; }
    public ICollection<BlogImage> BlogImages { get; set; }
}

class BlogImage
{
    public int Id { get; set; }
    public int? BlogPostId { get; set; }
    public BlogPost? BlogPost { get; set; }
    public string ImageUrl { get; set; }
}

Then imagine the relationship in EF Core is set up like so :

modelBuilder.Entity<BlogImage>()
    .HasOne(x => x.BlogPost)
    .WithMany(x => x.BlogImages)
    .OnDelete(DeleteBehavior.Restrict);

Any developer looking at this at first glance would say that if I delete a blog post which has images pointing to it, EF should stop me from deleting the blog post itself. But is that true?

Testing It Out

Let’s imagine I have a simple set of code that looks like so :

var context = new MyContext();

var blogPost = new BlogPost
{
    PostName = "Post 1", 
    BlogImages = new List<BlogImage>
    {
        new BlogImage
        {
            ImageUrl = "/foo.png"
        }
    }
};

context.Add(blogPost);
context.SaveChanges();
Console.WriteLine("Blog Post Added");

var getBlogPost = context.Find<BlogPost>(blogPost.Id);
context.Remove(getBlogPost);
context.SaveChanges(); //Does this error here? We are deleting the blog post that has images
Console.WriteLine("Blog Post Removed");

Do I receive an exception? The answer is.. No. When this code is run, and I check the database I end up with a BlogImage that looks like so :

So instead of restricting the delete, EF Core has gone ahead and set the BlogPostId to be null, and essentially given me an orphaned record. But why?!

Diving headfirst into the documentation we can see that DeleteBehavior.Restrict has the following description :

For entities being tracked by the DbContext, the values of foreign key properties in dependent entities are set to null when the related principal is deleted. This helps keep the graph of entities in a consistent state while they are being tracked, such that a fully consistent graph can then be written to the database. If a property cannot be set to null because it is not a nullable type, then an exception will be thrown when SaveChanges() is called.

Emphasis mine.

This doesn’t really make that much sense IMO. But I wanted to test it out further. So I used the following test script, which is exactly the same as before, except halfway through I recreate the DbContext. Given the documentation, the entity I pull back for deletion will not have its blog images being tracked.

And sure enough given this code :

var context = new MyContext();

var blogPost = new BlogPost
{
    PostName = "Post 1", 
    BlogImages = new List<BlogImage>
    {
        new BlogImage
        {
            ImageUrl = "/foo.png"
        }
    }
};

context.Add(blogPost);
context.SaveChanges();
Console.WriteLine("Blog Post Added");

context = new MyContext(); // <-- Create a NEW DB context

var getBlogPost = context.Find<BlogPost>(blogPost.Id);
context.Remove(getBlogPost);
context.SaveChanges();
Console.WriteLine("Blog Post Removed");

I *do* get the exception I was expecting all along :

SqlException: The DELETE statement conflicted with the REFERENCE constraint “FK_BlogImages_BlogPosts_BlogPostId”.

Even as I write this, I’m struggling to understand the logic here. If by some chance you’ve already loaded the child entity (by accident or not), your delete restriction suddenly behaves completely differently. That doesn’t make sense to me.

I’m sure some of you are ready to jump through your screens and tell me that this sort of ambiguity is because I am using a nullable FK on my BlogImage type. Which is true, and does mean that I expect that a BlogImage entity *can* be an orphan. If I set this to be a non-nullable key, then I will always get an exception because it cannot set the FK to null. However, the point I’m trying to make is that if I have a nullable key, but I set the delete behavior to restrict, I should still see some sort of consistent behavior.

What About DeleteBehavior.SetNull?

Another interesting thing to note is that the documentation for DeleteBehavior.SetNull is actually identical to that of Restrict :

For entities being tracked by the DbContext, the values of foreign key properties in dependent entities are set to null when the related principal is deleted. This helps keep the graph of entities in a consistent state while they are being tracked, such that a fully consistent graph can then be written to the database. If a property cannot be set to null because it is not a nullable type, then an exception will be thrown when SaveChanges() is called.

And yet, in my testing, using SetNull does not depend on which entities are being tracked by the DbContext, and works the same every time (although I did consider that this may be the database applying the null via the foreign key constraint rather than EF Core doing the leg work).

I actually spent a long time using Google-Fu to try and find anyone talking about the differences between SetNull and Restrict, but many just go along with what I described in the intro : SetNull sets the foreign key to null, and Restrict always stops you from deleting.


Maybe I’m in the minority here, or maybe there is a really good reason for the restrict behavior acting as it does, but I really do think that for the majority of developers, when they use DeleteBehavior.Restrict, they are expecting the parent to be blocked from being deleted in any and all circumstances. I don’t think anyone expects an accidental load of an entity into the DbContext to suddenly change the behavior. Am I alone in that?


I opened an issue on Github asking if all of the above is intended behavior :

It’s early days yet but the response is :

EF performs “fixup” to keep the graph of tracked entities consistent when operations are performed on those entities. This includes nulling nullable foreign key properties when the principal that they reference is marked as Deleted. [..]

It is uncommon, but if you don’t want EF to do this fixup to dependent entities when a principal is deleted, then you can set DeleteBehavior.ClientNoAction. Making this change in the code you posted above will result in the database throwing with the message above in both cases, since an attempt is made to delete a principal while a foreign key constraint is still referencing it.

Further on, this is explained more :

Setting Restrict or NoAction in EF Core tells EF Core that the database foreign key constraint is configured this way, and, when using migrations, causes the database foreign key constraint to be created in this way. What it doesn’t do is change the fixup behavior of EF Core; that is, what EF does to keep entities in sync when the graph of tracked entities is changed. This fixup behavior has been the same since legacy EF was released in 2008. For most, it is a major advantage of using an OR/M.

Starting with EF Core, we do allow you to disable this fixup when deleting principal entities by specifying ClientNoAction. The “client” here refers to what EF is doing to tracked entities on the client, as opposed to the behavior of the foreign key constraint in the database. But it is uncommon to do this; most of the time the fixup behavior helps keep changes to entities in sync.

This actually does make a little bit of sense. The “fixup” is disconnected from what is happening in the database. Do I think it’s “intuitive”? Absolutely not. But at least we have some reasoning for the way it is.
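To make that concrete, my reading of the advice is that the configuration would change to something like the below (a sketch based on the quoted response, not code from the issue) :

```csharp
modelBuilder.Entity<BlogImage>()
    .HasOne(x => x.BlogPost)
    .WithMany(x => x.BlogImages)
    .OnDelete(DeleteBehavior.ClientNoAction); // No client-side fixup; the database FK throws in both cases
```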


A big reason people who develop in .NET languages rave about Visual Studio being the number one IDE is its auto-complete and IntelliSense features. Being able to see what methods/properties are available on a class, or scrolling through overloads of a particular method, is invaluable. While more lightweight IDEs like VS Code are blazingly fast.. I usually end up spending a couple of hours setting up extensions to have it function more like Visual Studio anyway!

That being said, when I made the switch to Visual Studio 2022, there was something off, but I couldn’t quite put my finger on it. I actually switched back to Visual Studio 2019 a couple of times because I felt more “productive”. I couldn’t quite place it until today.

What I saw was this :

Notice that IntelliSense has been extended to also predict entire lines, not just the completion of the method/type/class I am currently typing. At first this felt amazing, but then I started realizing why this was frustrating to use.

  1. The constant flashing of the entire line subconsciously makes me stop and read what it’s suggesting to see if I’ll use it. Maybe this is just something I would get used to, but I noticed myself repeatedly losing my flow or train of thought to read the suggestions. Now that may not be that bad until you realize…
  2. The suggestions are often completely nonsensical when working on business software. Take the above suggestion : there is no type called “Category”, so it’s actually suggesting something that, should I accept it, will break anyway.
  3. Even if I don’t accept the suggestions, my brain subconsciously starts typing what they suggest, and I therefore end up with broken code regardless.
  4. And all of the above is made even worse because the suggestions completely flip non-stop. In a single line, even at times following its lead, I get suggested no less than 4 different types.

Here’s a gif of what I’m talking about with all 4 of the issues present.


Now maybe I’ll get used to the feature, but until then, I’m going to turn it all off. So if you are like me and want the same level of IntelliSense that Visual Studio 2019 had, you need to go :

Tools -> Options -> IntelliCode (Not IntelliSense!)

Then disable the following :

  • Show completions for whole lines of code
  • Show completions on new lines

After disabling these, restart Visual Studio and you should be good to go!

Again, this only affects the auto-complete suggestions for complete lines. It doesn’t affect completing the type/method, or showing you a method summary etc.


Sorry for an absolute mouthful of a post title. I couldn’t find any better way to describe it! In August 2021, Github removed support for password authentication with GIT repositories. What that essentially means is that if you were previously using your actual Github username/password combination with GIT (both private and public repositories), you’re probably going to see the following error :

remote: Support for password authentication was removed on August 13, 2021. Please use a personal access token instead.
remote: Please see for more information.
fatal: unable to access “…” : The requested URL returned error: 403

The above error message links to a post that goes into some detail about why this change has been made. But in short, using access tokens instead of your actual Github password benefits you because :

  • The token is unique and is not re-used across websites
  • The token can be generated per device/per use
  • The token can be revoked at any point in time if leaked, and will only affect those using that specific token instead of the entire Github account
  • The scope of the token can be limited to only allow certain actions (e.g. Only allow code commit but not edit the user account)
  • The token itself is random and isn’t subject to things like dictionary attacks

All sounds pretty good, right? Now while the error message is surprisingly helpful, it doesn’t actually go into details on how to switch to using personal access tokens. So hopefully this should go some way to helping!

Generating A Personal Access Token On Github

This is the easy part. Simply go to the following URL : and hit Generate New Token. You’ll be asked which permissions you want to give your new token. The main permissions are going to be around repository actions, like so :

The expiration is up to you, however a short duration means that you’ll have to run through this process again when the token expires.

Hitting generate at the bottom of the page will generate your token and show it to you once. You cannot view this token again so be ready to use it! And that’s it, you have your new personal access token! But.. Where to stick it?

Removing Old GIT Passwords

This is the part that took me forever. While I had the new access token, my GIT client of choice (SourceTree) never prompted me to enter it. This is where things go slightly haywire. I’m going to give some hints where to go, and what I did for Sourcetree on Windows, but you’ll need to vary your instructions depending on which client and OS you are using.

The first place to check on Windows is the Credential Manager. Simply type Credential Manager into your start bar, open the Credential Manager, then switch to “Windows Credentials” like so :

You’ll be shown a set of credentials in this list that have been saved. Your GIT credentials may be in this list. If they are, simply delete them, then continue on. If not then we need to delve into how our specific GIT client actually stored passwords.

For Sourcetree that means going to the following folder on Windows : C:\Users\<username>\AppData\Local\Atlassian\SourceTree, and finding a file simply titled “passwd”. Open this, find your GIT credentials and delete them.

Again, your mileage is always going to vary on this step. The main point is that you need to find your credential cache for your GIT client, and delete your old credentials. That’s it!

Entering Your Access Token

In your GIT client, simply pull/push your code and you should be prompted to enter your new credentials because, with the last step, we just deleted the stored credentials we had previously.

Simply enter your Github username with your personal access token in place of your password. That’s it! Your access token essentially functions like your password in terms of what your GIT client thinks it’s doing, so it’s nice and easy!


In .NET Core 3.0, we were introduced to the concept of publishing an application as a single exe file and around that time, I wrote a bit of a guide on it here.

But there were a couple of things in that release that didn’t sit well with people. The main issues were :

  • The single file exe was actually a self extracting zip that unzipped to a temp location and then executed. This at times created issues in terms of security or on locked down machines.
  • The file size was astronomical (70MB for a Hello World), although I was at pains to say that this includes the entire .NET Core runtime so the target machine didn’t need any pre-requisites to be installed.

Since then, there have been some incremental improvements, but I want to talk about where we are at today when it comes to single file apps in .NET 6.

Single File Publish Basics

I’ve created a console application that does nothing but :

using System;
Console.WriteLine("Hello, World!");

Other than this, it is a completely empty project created in Visual Studio.

To publish this as a single executable, I can open a terminal in my project folder and run :

dotnet publish -p:PublishSingleFile=true -r win-x64 -c Release --self-contained true

I’ll note that when you are publishing a single file you *must* include the target OS type as the exe is bundled specifically for that OS.

If I then open the folder at C:\MyProject\bin\Release\net6.0\win-x64\publish, I should now see a single EXE file ready to go.

Of course, this is a pretty dang large file, still clocking in at 60MB (although down slightly from my last attempt, which was over 70MB!).

But to show you that this is hardly .NET’s fault, I’m going to run the following :

dotnet publish -p:PublishSingleFile=true -r win-x64 -c Release --self-contained false

Notice that I set --self-contained false to not bundle the entire .NET runtime with my application. This does mean that the runtime needs to be installed on the target machine, however. Executing this, I can now get down to a puny 150KB.

It’s a trade off for sure. I remember selling C# utilities 15 odd years ago, and it was a bit of a hassle making sure that everyone had the .NET Framework runtime installed. So for me, size isn’t that much of a tradeoff to not have to deal with any runtime problems (Even different versions of the runtime).

I’ll also note that these publish flags can be added to the csproj file instead, but I personally had issues debugging my application with many of them on. And to be fair, how I publish an application is maybe not something I want baked into the project itself. But investigate it more if that’s your thing.
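For reference, my understanding is that the csproj equivalent of the flags above would look something like this :

```xml
<PropertyGroup>
  <PublishSingleFile>true</PublishSingleFile>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <SelfContained>true</SelfContained>
</PropertyGroup>
```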

Self Extracting Zip No More

As mentioned earlier, a big complaint of the old “Single EXE” type deployments from .NET Core was that it was nothing more than a zip file with everything in it. Still nice but.. Maybe not as magic as first thought. In .NET 6, for the most part, this has been changed to a true single file experience where everything is loaded into memory, rather than extracted into temporary folders.

I’ll note that there are flags for you to use the legacy experience of doing a self extract but.. Why would you?

IL Trimming

Much like self contained deployments in .NET Core 3, .NET 6 has the ability to trim unneeded dependencies from your application. By default, when you publish a self contained application you get everything and the kitchen sink. But by using .NET’s “trimming” functionality, you can remove dependencies from the runtime that you aren’t actually using.

This can lead to unintended consequences though as it’s not always possible for the compiler to know which dependencies you are and aren’t using (For example if you are using reflection). The process in .NET 6 is much the same as it has always been so if you want more information, feel free to go back to this previous article to learn more :
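To make the reflection caveat concrete, here’s a hedged sketch of the kind of code the trimmer can’t reason about. The type name `MyApp.OnlyUsedViaReflection` is hypothetical; the point is that a type referenced only via a string can be removed by the trimmer because static analysis never sees a direct reference to it:

```csharp
using System;

class Program
{
    static void Main()
    {
        // The trimmer performs static analysis, so a type referenced only
        // by its string name can be trimmed away and resolve to null at runtime.
        var type = Type.GetType("MyApp.OnlyUsedViaReflection");
        Console.WriteLine(type == null ? "Type was trimmed (or never existed)" : "Type found");
    }
}
```

In a trimmed publish, code like this fails silently at runtime rather than at compile time, which is exactly why trimming needs testing before you ship it.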

That being said, IL trimming is said to have improved in this latest version of .NET, so let’s give it a spin!

dotnet publish -p:PublishSingleFile=true -r win-x64 -c Release --self-contained true -p:PublishTrimmed=true

I want to note that in .NET Core 3, using this flag we went down to about 20MB in size. This time around?

We are now down to 10MB. Again, I want to point out that this is a self contained application with no runtime necessary.

Enabling Compression

Let’s say that we still want to use self contained deployments, but we are aiming for an even smaller file size. Well, starting with .NET 6, we now have the ability to enable compression on our single file apps to squeeze out even more disk space.

Let’s just compress the single file app *without* trimming. So we run the following :

dotnet publish -p:PublishSingleFile=true -r win-x64 -c Release --self-contained true -p:EnableCompressionInSingleFile=true

So we were starting with around 60MB file size and we are now at :

A tidy 30MB savings. Really not that bad!

Compressing your single file app does come with a startup cost. Your application may be slower to load so again, a small tradeoff.

Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this weeks blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

.NET 6 has reached General Availability as of today! That means what’s in is in, and what’s out is out. You can go grab the latest SDK/Runtime from right here :

The official release post is available here :

But of course, I’ll jot down some of the more important cliff notes!

  • .NET 6 also ships alongside C# 10, which includes a myriad of changes, many of which are outlined in a previous post here :
  • Visual Studio 2022 has also been released and is available here :
  • ASP.NET Core also gets a new release with its own announcement here :
  • .NET 6 is an LTS release, which means 3 years of support. This may not sound like much, but it is important. .NET Core 3.0, for example, was not LTS and was only supported for around 6 months (insane for a major version number if you ask me, but anyway…)
  • Hot Reload has made it into both Visual Studio 2022 *and* the CLI (For why the CLI is important, see here :
  • .NET MAUI *did not* make it into .NET 6 (Even though many posts reference this). MAUI stands for “Multi Platform App UI”. Essentially the next cross platform UI for mobile and desktop (Xamarin style). Unfortunately I think the team bit off a little more than they could chew, so this will be coming a bit later.
  • Azure Functions (If that’s your thing) supports .NET 6 same day. This is actually huge because Azure Functions did not support .NET 5 (Ugh, don’t ask!).

All in all, a pretty solid release going forward. .NET Releases for a while have had little tag lines, and .NET 6 has “The Fastest .NET Yet”. I would say that that’s been an ongoing trend from the .NET team where each release has gotten little tune ups along the way. I’m going to be writing a bit more in the future about some of the lesser known features (For example, Single File Apps got a massive upgrade recently!), but until then dive in and have a peek yourself!


On the 29th of June 2021, Github Copilot was released to much fanfare. In the many gifs/videos/screenshots from developers across the world, it showed essentially an intellisense on steroids type experience where entire methods could be auto typed for you. It was pretty amazing but I did wonder how much of it was very very specific examples where it shined, and how much it could actually do to help day to day programming.

So off I went to download the extension and….

Access to GitHub Copilot is limited to a small group of testers during the technical preview of GitHub Copilot. If you don’t have access to the technical preview, you will see an error when you try to use this extension.

Boo! But that’s OK. I thought maybe they would open things up fairly quickly. So I signed up at

Now some 4 months later, I’m finally in! Dreams can come true! If you haven’t yet, sign up to the technical preview and cross your fingers for access soon. But for now you will have to settle for this little write up here!

What do I want to test? Well of the many examples I saw of Github Copilot, I can’t recall any being of C# .NET. There may be many reasons for that, but it does interest me because I think over the course of years, the style in which we develop C# has heavily changed. Just as an example, prior to .NET 4.5, we didn’t have the async/await keywords. But it would almost be impossible to find C# code written today that didn’t use asynchronous programming in one way or another. But if Github Copilot is using the full gamut of C# code written in the past 20 years, it may well fall into bad habits of yesteryear.

Let’s try it out anyway!

Add Two Numbers Together

My first attempt at using Copilot was a simple test. Adding two numbers together. Here’s how Copilot shows you what it thinks you want to do :
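The screenshots from my session aren’t reproduced here, but the completion was along these lines (my reconstruction, not Copilot’s verbatim output):

```csharp
using System;

class Program
{
    // From just the signature, Copilot suggested the obvious body
    static int Add(int a, int b)
    {
        return a + b;
    }

    static void Main()
    {
        Console.WriteLine(Add(2, 3));
    }
}
```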

You can then press tab to accept the intellisense, much like you normally would in Visual Studio. There can be a bit of a lag between typing and the suggestions showing. It doesn’t “feel” like intellisense like you would with Visual Studio because of this. But it’s still responsive if you want to stop for a second or two and see what it thinks you should do.

Fetch Web Page Contents

Next, I wanted to see if I could trip it up a little by using an async method, but the actual logic was simple. All I wanted to do was to fetch a webpage, and return the string contents. Here’s what it gave me :
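Again reconstructed from memory rather than pasted verbatim, the suggestion was essentially this shape:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Downloader
{
    // Copilot's suggestion boiled down to awaiting the string content directly
    static async Task<string> GetWebPageContents(string url)
    {
        using var client = new HttpClient();
        return await client.GetStringAsync(url);
    }

    static void Main()
    {
        // No network call is made here; the method above is the point of the sketch
        Console.WriteLine("compiled");
    }
}
```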

Now again, not rocket science, but pretty impressive. It knew that the method itself was async (So it could use await), and that we were using a string so it should return the entire contents.

Let’s say I wanted to get fancy though. Instead of returning the string contents of the page, I want to use HtmlAgilityPack (A common HTML parser) to return the parsed HtmlDocument. Now the thing here is, I 100% thought it was going to load the webpage as normal, and then try and parse it into a document. But instead :

Apparently HtmlAgilityPack has a class called “HtmlWeb” that does this for you. I never would have even known this without Github Copilot. This impressed the bejesus out of me because while yes, it’s a pretty common task, it’s also a third party library that I thought for sure would trip it up.

Code From Comments

One of the most impressive things I saw in Copilot demos was the ability to write a comment, and have it understand what code to write based on that comment alone. Again, this is something that I thought for sure would trip it up but instead :

OK, a pretty simple example but it completely blew my mind that from the comment alone, it knew what to do.

So instead, I tried to break it. I started with the following code. It doesn’t give too much away (I think).

static async Task Main(string[] args)
{
  var searchQuery = args[0];
  // Go to google and use the search query to find the first result
}

And this time, Copilot fed me line by line to end up like so :

static async Task Main(string[] args)
{
  var searchQuery = args[0];
  // Go to google and use the search query to find the first result
  var url = $"{searchQuery}";
  var client = new HttpClient();
  var html = await client.GetStringAsync(url);
  // Find the first result
  var startIndex = html.IndexOf("<h3 class=\"r\">");
  var endIndex = html.IndexOf("</h3>", startIndex);
  var result = html.Substring(startIndex, endIndex - startIndex);
  // Print the result
}
Again, all I did was accept every suggestion by pressing Tab over and over; I did not modify this code at all. And it’s actually pretty good! There are probably better ways to do this with Regex and/or HtmlAgilityPack, but this is actually pretty nice.

Algorithm Code

This is where I think I’ve seen Copilot shine the most. If you know “I need the algorithm that does XYZ”, Copilot is pretty dang efficient in whipping those up for you.

Here it is writing a quick Bubblesort for you based off nothing but the method name and input :
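The screenshot isn’t reproduced here, but the suggested method was the classic in-place bubble sort, roughly as follows (my reconstruction):

```csharp
using System;

class Sorting
{
    // Copilot produced an in-place bubble sort from just the method name and parameter
    static int[] BubbleSort(int[] array)
    {
        for (int i = 0; i < array.Length - 1; i++)
        {
            for (int j = 0; j < array.Length - i - 1; j++)
            {
                if (array[j] > array[j + 1])
                {
                    // Swap adjacent out-of-order elements
                    (array[j], array[j + 1]) = (array[j + 1], array[j]);
                }
            }
        }
        return array;
    }

    static void Main()
    {
        Console.WriteLine(string.Join(",", BubbleSort(new[] { 5, 1, 4, 2, 8 })));
    }
}
```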


Some time back, I wrote about the knapsack algorithm. Mostly I was fascinated because I had seen the problem come up a lot in programming competitions and I always got stuck. So I took my code from there, started typing it as I did there, and…

Legitimately, I think this thing could break programming competitions at times. Imagine the common “Traveling Salesman Problem” or pathfinding as part of a competition. You could almost just write the problem as a comment and let Copilot do the rest.

Github Copilot Final Verdict

I’ll be honest, I came into this being fairly sceptical. Having never had my hands on the actual tool, I had read a lot of other news articles that were fairly polarising: either Copilot is the next great thing, or it’s just a gimmick. And I have to admit, I think Copilot really does have a place. To me, it’s more than a gimmick. You still need to be a programmer to use it, and it’s not going to write code for you, but I think some people out there are really underselling this. It’s pretty fantastic. Be sure to sign up to the technical preview to get your hands on it and try it out for yourself!


Want to learn more about Hot Reload? Check out the quick intro here :

There’s been a bit of a hoopla lately around Microsoft pulling a feature from the upcoming .NET 6 release. The feature in question is “Hot Reload”. I’m going to do a post in the future that goes more in depth as to what hot reload can and can’t do, but it’s pretty darn impressive and works in ways you actually don’t expect it to.

The cliff notes for what hot reload does: it allows you to apply code changes to a running application *without* recompiling and restarting the application. If you’ve used the dotnet watch command before, it’s similar, except that dotnet watch recompiles the entire application and restarts it. Hot reload literally applies the changes right there and then as the application is running.

To give you an idea of just how bananas this is, here’s a gif of a console application below. I have it outputting “ABC” every 1 second. I then go ahead and change the text output, hit the Hot Reload button on the toolbar, and without the application closing, my changes are applied in realtime without skipping a beat. Pretty incredible!


So, amazing feature, works well, what’s all the kerfuffle about?

It all started with the following blog post :

Most notably this quote :

With these considerations, we’ve decided that starting with the upcoming .NET 6 GA release, we will enable Hot Reload functionality only through Visual Studio 2022 so we can focus on providing the best experiences to the most users. We’ll also continue to pursue adding Hot Reload to Visual Studio for Mac in a future release. […] To clarify, we are not releasing Hot Reload as a feature of the dotnet watch tool

What Microsoft is saying here is that the hot reload functionality will be only available through Visual Studio 2022, not via Command Line, Visual Studio For Mac or VS Code. At first, this doesn’t seem like such a big deal right? A feature has been built for Visual Studio and not for anything else. It happens.

But there’s a problem. Hot reload actually does exist on the command line in preview versions of .NET 6, and a PR was merged to remove the feature. So what we’re seeing is that Microsoft actively pulled a feature to make it available only in Visual Studio, and not on any other platform.

It’s also somewhat doubly annoying because back in May, the support for hot reload via dotnet watch was actually announced itself in a blog post :

Today, we are excited to introduce you to the availability of the .NET Hot Reload experience in Visual Studio 2019 version 16.11 (Preview 1) and through the dotnet watch command-line tooling in .NET 6 (Preview 4).

Now there are GitHub issues being raised and hammered with comments, and even a PR that attempts to revert the changes, under the guise of “Well, if .NET is open source, then you have to accept this change”.

I usually avoid any sort of pitchfork display, so let’s tone down the verbiage a bit and actually just break down what’s happened :

  • In May, it was announced that dotnet watch and Visual Studio 2019 would have Hot Reload and was available now (And presumably Visual Studio 2022 would soon follow)
  • In October, it was announced that Visual Studio 2022 would be the only developer tool to receive Hot Reload, and a PR was created to remove hot reload from dotnet watch.
  • Follow up posts mention that Hot Reload itself maybe wasn’t production ready inside dotnet watch, or that Microsoft wanted to focus their efforts on hot reload being the best it could be in Visual Studio 2022, but developers feel like it’s simply a way to push people towards Visual Studio.
  • Given that Visual Studio is Windows Only, it implies that other operating systems will not receive hot reload in the near future, if at all. Especially if you do not use Visual Studio for Mac.
  • dotnet watch itself is *not* removed (I saw some people confused about this). The typical “dotnet watch run” dev flow is still applicable, it will just recompile your code instead of using hot reload.

So what’s going to happen? Microsoft will revert the decision. Almost certainly. At the very least you will see the feature being put behind an “experimental” flag on dotnet watch. A few years ago there was a similar issue on whether ASP.NET Core 2.0 should support .NET Framework as a runtime, Microsoft said no (Which I agreed with by the way!), community said yes, Microsoft said yes.

UPDATE : And a day later, Hot Reload is back in the CLI for .NET 6.

Want to learn more about Hot Reload? Check out the quick intro here :


Over the past few months, I’ve been publishing posts around new features inside .NET 6 and C# 10. I put those in two separate feature lanes, but in reality they somewhat blur together now, as a new release of .NET generally means a new release of C#, and features built inside .NET are typically built on the back of new C# 10 features.

That being said, I thought it might be worthwhile doing a recap of the features I’m most excited about. This is not an exhaustive list of every single feature we should expect come release time in November, but instead, a nice little retrospective on what’s coming, and what it means going forward as a C#/.NET Developer.

Minimal API Framework

The new Minimal API framework is in full swing, and allows you to build an API without the huge ceremony of startup files. If you liked the approach in NodeJS of “open the main.js file and go”, then you’ll like the new Minimal API framework. I highly suggest that everyone take a look at this feature because I suspect it’s going to become very very popular given the modern day love for microservices architectures.
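As a taste of how little ceremony is involved, this is roughly what an entire Minimal API application looks like, based on the .NET 6 template (it assumes the Web SDK project type; shown as a sketch rather than a full walkthrough):

```csharp
// Program.cs in its entirety: no Startup class, no controllers required.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Map a route straight to a lambda
app.MapGet("/", () => "Hello World!");

app.Run();
```

That’s the whole application: routing, hosting, and a handler in a handful of lines.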

DateOnly and TimeOnly Types

This is a biggie in my opinion. The ability to specify types as being *only* a date or *only* a time is huge. No more rinky-dink workarounds like using a DateTime with no time portion.
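A quick sketch of the new types in action (the specific dates are just examples):

```csharp
using System;

class Program
{
    static void Main()
    {
        // DateOnly and TimeOnly carry no unwanted time/date portion
        var date = new DateOnly(2021, 11, 8);
        var time = new TimeOnly(13, 30);

        Console.WriteLine(date.ToString("yyyy-MM-dd"));
        Console.WriteLine(time.ToString("HH:mm"));

        // And they can be recombined into a DateTime when needed
        Console.WriteLine(date.ToDateTime(time).ToString("yyyy-MM-dd HH:mm"));
    }
}
```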

LINQ OrDefault Enhancements

Not as great as it sounds on the tin, but being able to specify exactly what the “OrDefault” will return as a default can be handy in some cases.
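For example, the new .NET 6 overloads let you supply the fallback value yourself instead of getting 0/null back:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 3 };

        // Previously these would return default(int), i.e. 0;
        // now you can say what "default" means for your case
        Console.WriteLine(numbers.FirstOrDefault(n => n > 10, -1));
        Console.WriteLine(numbers.SingleOrDefault(n => n > 10, -1));
    }
}
```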

Implicit Using Statements

Different project types can now implicitly import using statements globally so you don’t have to. e.g. No more writing “using System;” at the top of every single file. However, this particular feature has been slightly walked back so that it isn’t turned on by default. Still interesting nonetheless.

IEnumerable Chunk

Much handier than it sounds at first glance. More sugar than anything, but the ability for the framework to handle “chunking” a collection for you will see a lot of use in the future.
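A small sketch of what Chunk gives you, with the framework handling the batching:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Chunk splits a sequence into arrays of (at most) the given size;
        // the final chunk just holds whatever is left over
        foreach (var chunk in Enumerable.Range(1, 7).Chunk(3))
        {
            Console.WriteLine(string.Join(",", chunk));
        }
    }
}
```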

SOCKS Proxy Support

Somewhat surprisingly, .NET has never supported SOCKS proxies until now. I can’t say I’ve ever run into this issue myself, but I could definitely see it being a right pain to get halfway through a project build and realize that you can’t use SOCKS. But it’s here now, at least!
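In .NET 6, HttpClient understands socks4/socks4a/socks5 proxy schemes via a plain WebProxy. A hedged sketch (the proxy address below is illustrative, and no request is actually made here):

```csharp
using System;
using System.Net;
using System.Net.Http;

class Program
{
    static void Main()
    {
        // Point the handler at a SOCKS5 proxy; previously this scheme
        // would not have been usable by HttpClient at all
        var handler = new HttpClientHandler
        {
            Proxy = new WebProxy("socks5://127.0.0.1:9050")
        };
        using var client = new HttpClient(handler);
        Console.WriteLine(client.GetType().Name);
    }
}
```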

Priority Queue

Another feature that is surprising it’s never been here till now. The ability to have a priority on queue items will be a huge help to many. This is likely to see a whole heap of use in the coming years.
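The new PriorityQueue pairs each element with a priority, and dequeues the lowest priority value first:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Lower priority values dequeue first
        var queue = new PriorityQueue<string, int>();
        queue.Enqueue("low", 3);
        queue.Enqueue("urgent", 1);
        queue.Enqueue("normal", 2);

        while (queue.Count > 0)
        {
            Console.WriteLine(queue.Dequeue());
        }
    }
}
```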


MaxBy and MinBy

How have we lived without this until now? The ability to find the “max” of a property on a complex object, but then return the complete object. Replaces the cost of doing a full order-by and then picking the first item. Very handy!
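A quick sketch of MaxBy on a complex object (the Person record here is just an example):

```csharp
using System;
using System.Linq;

class Program
{
    record Person(string Name, int Age);

    static void Main()
    {
        var people = new[]
        {
            new Person("Alice", 35),
            new Person("Bob", 42),
            new Person("Carol", 29)
        };

        // MaxBy returns the whole object, not just the max value,
        // without sorting the entire sequence first
        Console.WriteLine(people.MaxBy(p => p.Age)!.Name);
    }
}
```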

Global Using Statements

The feature that makes Implicit Using Statements possible. Essentially the ability to declare a using statement once in your project, and not have to clutter the top of every single file importing the exact same things over and over again. Will see use from day 1.
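As a sketch, the common convention is a single file of global usings (the file name GlobalUsings.cs is just a convention, not a requirement):

```csharp
// GlobalUsings.cs - these usings now apply to every file in the project
global using System;
global using System.Collections.Generic;
global using System.Linq;
```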

File Scoped Namespaces

More eye candy than anything. Being able to declare a namespace without braces serves to save you one tab to the right.
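A before/after sketch (the namespace and class names here are purely illustrative):

```csharp
// C# 10 file-scoped namespace: one line, no braces, no extra indent
// for everything below it.
namespace MyApp.Services;

public class GreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}
```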

What’s Got You Excited?

For me, I’m super pumped about the minimal API framework. The low ceremony is just awesome for quick APIs that needed to be shipped yesterday. Besides that, I think DateOnly and TimeOnly will see a tonne of use from day 1, and I imagine that new .NET developers won’t even think twice that we went 20 odd years with only DateTime.

How about you? What are you excited about?
