I’ve run into this issue not only when migrating legacy projects to use async/await in C# .NET, but even just day to day on greenfield projects. The issue I’m talking about involves code that looks like so :

static async Task Main(string[] args)
{
    MyAsyncMethod(); // Oops I forgot to await this!
}

static async Task MyAsyncMethod()
{
    await Task.Yield();
}

It can actually be much harder to diagnose than you may think. Due to the way async/await works in C#, a non-awaited method won’t *always* cause a visible failure. If the async method happens to complete before its result is needed, then your code will actually work much the same as you expect. I have had this happen often in development scenarios, only for things to break in test. And the excuse of “but it worked on my machine” just doesn’t cut it anymore!

In recent versions of .NET and Visual Studio, there is now a warning that tells you when your async method is not awaited. It shows up as the trademark green squiggle under the call, and you’ll receive a build warning with the text :

CS4014 Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the ‘await’ operator to the result of the call.

The problem with this is that the warning isn’t always immediately noticeable. On top of this, a junior developer may not take heed of the warning anyway.

What I prefer to do is add a line to my csproj that looks like so :

<PropertyGroup>
    <WarningsAsErrors>CS4014;</WarningsAsErrors>
</PropertyGroup>

This means that any call to an async method that is not awaited will actually stop the build entirely.

Disabling Errors By Line

But what if it’s one of those rare times you actually do want to fire and forget (typically in desktop or console applications), but now you’ve just set everything up to blow up? Worse still, the warning fires whenever you are inside an async method calling anything that returns a Task, even if the called method is not itself async.

But we can disable this on a line by line basis like so :

static async Task Main(string[] args)
{
    #pragma warning disable CS4014 
    MyAsyncMethod(); // I don't want to await this for whatever reason, it's not even async!
    #pragma warning restore CS4014
}

static Task MyAsyncMethod()
{
    return Task.CompletedTask;
}
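
If you find yourself reaching for that pragma often, another option is a small extension method that makes the fire and forget intent explicit, and at least observes any exception. This is a sketch of a common community pattern, not anything built into the framework :

public static class TaskExtensions
{
    // Deliberately fire and forget a task, logging any exception
    // instead of letting it go unobserved.
    public static void FireAndForget(this Task task)
    {
        task.ContinueWith(
            t => Console.Error.WriteLine(t.Exception),
            TaskContinuationOptions.OnlyOnFaulted);
    }
}

Calling MyAsyncMethod().FireAndForget() then reads as intentional, and because the returned Task is consumed, CS4014 stays quiet without any pragma.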

Non-Awaited Tasks With Results

Finally, the one thing I have not found a way around is the following :

static async Task Main(string[] args)
{
    var result = MyAsyncMethodWithResult();
    var newResult = result + 10; // Compile error: result is actually a Task<int>, not an int
}

static async Task<int> MyAsyncMethodWithResult()
{
    await Task.Yield();
    return 0;
}

This code will not even compile. We expect the value of result to be an integer, but because we did not await the method, it’s actually a Task<int>. But what if we pass the result to a method that doesn’t care about the type, like so :

static async Task Main(string[] args)
{
    var result = MyAsyncMethodWithResult();
    DoSomethingWithAnObject(result);
}

static async Task<int> MyAsyncMethodWithResult()
{
    await Task.Yield();
    return 0;
}

static void DoSomethingWithAnObject(object myObj)
{
}

This will not cause any compiler warnings or errors (But it will cause runtime errors depending on what DoSomethingWithAnObject does with the value).

Essentially, I found that the warning/error for non-awaited tasks is not shown if you assign the value to a variable. This is even the case with Tasks that don’t return a result, like so :

static async Task Main(string[] args)
{
    var result = MyAsyncMethod(); // No error
}

static async Task MyAsyncMethod()
{
    await Task.Yield();
}

I have searched high and low for a solution to this, but most of the time it leads me to Stack Overflow answers that go along the lines of “Well, if you assigned the value you MIGHT actually want the Task as a fire and forget”. Which I agree with, but 9 times out of 10 that is not going to be the case.
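
If fire and forget really is the intent, assigning the result to the C# discard at least signals that to the next reader. It’s a convention only (the compiler treats it like any other assignment), but it makes the non-await deliberate :

_ = MyAsyncMethod(); // Deliberately not awaited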

That being said, turning the compiler warnings to errors will catch most of the errors in your code, and the type check system should catch 99% of the rest. For everything else… “Well it worked on my machine”.


I was recently helping another developer understand the various “OnDelete” behaviors of Entity Framework Core. That is, when a parent entity in a parent/child relationship is deleted, what should happen to the child?

I thought this was actually all fairly straightforward. The way I understood things was :

DeleteBehavior.Cascade – Delete the child when the parent is deleted (e.g. Cascading deletes)
DeleteBehavior.SetNull – Set the FK on the child to just be null (So allow orphans)
DeleteBehavior.Restrict – Don’t allow the parent to be deleted at all

I’m pretty sure if I asked 100 .NET developers what these meant, there is a fairly high chance that all of them would answer the same way. But in reality, DeleteBehavior.Restrict is actually dependent on what you’ve done in that DbContext up until the delete… Let me explain.

Setting Up

Let’s imagine that I have two models in my database, they look like so :

class BlogPost
{
	public int Id { get; set; }
	public string PostName { get; set; }
	public ICollection<BlogImage> BlogImages { get; set; }
}

class BlogImage
{
	public int Id { get; set; }
	public int? BlogPostId { get; set; }
	public BlogPost? BlogPost { get; set; }
	public string ImageUrl { get; set; }
}

Then imagine the relationship in EF Core is set up like so :

modelBuilder.Entity<BlogImage>()
    .HasOne(x => x.BlogPost)
    .WithMany(x => x.BlogImages)
    .OnDelete(DeleteBehavior.Restrict);

Any developer looking at this at first glance would say that if I try to delete a blog post that still has images pointing to it, EF Core should stop me from deleting the blog post itself. But is that true?

Testing It Out

Let’s imagine I have a simple set of code that looks like so :

var context = new MyContext();
context.Database.Migrate();

var blogPost = new BlogPost
{
	PostName = "Post 1", 
	BlogImages = new List<BlogImage>
	{
		new BlogImage
		{
			ImageUrl = "/foo.png"
		}
	}
};

context.Add(blogPost);
context.SaveChanges();

Console.WriteLine("Blog Post Added");

var getBlogPost = context.Find<BlogPost>(blogPost.Id);
context.Remove(getBlogPost);
context.SaveChanges(); //Does this error here? We are deleting the blog post that has images

Console.WriteLine("Blog Post Removed");

Do I receive an exception? The answer is.. No. The code runs through happily, and when I check the database afterwards, my BlogImage row is still there, just with its BlogPostId wiped out.

So instead of restricting the delete, EF Core has gone ahead and set the BlogPostId to be null, and essentially given me an orphaned record. But why?!

Diving headfirst into the documentation we can see that DeleteBehavior.Restrict has the following description :

For entities *being tracked by the DbContext*, the values of foreign key properties in dependent entities are set to null when the related principal is deleted. This helps keep the graph of entities in a consistent state while they are being tracked, such that a fully consistent graph can then be written to the database. If a property cannot be set to null because it is not a nullable type, then an exception will be thrown when SaveChanges() is called.

Emphasis mine.

This doesn’t really make that much sense IMO. But I wanted to test it out further. So I used the following test script, which is exactly the same as before, except halfway through I recreate the DbContext. Given the documentation, the entity I pull back for deletion will not have its blog images being tracked.

And sure enough given this code :

var context = new MyContext();
context.Database.Migrate();

var blogPost = new BlogPost
{
	PostName = "Post 1", 
	BlogImages = new List<BlogImage>
	{
		new BlogImage
		{
			ImageUrl = "/foo.png"
		}
	}
};

context.Add(blogPost);
context.SaveChanges();

Console.WriteLine("Blog Post Added");

context = new MyContext(); // <-- Create a NEW DB context

var getBlogPost = context.Find<BlogPost>(blogPost.Id);
context.Remove(getBlogPost);
context.SaveChanges();

Console.WriteLine("Blog Post Removed");

I *do* get the exception I was expecting all along :

SqlException: The DELETE statement conflicted with the REFERENCE constraint “FK_BlogImages_BlogPosts_BlogPostId”.

Even now, writing this up, I’m struggling to understand the logic here. If by some chance you’ve already loaded the child entities (by accident or not), your delete restriction suddenly behaves completely differently. That doesn’t make sense to me.

I’m sure some of you are ready to jump through your screens and tell me that this sort of ambiguity is because I am using a nullable FK on my BlogImage type. Which is true, and does mean that I expect that a BlogImage entity *can* be an orphan. If I set this to be a non-nullable key, then I will always get an exception because it cannot set the FK to null. However, the point I’m trying to make is that if I have a nullable key, but I set the delete behavior to restrict, I should still see some sort of consistent behavior.

What About DeleteBehavior.SetNull?

Another interesting thing to note is that the documentation for DeleteBehavior.SetNull is actually identical to that of Restrict :

For entities being tracked by the DbContext, the values of foreign key properties in dependent entities are set to null when the related principal is deleted. This helps keep the graph of entities in a consistent state while they are being tracked, such that a fully consistent graph can then be written to the database. If a property cannot be set to null because it is not a nullable type, then an exception will be thrown when SaveChanges() is called.

And yet, in my testing, using SetNull does not depend on which entities are being tracked by the DbContext, and works the same every time (although I did consider that this may be the ON DELETE SET NULL constraint in SQL Server doing the legwork, rather than EF Core itself).

I actually spent a long time using Google-Fu to try and find anyone talking about the differences between SetNull and Restrict, but most just go along with what I described in the intro : SetNull nulls out the foreign key, and Restrict always stops you from deleting.

Conclusion

Maybe I’m in the minority here, or maybe there is a really good reason for the restrict behavior acting as it does, but I really do think that for the majority of developers, when they use DeleteBehavior.Restrict, they are expecting the parent to be blocked from being deleted in any and all circumstances. I don’t think anyone expects an accidental load of an entity into the DbContext to suddenly change the behavior. Am I alone in that?

Update

I opened an issue on Github asking if all of the above is intended behavior : https://github.com/dotnet/efcore/issues/26857

It’s early days yet but the response is :

EF performs “fixup” to keep the graph of tracked entities consistent when operations are performed on those entities. This includes nulling nullable foreign key properties when the principal that they reference is marked as Deleted. [..]

It is uncommon, but if you don’t want EF to do this fixup to dependent entities when a principal is deleted, then you can set DeleteBehavior.ClientNoAction. Making this change in the code you posted above will result in the database throwing with the message above in both cases, since an attempt is made to delete a principal while a foreign key constraint is still referencing it.

Further on, this is explained more :

Setting Restrict or NoAction in EF Core tells EF Core that the database foreign key constraint is configured this way, and, when using migrations, causes the database foreign key constraint to be created in this way. What it doesn’t do is change the fixup behavior of EF Core; that is, what EF does to keep entities in sync when the graph of tracked entities is changed. This fixup behavior has been the same since legacy EF was released in 2008. For most, it is a major advantage of using an OR/M.

Starting with EF Core, we do allow you to disable this fixup when deleting principal entities by specifying ClientNoAction. The “client” here refers to what EF is doing to tracked entities on the client, as opposed to the behavior of the foreign key constraint in the database. But it is uncommon to do this; most of the time the fixup behavior helps keep changes to entities in sync.

This actually does make a little bit of sense, with the “fixup” being disconnected from what is happening in the database. Do I think it’s “intuitive”? Absolutely not. But at least we have some reasoning for the way it is.
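
For completeness, switching the earlier configuration over to that behavior is a one word change (the same fluent setup as before, just with a different DeleteBehavior) :

modelBuilder.Entity<BlogImage>()
    .HasOne(x => x.BlogPost)
    .WithMany(x => x.BlogImages)
    .OnDelete(DeleteBehavior.ClientNoAction);

With this in place, going by the response above, both versions of my test code should hit the REFERENCE constraint exception, tracked child entities or not.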


Sorry for an absolute mouthful of a post title. I couldn’t find any better way to describe it! In August 2021, Github removed support for password authentication on Git repositories. What that essentially means is that if you were previously using your actual Github username/password combination with Git (for both private and public repositories), you’re probably going to see the following error :

remote: Support for password authentication was removed on August 13, 2021. Please use a personal access token instead.
remote: Please see https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/ for more information.
fatal: unable to access “…” : The requested URL returned error: 403

The above error message links to a post that goes into some detail about why this change has been made. But in short, using access tokens instead of your actual Github password benefits you because :

  • The token is unique and is not re-used across websites
  • The token can be generated per device/per use
  • The token can be revoked at any point in time if leaked, and will only affect those using that specific token instead of the entire Github account
  • The scope of the token can be limited to only allow certain actions (e.g. Only allow code commit but not edit the user account)
  • The token itself is random and isn’t subject to things like dictionary attacks

All sounds pretty good right! Now while the error message is surprisingly helpful, it doesn’t actually go into details on how to switch to using personal access tokens. So hopefully this should go some way to helping!

Generating A Personal Access Token On Github

This is the easy part. Simply go to the following URL : https://github.com/settings/tokens, and hit Generate New Token. You’ll be asked which permissions you want to give your new token. The main permissions you’ll want are the repository ones, i.e. the “repo” group of scopes.

The expiration is up to you however a short duration means that you’ll have to run through this process again when the token runs out.

Hitting generate at the bottom of the page will generate your token and show it to you once. You cannot view this token again so be ready to use it! And that’s it, you have your new personal access token! But.. Where to stick it?

Removing Old GIT Passwords

This is the part that took me forever. While I had the new access token, my Git client of choice (SourceTree) never prompted me to enter it. This is where things go slightly haywire. I’m going to give some hints on where to go, and what I did for SourceTree on Windows, but you’ll need to vary the instructions depending on which client and OS you are using.

The first place to check on Windows is the Credential Manager. Simply type Credential Manager into your start bar, open the Credential Manager, then switch to the “Windows Credentials” tab.

You’ll be shown a set of credentials that have been saved. Your Git credentials may be in this list. If they are, simply delete them, then continue on. If not, then we need to delve into how our specific Git client actually stores passwords.
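
As an aside, if you prefer the command line, the same Windows credential can usually be cleared with cmdkey. The exact entry name varies, but Git’s credential for Github is typically stored as git:https://github.com :

cmdkey /list
cmdkey /delete:git:https://github.com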

For SourceTree, that means going to the following folder on Windows : C:\Users\<username>\AppData\Local\Atlassian\SourceTree, and finding a file simply titled “passwd”. Open this, find your Git credentials and delete them.

Again, your mileage is going to vary on this step. The main point is that you need to find the credential cache for your Git client, and delete your old credentials. That’s it!

Entering Your Access Token

In your Git client, simply pull/push your code and you should be prompted to enter your new credentials, because in the last step we deleted the stored credentials we had previously.

Simply enter your Github username with your Personal Access Token in place of your password. That’s it! The access token functions exactly like your password as far as your Git client is concerned, so it’s nice and easy!


In .NET Core 3.0, we were introduced to the concept of publishing an application as a single exe file, and around that time I wrote a bit of a guide on it here.

But there were a couple of things in that release that didn’t sit well with people. The main issues were :

  • The single file exe was actually a self extracting zip that unzipped to a temp location and then executed. This at times created issues in terms of security or on locked down machines.
  • The file size was astronomical (70MB for a Hello World), although I was at pains to say that this includes the entire .NET Core runtime so the target machine didn’t need any pre-requisites to be installed.

Since then, there have been some incremental improvements, but I want to talk about where we are at today when it comes to Single File Apps in .NET 6.

Single File Publish Basics

I’ve created a console application that does nothing but :

using System;

Console.WriteLine("Hello, World!");
Console.ReadLine();

Other than this, it is a completely empty project created in Visual Studio.

To publish this as a single executable, I can open a terminal in my project folder and run :

dotnet publish -p:PublishSingleFile=true -r win-x64 -c Release --self-contained true

I’ll note that when you are publishing a single file you *must* include the target runtime (the -r flag), as the exe is bundled specifically for that OS.

If I then open the folder at C:\MyProject\bin\Release\net6.0\win-x64\publish, I should now see a single EXE file ready to go.

Of course, this is a pretty dang large file, still clocking in at 60MB (although down slightly from my last attempt, which was over 70MB!).

But to show you that this is hardly .NET’s fault, I’m going to run the following :

dotnet publish -p:PublishSingleFile=true -r win-x64 -c Release --self-contained false

Notice that I set --self-contained false to not bundle the entire .NET runtime with my application. This does mean that the runtime needs to be installed on the target machine, however. Executing this, I can now get down to a puny 150KB.

It’s a trade off for sure. I remember selling C# utilities 15-odd years ago, and it was a bit of a hassle making sure that everyone had the .NET Framework runtime installed. So for me, size isn’t that much of a tradeoff to not have to deal with any runtime problems (even different versions of the runtime).

I’ll also note that these publish flags can be added to the csproj file instead, but I personally had issues debugging my application with many of them on. And to be fair, how I publish an application arguably doesn’t belong inside the application’s project file itself. But investigate it more if that’s your thing.
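
For reference, the csproj equivalent of the flags above looks something like this (a sketch using the standard MSBuild property names) :

<PropertyGroup>
    <PublishSingleFile>true</PublishSingleFile>
    <RuntimeIdentifier>win-x64</RuntimeIdentifier>
    <SelfContained>true</SelfContained>
</PropertyGroup>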

Self Extracting Zip No More

As mentioned earlier, a big complaint of the old “Single EXE” type deployments from .NET Core was that it was nothing more than a zip file with everything in it. Still nice but.. Maybe not as magic as first thought. In .NET 6, for the most part, this has been changed to a true single file experience where everything is loaded into memory, rather than extracted into temporary folders.

I’ll note that there are flags for you to use the legacy experience of doing a self extract but.. Why would you?

IL Trimming

Much like self contained deployments in .NET Core 3, .NET 6 has the ability to trim unneeded dependencies from your application. By default, when you publish a self contained application you get everything and the kitchen sink. But by using .NET’s “trimming” functionality, you can remove dependencies from the runtime that you aren’t actually using.

This can lead to unintended consequences though as it’s not always possible for the compiler to know which dependencies you are and aren’t using (For example if you are using reflection). The process in .NET 6 is much the same as it has always been so if you want more information, feel free to go back to this previous article to learn more : https://dotnetcoretutorials.com/2019/06/27/the-publishtrimmed-flag-with-il-linker/

That being said, IL trimming is said to have improved in this latest version of .NET, so let’s give it a spin!

dotnet publish -p:PublishSingleFile=true -r win-x64 -c Release --self-contained true -p:PublishTrimmed=true

I want to note that in .NET Core 3, using this flag we went down to about 20MB in size. This time around?

We are now down to 10MB. Again, I want to point out that this is a self contained application with no runtime necessary.

Enabling Compression

Let’s say that we still want to use self contained deployments, but we are aiming for an even smaller file size. Well, starting with .NET 6, we now have the ability to enable compression on our single file apps to squeeze out even more disk space savings.

Let’s just compress the single file app *without* trimming. So we run the following :

dotnet publish -p:PublishSingleFile=true -r win-x64 -c Release --self-contained true -p:EnableCompressionInSingleFile=true

So we were starting with around a 60MB file size, and we land at roughly half that. A tidy 30MB saving. Really not that bad!

Compressing your single file app does come with a startup cost. Your application may be slower to load so again, a small tradeoff.


On the 29th of June 2021, Github Copilot was released to much fanfare. In the many gifs/videos/screenshots from developers across the world, it showed an essentially intellisense-on-steroids type experience where entire methods could be auto typed for you. It was pretty amazing, but I did wonder how much of it was very specific examples where it shined, and how much it could actually help day to day programming.

So off I went to download the extension and….

Access to GitHub Copilot is limited to a small group of testers during the technical preview of GitHub Copilot. If you don’t have access to the technical preview, you will see an error when you try to use this extension.

Boo! But that’s OK. I thought maybe they would open things up fairly quickly. So I signed up at https://copilot.github.com/

Now some 4 months later, I’m finally in! Dreams can come true! If you haven’t yet, sign up to the technical preview and cross your fingers for access soon. But for now you will have to settle for this little write up here!

What do I want to test? Well of the many examples I saw of Github Copilot, I can’t recall any being of C# .NET. There may be many reasons for that, but it does interest me because I think over the course of years, the style in which we develop C# has heavily changed. Just as an example, prior to .NET 4.5, we didn’t have the async/await keywords. But it would almost be impossible to find C# code written today that didn’t use asynchronous programming in one way or another. But if Github Copilot is using the full gamut of C# code written in the past 20 years, it may well fall into bad habits of yesteryear.

Let’s try it out anyway!

Add Two Numbers Together

My first attempt at using Copilot was a simple test : adding two numbers together. Type out the method signature, and Copilot suggests the entire method body as inline grey text, showing you what it thinks you want to do.

You can then press tab to accept the suggestion, much like you normally would with intellisense in Visual Studio. There can be a bit of a lag between typing and the suggestions showing, so it doesn’t quite “feel” like native intellisense. But it’s still responsive if you want to stop for a second or two and see what it thinks you should do.

Fetch Web Page Contents

Next, I wanted to see if I could trip it up a little by using an async method, where the actual logic was still simple. All I wanted to do was to fetch a webpage, and return the string contents.
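
What it gave me was along these lines (a sketch of the shape of the suggestion, rather than the exact output) :

static async Task<string> GetWebPageContents(string url)
{
    var client = new HttpClient();
    return await client.GetStringAsync(url);
}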

Now again, not rocket science, but pretty impressive. It knew that the method itself was async (So it could use await), and that we were using a string so it should return the entire contents.

Let’s say I wanted to get fancy though. Instead of returning the string contents of the page, I want to use HtmlAgilityPack (A common HTML parser) to return the parsed HtmlDocument. Now the thing here is, I 100% thought it was going to load the webpage as normal, and then try and parse it into a document. But instead :
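
It jumped straight to something like this (again sketched, not the verbatim suggestion) :

static HtmlDocument GetParsedWebPage(string url)
{
    // HtmlWeb and HtmlDocument come from HtmlAgilityPack
    var web = new HtmlWeb();
    return web.Load(url); // Fetches the URL and returns an already parsed HtmlDocument
}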

Apparently HtmlAgilityPack has a class called “HtmlWeb” that does this for you. I never would have even known this without Github Copilot. This impressed the bejesus out of me because while yes, it’s a pretty common task, it’s also a third party library that I thought for sure would trip it up.

Code From Comments

One of the most impressive things I saw in Copilot demos was the ability to write a comment, and have it understand what code to write based on that comment alone. Again, this is something that I thought for sure would trip it up, but it didn’t.

My first attempt was a pretty simple example, but it completely blew my mind that from the comment alone, it knew what to do.

So instead, I tried to break it. I started with the following code. It doesn’t give too much away (I think).

static async Task Main(string[] args)
{
  var searchQuery = args[0];
  // Go to google and use the search query to find the first result
}

And this time, Copilot fed me line by line to end up like so :

static async Task Main(string[] args)
{
  var searchQuery = args[0];
  // Go to google and use the search query to find the first result
  var url = $"https://www.google.com/search?q={searchQuery}";
  var client = new HttpClient();
  var html = await client.GetStringAsync(url);
  // Find the first result
  var startIndex = html.IndexOf("<h3 class=\"r\">");
  var endIndex = html.IndexOf("</h3>", startIndex);
  var result = html.Substring(startIndex, endIndex - startIndex);
  // Print the result
  Console.WriteLine(result);
}

Again, all I did was accept every suggestion by pressing Tab over and over; I did not modify this code at all. And it’s actually pretty good! There are probably better ways to do this with Regex and/or HtmlAgilityPack, but this is actually pretty nice.

Algorithm Code

This is where I think I’ve seen Copilot shine the most. If you know “I need the algorithm that does XYZ”, Copilot is pretty dang efficient in whipping those up for you.

Here it is writing a quick Bubblesort for you based off nothing but the method name and input :
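
The suggestion was, give or take, the textbook implementation (sketched here from memory rather than copied verbatim) :

static void BubbleSort(int[] input)
{
    // Repeatedly bubble larger values towards the end of the array
    for (var i = 0; i < input.Length - 1; i++)
    {
        for (var j = 0; j < input.Length - i - 1; j++)
        {
            if (input[j] > input[j + 1])
            {
                (input[j], input[j + 1]) = (input[j + 1], input[j]);
            }
        }
    }
}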

Amazing!

Some time back, I wrote about the knapsack algorithm (https://dotnetcoretutorials.com/2020/04/22/knapsack-algorithm-in-c/). Mostly I was fascinated because I had seen the problem come up a lot in programming competitions and I always got stuck. So I took my code from there, started typing it out as I had back then, and Copilot happily filled in the rest…

Legitimately, I think this thing could break programming competitions at times. Imagine the common “Traveling Salesman Problem” or pathfinding as part of a competition. You could almost just write the problem as a comment and let Copilot do the rest.

Github Copilot Final Verdict

I’ll be honest, I came into this being fairly sceptical. Having never had my hands on the actual tool, I had read a lot of other news articles that were fairly polarising : either Copilot is the next great thing, or it’s just a gimmick. And I have to admit, I think Copilot really does have a place. To me, it’s more than a gimmick. You still need to be a programmer to use it, and it’s not going to write code for you, but I think some people out there are really underselling this. It’s pretty fantastic. Be sure to sign up to the technical preview to get your hands on it and try it out for yourself!


Over the past few months, I’ve been publishing posts around new features inside .NET 6 and C# 10. I put those as two separate feature lanes, but in reality they somewhat blur together now, as a new release of .NET generally means a new release of C#. And features built inside .NET are typically built on the back of new C# 10 features.

That being said, I thought it might be worthwhile doing a recap of the features I’m most excited about. This is not an exhaustive list of every single feature we should expect come release time in November, but instead, a nice little retrospective on what’s coming, and what it means going forward as a C#/.NET Developer.

Minimal API Framework

The new Minimal API framework is in full swing, and allows you to build an API without the huge ceremony of startup files. If you liked the approach in NodeJS of “open the main.js file and go”, then you’ll like the new Minimal API framework. I highly suggest that everyone take a look at this feature because I suspect it’s going to become very very popular given the modern day love for microservices architectures.
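
To give a taste of just how low the ceremony is, an entire runnable API in .NET 6 can be as small as :

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/hello", () => "Hello World!");

app.Run();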

https://dotnetcoretutorials.com/2021/07/16/building-minimal-apis-in-net-6/

DateOnly and TimeOnly Types

This is a biggie in my opinion. The ability to now specify types as being *only* a date or *only* a time is huge. No more rinky dink coding around using a DateTime with no time portion for example.

https://dotnetcoretutorials.com/2021/09/07/dateonly-and-timeonly-types-in-net-6/

LINQ OrDefault Enhancements

Not as great as it sounds on the tin, but being able to specify what exactly the “OrDefault” will return as a default can be handy in some cases.

https://dotnetcoretutorials.com/2021/09/02/linq-ordefault-enhancements-in-net-6/

Implicit Using Statements

Different project types can now implicitly import using statements globally so you don’t have to. e.g. No more writing “using System;” at the top of every single file. However, this particular feature has been walked back slightly so that it’s not turned on by default. Still interesting nonetheless.

https://dotnetcoretutorials.com/2021/08/31/implicit-using-statements-in-net-6/

IEnumerable Chunk

Much handier than it sounds at first glance. More sugar than anything, but the ability for the framework to handle “chunking” a collection for you will see a lot of use in the future.
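
As a quick taste (Chunk hands back arrays of the size you ask for, with whatever is left over in the final chunk) :

var chunks = Enumerable.Range(1, 10).Chunk(3);
// [1,2,3], [4,5,6], [7,8,9], [10]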

https://dotnetcoretutorials.com/2021/08/12/ienumerable-chunk-in-net-6/

SOCKS Proxy Support

Somewhat surprisingly, .NET has never supported SOCKS proxies until now. I can’t say I’ve ever run into this issue myself, but I could definitely see it being a right pain when you are halfway through a project build and realize that you can’t use SOCKS. But it’s here now at least!

https://dotnetcoretutorials.com/2021/07/11/socks-proxy-support-in-net/

Priority Queue

Another feature where it’s surprising it was never here until now. The ability to give queue items a priority will be a huge help to many, and is likely to see a whole heap of use in the coming years.
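
A quick example (items dequeue in priority order, lowest priority value first) :

var queue = new PriorityQueue<string, int>();
queue.Enqueue("sendNewsletter", 3);
queue.Enqueue("processPayment", 1);
Console.WriteLine(queue.Dequeue()); // processPayment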

https://dotnetcoretutorials.com/2021/03/17/priorityqueue-in-net/

MaxBy/MinBy

How have we lived without this until now? The ability to find the “max” of a property on a complex object, but then return the complete object. It replaces the cost of doing a full order-by and then picking the first item. Very handy!

https://dotnetcoretutorials.com/2021/09/09/maxby-minby-in-net-6/

Global Using Statements

The feature that makes Implicit Using Statements possible. Essentially the ability to declare a using statement once in your project, and not have to clutter the top of every single file importing the exact same things over and over again. Will see use from day 1.
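
For example, a single file (commonly named something like GlobalUsings.cs, though the name is up to you) can hold :

global using System;
global using System.Collections.Generic;
global using System.Linq;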

https://dotnetcoretutorials.com/2021/08/19/global-using-statements-in-c10/

File Scoped Namespaces

More eye candy than anything. Being able to declare a namespace without braces serves to save you one tab to the right.

https://dotnetcoretutorials.com/2021/09/20/file-scoped-namespaces-in-c-10/

What’s Got You Excited?

For me, I’m super pumped about the minimal API framework. The low ceremony is just awesome for quick APIs that needed to be shipped yesterday. Besides that, I think DateOnly and TimeOnly will see a tonne of use from day 1, and I imagine that new .NET developers won’t even think twice that we went 20-odd years with only DateTime.

How about you? What are you excited about?


Let me just start off by saying that YAML itself is not that popular in the C#/.NET world. For a long time under .NET Framework, XML seemed to reign supreme, with things like csproj files, solution files and even msbuild configurations all being XML driven. That slowly changed to be more JSON friendly, and I think we could all agree that things like Newtonsoft.Json/JSON.NET had a huge impact on pretty much every .NET developer using JSON these days.

That being said, there are times when you need to parse YAML. Recently, on a project that involved other languages such as Go, Python and PHP, YAML was chosen as a shared configuration type between all languages. Don’t get me started on why this was the case…. But it happened. And so if you are stuck working out how to parse YAML files in C#, then this guide is for you.

Introducing YamlDotNet

In .NET, there is no support for reading or writing YAML files out of the box. Unlike with the built-in JSON and XML serializers, you aren’t able to rely on Microsoft for this one. Luckily, there is a NuGet package that is more or less the absolute standard when it comes to working with YAML in C# : YamlDotNet.

To install it, from the Package Manager Console we just have to run :

Install-Package YamlDotNet

And we are ready to go!

Deserializing YAML To A POCO

Deserializing YAML directly to a POCO is actually simple!

Let’s say we have a YAML file that looks like so :

databaseConnectionString: Server=.;Database=myDataBase;
uploadFolder: /uploads/
approvedFileTypes : [.png, .jpeg, .jpg]

And we then have a plain C# class that is set up like the following :

class Configuration
{ 
    public string DatabaseConnectionString { get; set; }
    public string UploadFolder { get; set; }
    public List<string> ApprovedFileTypes { get; set; }
}

The code to deserialize this is just a few lines long :

using YamlDotNet.Serialization;
using YamlDotNet.Serialization.NamingConventions;

var deserializer = new DeserializerBuilder()
    .WithNamingConvention(CamelCaseNamingConvention.Instance)
    .Build();

var myConfig = deserializer.Deserialize<Configuration>(File.ReadAllText("config.yaml"));

Easy right! But I do want to point out one big caveat to all of this. YamlDotNet by default is *not* case insensitive. In fact, it’s actually somewhat frustrating that you must match the casing perfectly. Maybe that’s just me being spoiled by JSON.NET’s excellent case insensitivity, but it is annoying here.

You must use one of the following :

  • CamelCase Naming
  • Hyphenated Naming
  • LowerCase Naming
  • PascalCase Naming
  • Underscored Naming
  • Or simply have the YAML match the casing of your properties exactly

But you can’t freely mix and match casing, unfortunately.

YamlDotNet does have a “YamlMember” attribute that works much the same as JsonProperty in JSON.NET. However, you must also set the ApplyNamingConventions property to false for it to really work properly. e.g. If in my YAML I have “Database_ConnectionString”, I need to apply an alias *as well as* remove the camel case naming convention, otherwise it will look for “database_ConnectionString”.

[YamlMember(Alias = "Database_ConnectionString", ApplyNamingConventions = false)]
public string DatabaseConnectionString { get; set; }

Deserializing YAML To A Dynamic Object

If you check my guide on parsing JSON, you’ll notice I talk about things like JObject, JsonPath, dynamic JTokens etc. Basically, ways to read a JSON File, without having the structured class to deserialize into.

In my brief time working with YamlDotNet, it doesn’t seem to have the same functionality. It looks to be either you deserialize into a class, or nothing at all. There are some workarounds however; you can, for example, deserialize into a dynamic object :

dynamic myConfig = deserializer.Deserialize<ExpandoObject>(File.ReadAllText("config.yaml")); // ExpandoObject requires using System.Dynamic

But it isn’t quite the same as the ability to use things like JsonPath to find deeply nested nodes. What I will say is that this probably has more to do with where and how JSON is used vs YAML. It’s generally going to be pretty rare to have a YAML file be hundreds or thousands of lines long (although not unheard of), so the need for things like JsonPath is maybe edge case territory.

Serializing A C# Object To YAML

Writing a C# object out to YAML is actually pretty straightforward. If we take our simple C# configuration class from before :

class Configuration
{ 
    public string DatabaseConnectionString { get; set; }
    public string UploadFolder { get; set; }
    public List<string> ApprovedFileTypes { get; set; }
}

We can do everything in just a few lines :

var config = new Configuration();

var serializer = new SerializerBuilder()
    .WithNamingConvention(CamelCaseNamingConvention.Instance)
    .Build();

var stringResult = serializer.Serialize(config);

I’ll note a couple of things about the serializing/writing process :

  • If a property is null, it will still be serialized (with an empty value), but you can override this if you want
  • Naming Convention is uber important here obviously, with the default being whatever casing your properties are in with your C# code

But outside of those points, it’s really straight forward and just works a treat.



This is probably going to be my final post on new features in C# 10 (well, before I do a roundup of everything C# 10 and .NET 6 related). But that doesn’t mean this post is any less useful. In fact, this one hits a very special place in my heart.

For a little bit of a story. Back in 2006-ish, I wanted to learn a new programming language. I was a teenager all hyped up on computers and making various utilities, mostly revolving around MSN Messenger auto replies and the like. I had mastered Pascal to a certain degree, and had moved on to Delphi. There was this new thing called “.NET” and a language called C# – and since anything starting with a C was clearly amazing in the programming world, I went down that rabbit hole.

I convinced a family member to buy me a C# tutorial book *I think* from Microsoft, I can’t exactly remember. I do remember it having a “tool” on the front so I can only presume it was this one or another in the series : https://www.amazon.com/Microsoft%C2%AE-Visual-2005-Step-Developer/dp/0735621292. Eagerly, I opened the book and inserted the CD Rom that came with it. And I can still remember my heart sinking.

Your operating system is not compatible

For reasons unknown to me at the time, I was using Windows ME. Quite possibly the worst operating system known to man. I mean, we didn’t have a lot of money. It was a 1GHz, 256MB RAM machine; Windows ME was the best we could do at the time. And so.. I was stuck. The CD ROM wouldn’t work, so I couldn’t install Visual Studio (these were the days before broadband/ADSL for me), and so I did what any kid would do. I just read the book instead and took notes that someday I hoped I could use when writing C# code. Literally, I couldn’t even write C# code on my PC, so instead I wrote it on paper and “pretended” it would work first time and that I was learning. Ugh.

However, the actual point of the story is this. The first chapter of the blimmin book had the driest introduction to namespaces you could imagine. I thought maybe we could ease into “integers vs strings” or a nice “if statement”, but nope, let’s talk about how namespaces work. I just remember it being sooo off-putting. And 15 years later, if a new programmer asked me to teach them C#, I would probably not even mention namespaces in the first month.

So with that story done, let’s look at the actual feature….

Introducing File Scoped Namespaces

We can take a namespace scoped class like so :

namespace MyNamespace.Services
{
    class MyClass
    {

    }
}

But in C# 10, we can now remove the additional braces and have the code look like so :

namespace MyNamespace.Services;

class MyClass
{

}

And that’s… kinda it. It’s done for no other reason than it removes an additional level of indenting that really isn’t needed in this day and age. It just presumes that whatever is inside that file (hence file scoped) is all within the same namespace. I can’t think of a time literally in 15 years that I have ever had more than 1 namespace in the same file. So this addition to C# really does make sense.

Visual Studio 2019 vs 2022

I just want to put a huge caveat on using this feature. For a couple of months, I tried this feature out in every .NET 6 preview SDK release. And each time I couldn’t get it to work, yet I kept seeing people talk about it.

As it turns out, for whatever reason, I could not get this feature to work in Visual Studio 2019 (And actually, Minimal APIs in .NET 6 had similar issues), but it worked first try in Visual Studio 2022. So if you are getting errors such as :

{ expected

Then you probably need to try it inside Visual Studio 2022.



Reading through the new features introduced in .NET 6, something that I repeatedly glossed over was the new MaxBy and MinBy LINQ statements. I kept glossing over them because at first look they don’t seem to be anything new, but in reality.. They are actually extremely helpful!

Getting Setup With .NET 6 Preview

At the time of writing, .NET 6 is in preview, and is not currently available in general release. That doesn’t mean it’s hard to set up, it just means that generally you’re not going to have it already installed on your machine if you haven’t already been playing with some of the latest fandangle features.

To get set up using .NET 6, you can go and read our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

Remember, this feature is *only* available in .NET 6. Not .NET 5, not .NET Core 3.1, or any other version you can think of. 6.

Max/MinBy vs Max/Min

So it’s actually brutally simple.

I can run a simple Max statement like so across a list of numeric values :

List<int> myList = new List<int> { 1, 2, 3 };
myList.Max(); //Returns 3

Seems simple, and we output the maximum numeric value.

If I have a complex type, like a class of “Person”, then you might think I need to do something rinky dink to find the max age of those people (as shown below). I actually at first thought this was what “MaxBy” was all about. Like “Max By Property XYZ”. But in fact, the regular Max statement works just fine like so :

List<Person> people = new List<Person>
{
    new Person
    {
        Name = "John Smith", 
        Age = 20
    }, 
    new Person
    {
        Name = "Jane Smith", 
        Age = 30
    }
};

Console.Write(people.Max(x => x.Age)); //Outputs 30

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

So what happens if we change that same statement to MaxBy?

List<Person> people = new List<Person>
{
    new Person
    {
        Name = "John Smith", 
        Age = 20
    }, 
    new Person
    {
        Name = "Jane Smith", 
        Age = 30
    }
};

Console.Write(people.MaxBy(x => x.Age)); //Returns the Person object (Jane Smith)

Instead of outputting the max property value, we output the entire object that has the max value. So instead of returning “30” we return the entire Person object of Jane Smith.

Another way to look at it is that it’s more or less a shortening of the following statement :

people.OrderByDescending(x => x.Age).First(); //Outputs Person (Jane Smith)
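
And naturally, MinBy is the mirror image :

people.MinBy(x => x.Age); //Returns the Person object (John Smith)
// More or less equivalent to :
people.OrderBy(x => x.Age).First();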

This is the crux of MaxBy/MinBy and is a pretty nifty addition that I totally didn’t know I needed until now.



For a long time now (actually, almost since forever), we have been stuck with the DateTime type in .NET. By “stuck”, I don’t mean it has always been a problem; you can go months, if not years, of using DateTime without any issue. But every now and again, those gremlins creep in when trying to use DateTime as either only a date, or only a time.

In the case of using only a Date, this is extremely common when storing things such as birthdays, anniversary dates, or really any calendar date that doesn’t have a specific time associated with it. But in .NET, you were forced to use a DateTime object, often with a time portion of midnight. While this worked, often the time portion would get in the way when comparing dates or wanting to manipulate a date without really worrying about the time. It was especially weird when working with a SQL database that has the concept of a “date” type (with no time), but when loading the data into C#, you had to deal with a DateTime.

If you only wanted to store a time, then you were really in trouble. Imagine wanting to set an alarm for a recurring time, every day. What type would you use? Your best bet was of course the “TimeSpan” type, but it was actually designed to store elapsed time, not a particular clock time. As an example, a TimeSpan can have a value of 25 hours, because it’s a measure of time, not the actual time of day. Another problem was time wrapping. If something started at 11PM and went for 2 hours, how would you know to “wrap” the end time around to be 1AM? This often meant using some sort of rinky dink DateTime object that ignored the date portion, ugh.

And that’s why .NET 6 has introduced DateOnly and TimeOnly!

Getting Setup With .NET 6 Preview

At the time of writing, .NET 6 is in preview, and is not currently available in general release. That doesn’t mean it’s hard to set up, it just means that generally you’re not going to have it already installed on your machine if you haven’t already been playing with some of the latest fandangle features.

To get set up using .NET 6, you can go and read our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

Remember, this feature is *only* available in .NET 6. Not .NET 5, not .NET Core 3.1, or any other version you can think of. 6.

Using DateOnly

Using DateOnly is actually pretty easy. I mean.. Check the following code out :

DateOnly date = DateOnly.MinValue;
Console.WriteLine(date); //Outputs 01/01/0001 (With no Time)
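
You can also create a DateOnly from an existing DateTime, which is a handy bridge while other APIs still hand you DateTime values :

DateOnly date = DateOnly.FromDateTime(DateTime.Now); // Drops the time portion
DateOnly parsed = DateOnly.Parse("2021-11-08"); // Parsing works too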

An important distinction to make is that a DateOnly object never has a timezone component. After all, if your birthday is on the 10th of May, it’s the 10th of May no matter where in the world you are. While this might seem insignificant, there are entire mechanisms like DateTimeKind and DateTimeOffset dedicated to solving timezone problems, none of which matter when we are using DateOnly.

The thing to note here is that even if you didn’t intend to deal with timezones previously, you might have been conned into it. For example :

DateTime date = DateTime.Now;
Console.WriteLine(date.Kind); //Outputs "Local"

The “Kind” of our DateTime object is “Local”. While this may seem insignificant, it could actually become very painful when converting the DateTime, or even serializing it.

All in all, DateOnly gives us a way to give a very precise meaning to a date, without being confused about timezones or time itself.

Using TimeOnly

While DateOnly had fairly simple examples, TimeOnly actually has some nifty features that make it all the more needed.

But first, a simple example :

TimeOnly time = TimeOnly.MinValue;
Console.WriteLine(time); //Outputs 12:00 AM

Nice and simple right! What if we take our example from above and check how time is wrapped? For example :

TimeOnly startTime = TimeOnly.Parse("11:00 PM");
var hoursWorked = 2;
var endTime = startTime.AddHours(hoursWorked);
Console.WriteLine(endTime); //Outputs 1:00 AM

As we can see, we no longer have to account for “overflows” like we would when using a TimeSpan object.

Another really cool feature is the “IsBetween” method on a TimeOnly object. As an example :

TimeOnly startTime = TimeOnly.Parse("11:00 PM");
var hoursWorked = 2;
var endTime = startTime.AddHours(hoursWorked);

var isBetween = TimeOnly.Parse("12:00 AM").IsBetween(startTime, endTime); //Returns true. 

This is actually very significant. In a recent project I was working on, we needed to send emails only within a select few hours in the evening. Checking whether the current time is between two other times is actually significantly harder than you may think with DateTime. What would often happen is that we would have to take the time, assign it to an arbitrary date (for example, 01/01/0001), and then do the comparison. But then the midnight wrap-around would always choke us in the end.

In my view, DateOnly and TimeOnly are significant additions to .NET, and ones that I think will be used right from release. That doesn’t mean DateTime becomes a relic of the past; instead, DateTime can be used when you want to point to a specific date *and* time, past, present or future, and not just one or the other.
