With the announcement of .NET 5 last year, and the subsequent announcements leading up to Microsoft Build 2020, a big question has been what’s going to happen to “.NET Standard”. That sort-of framework that’s not an actual framework, just an interface that various platforms/frameworks are required to implement (but not really), and then you have to get David Fowler to write a GitHub Gist that gets shared a million times to actually explain to people what the hell this thing is.

Anyway. .NET Standard is no more (or will be eventually). As confusing as it may be at first to get rid of something that was only created three-odd years ago, it does kinda make sense to get rid of it at this juncture.

Rewinding The “Why” We Even Had .NET Standard

Let’s take a step back and look at how and why .NET Standard came to be.

When .NET Core was first released, there was a conundrum. We had all these libraries already written for .NET Framework; did we really want to re-write them all for .NET Core? Given that the majority of early .NET Core was actually a port of .NET Framework to work cross platform, many of the classes and method signatures were identical (in fact, I would go as far as to say most of them were).

Let’s use an example. Say I want to open a file inside my library using the standard File.ReadAllLines(string path) call. It just so happens that if you write this code in .NET Framework, .NET Core or even Mono, it takes the same parameter (a string path) and returns the same thing (a string array). Now *how* these calls read a file is up to the individual platform (for example, .NET Core and Mono may have some special code to handle Mac file paths), but the result should always be the same: a string array of lines from the file.

So if I had a library that does nothing but open a file, read its lines, and return them, should I really need to release that library multiple times for different frameworks? Well, that’s where .NET Standard comes in. The simplest way to think about it is that it defines a list of classes and methods that every platform agrees to implement. So if File.ReadAllLines() is part of the standard, then I can be assured that my library can be released once as a .NET Standard library and it will work on multiple platforms.
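As a rough sketch of what that looks like in practice (the project name here is hypothetical, but the property is the real mechanism), a .NET Standard library simply targets a netstandard moniker in its csproj rather than any one concrete framework:

```xml
<!-- MyFileReadingLibrary.csproj (hypothetical example project) -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Target the standard, not a specific platform. Any platform that
         implements .NET Standard 2.0 can consume this library as-is. -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

The consuming app (.NET Framework, .NET Core, Mono, etc.) then references this one package, and the platform it runs on supplies the actual File.ReadAllLines implementation.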

If you’re looking for a longer explanation about .NET Standard, then there’s an article I wrote over 3 years ago that is still relevant today : https://dotnetcoretutorials.com/2017/01/13/net-standard-vs-net-core-whats-difference/

TL;DR: .NET Standard provided a way for different .NET platforms to share a set of common method signatures, allowing library creators to write code once and have it run on multiple platforms.

.NET Standard Is No Longer Needed

So we come to the present day, where announcements are coming out that .NET Standard is no longer relevant (sort of). And there are two main reasons for that…

.NET Core Functionality Surpassed .NET Framework – Meaning New .NET Standard Versions Were Hard To Come By

Initially, .NET Core was a subset of .NET Framework functionality. So .NET Standard was almost a way of saying: if you wrote a library for .NET Framework, here’s how you know it will work out of the box for .NET Core. Yes, .NET Standard was also used as a way to describe functionality across other platforms like Mono, Xamarin, Silverlight, and even Windows Phone. But I feel like the majority of use cases were for .NET Framework => .NET Core comparisons.

As .NET Core built up its functionality, it was still essentially trying to reach feature parity with .NET Framework. So as a new version of .NET Core got released each year, a new version of .NET Standard got released with it that was, again, almost exclusively a way to look at the common method signatures across .NET Framework <=> .NET Core. Eventually .NET Core surpassed .NET Framework, or at the very least Microsoft said “we aren’t porting anything extra over”. That point is essentially .NET Standard 2.0.

But obviously work on .NET Core didn’t stop, and new features were added to .NET Core that don’t exist in .NET Framework. .NET Framework updates were few and far between, until it was announced that it is essentially in maintenance mode only (or some variation thereof). So with new features being added to .NET Core, did it make sense to add them to a new version of the standard, given that .NET Framework would never actually implement that standard? Kind of… or at least they tried. .NET Standard 2.1 was the latest release of the standard and (supposedly, although some would disagree) is implemented by both Mono and Xamarin, but not .NET Framework.

So now we have a standard that was devised to describe the parity between two big platforms, and one of those platforms is no longer going to be participating. I mean, I guess we can keep publishing new versions of the standard, but if there is only one big player actually adhering to it (and, in fact, probably defining it), then it’s kinda moot.

The Merger Of .NET Platforms Makes A Standard Double Moot

But then, of course, we rolled around to six months after the release of .NET Standard 2.1, and found the news that .NET Framework and .NET Core are being rolled into a single .NET platform called .NET 5. Now we doubly don’t need a standard, because the two platforms we were trying to define parity between are actually going to become one and the same.

Now take that, and add in the fact that .NET 6 is going to include the rolling in of the Xamarin platform. All those .NET Standard charts where you traced your finger along the columns to check which version you should support are moot, because there’s only one row now: that of .NET 6.

In the future there is only one .NET platform. There is no Xamarin, no .NET Core, no Mono, no .NET Framework. Just .NET.

So I Should Stop Using .NET Standard?

This was something that got asked of me recently. If it’s all becoming one platform, do we just start writing libraries for .NET 5 going forward? The answer is no. .NET Standard will still exist as a way to write libraries that run on .NET Framework or older versions of .NET Core. Even today, when picking a .NET Standard version for a library, you try to pick the lowest number you feasibly can to ensure you support as many platforms as possible. That won’t change going forward – .NET 5 still implements .NET Standard 2.1 (and every version below it), so any library that targets an older standard still runs on the latest version of the .NET platform.
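If you do eventually need new .NET 5 APIs in a library but still want to support those older platforms, one option (sketched below, not the only approach) is to multi-target, so that NuGet packs a build for each framework:

```xml
<PropertyGroup>
  <!-- Note the plural TargetFrameworks: the library builds once per
       framework listed, and NuGet picks the best match for each consumer. -->
  <TargetFrameworks>netstandard2.0;net5.0</TargetFrameworks>
</PropertyGroup>
```

Consumers on .NET Framework get the netstandard2.0 build, while .NET 5 apps get the net5.0 build with any newer API usage compiled in behind `#if NET5_0` style conditionals.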

What will change for the better are those hideously complex charts and NuGet dependency descriptions of which platforms can run a particular library/package. A few years from now it won’t be “Oh, this library is for .NET Standard 2.1. Is that for .NET Core 2.1? No, it’s for .NET Core 3+… who could have known?”. Instead it will be: this library is for .NET 5, so it will work in .NET 7, no problem.

ENJOY THIS POST?
Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this week’s blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

So by now you’ve probably heard a little about the various .NET 5 announcements and you’re ready to give it a try! I thought first I would give some of the cliff notes from the .NET 5 announcement, and then jump into how to actually have a play with the preview version of .NET 5 (correct as of writing this post on 2020-05-22).

Cliff Notes

  • .NET Standard is no more! Because .NET Framework and .NET Core are being merged, there is less of a need for .NET Standard. .NET Standard also covers things like Xamarin but that’s being rolled into .NET 6 (More on that a little later), so again, no need for it.
  • .NET 5 coincides with the C# 9 and F# 5 releases (as it typically does), but PowerShell will now also be released on the same cadence.
  • A visual designer has been added for building WinForms applications – you could theoretically build WinForms applications in .NET Core 3.X, but you couldn’t design them with quite as much functionality as you typically would.
  • .NET 5 now runs on Windows ARM64.
  • While the concept of Single File Publish already exists in .NET Core 3.X, it looks like there have been improvements so that it’s actually a true exe instead of a self-extracting ZIP. Mostly for reasons around being on read-only media (e.g. a locked-down user may not be able to extract that single exe to their temp folder).
  • More features have been added to System.Text.Json for feature parity with Newtonsoft.Json.
  • As mentioned earlier, Xamarin will be integrated into .NET 6 so that there is a single unifying framework. It also looks like Microsoft will be doing a big push around the .NET ecosystem as a way to build apps once (in Xamarin) and deploy to Windows, Mac, iOS, Android etc. Not sure how likely this actually is, but it looks like it’s the end goal.
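To make the single-file publish bullet above a bit more concrete – this is a sketch using the publish properties as they already exist in .NET Core 3.X, and the runtime identifier is just an example – you drive it from the project file (or the equivalent `/p:` switches on `dotnet publish`):

```xml
<PropertyGroup>
  <!-- Bundle the app into one executable. A runtime identifier is
       required because the output is platform specific; win-x64 here
       is only an example. -->
  <PublishSingleFile>true</PublishSingleFile>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>
```

The .NET 5 improvement being announced is in *how* that single file runs (a true exe rather than extracting itself to a temp folder first), not in how you opt in.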

Setting Up The .NET 5 SDK

So as always, the first thing you need to do is head to the .NET SDK Download page here : https://dotnet.microsoft.com/download/dotnet/5.0. Note that if you go to the actual regular download page of https://dotnet.microsoft.com/download you are only given the option to download .NET Core 3.1 or .NET Framework 4.8 (But there is a tiny little banner above saying where you can download the preview).

Anyway, download the .NET 5 SDK installer for your OS.

After installing, you can run the following command from a command prompt :

dotnet --info

Make sure that you do have the SDK installed correctly. If you don’t see .NET 5 in the list, the most common reason I’ve found is people installing the x86 version on their x64 PC. So make sure you get the correct installer!

Now if you use VS Code, you are all set. For any existing project you have that you want to test out running in .NET 5 (For example a small console app), then all you need to do is open the .csproj file and change :

<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp3.1</TargetFramework>
</PropertyGroup>

To :

<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>net5.0</TargetFramework>
</PropertyGroup>

As noted in the cliff notes above, because there is really only one framework with no standard going forward, they ditched the whole “netcoreapp” thing and just went with “net”. That means if you want to update any of your .NET Standard libraries, you actually need them to target “net5.0” as well. But hold fire, because there is actually no reason to bump the version of a library unless you really need something in .NET 5 (pretty unlikely!).

.NET 5 In Visual Studio

Now if you’ve updated your .NET Core 3.1 app to .NET 5 and try and build in Visual Studio, you may just get :

The reference assemblies for .NETFramework,Version=v5.0 were not found. 

Not great! But all we need to do is update to the latest version and away we go. It’s somewhat lucky that there isn’t a Visual Studio release this year (e.g. there is no Visual Studio 2020), otherwise we would have to download yet another version of VS. So to update, inside Visual Studio, simply go Help -> Check For Updates. The version you want to be on is at least 16.6, which as of right now is the latest non-preview version.

Now after installing this update for the first time, for the life of me I couldn’t work out why I could build an existing .NET 5 project, but when I went to create a new project, I didn’t have the option of creating it as .NET 5.

As it turns out, by default the non-preview version of Visual Studio can only see non-preview versions of the SDK. I guess that’s so you can keep the preview stuff all together. If you are like me and just want to start playing without having to install the preview version of VS, then you need to go Tools -> Options inside Visual Studio. Then inside the options window, under Environment, there is an option for “Preview Features”.

Tick this. Restart Visual Studio. And you are away laughing!

Do note that some templates such as Console Applications don’t actually prompt you for the SDK version when creating a new project, they just use the latest SDK available. In this case, your “default” for Visual Studio suddenly becomes a preview .NET Core SDK. Perfectly fine if you’re ready to sit on the edge, but just something to note in case this is a work machine or similar.
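If that default worries you, a global.json file next to your solution pins the SDK version that both the `dotnet` CLI and Visual Studio will use for those projects – the version number below is just an illustrative example, swap in whichever stable SDK you have installed:

```json
{
  "sdk": {
    "version": "3.1.300"
  }
}
```

With that in place, new console apps in that folder keep building against the stable SDK even while the preview SDK is installed machine-wide.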


It’s here! .NET Core 3.0 has been released! You can read the full release post from Microsoft here : https://devblogs.microsoft.com/dotnet/announcing-net-core-3-0/

Since early preview versions of .NET Core 3, I’ve been blogging about various features that have been making their way in. So here’s a quick run down on what you can finally use in the “current” release of .NET Core.

  • Nullable reference types support – Allows you to specify that reference types are not nullable by default
  • Default interface implementations support – Allows you to write an interface implementation in the interface itself
  • Async streams/IAsyncEnumerable support – True async list streaming
  • .NET Core now supports desktop development – Take your WinForms/WPF apps and build them on .NET Core
  • Index structs – A more declarative approach to referencing an index in an array
  • Range type – Better syntax for “splicing” an array
  • Switch expressions – Cleaner syntax for switch statements that return a result
  • Single EXE builds – Package an entire application into a single exe file
  • IL Linker support – Trim unneeded packages from self-contained deploys
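As a small taste of one of those features – this is my own minimal sketch rather than anything from the release notes – the new switch expression turns a multi-line switch statement into a single expression:

```csharp
using System;

public static class SwitchDemo
{
    // C# 8 switch expression: each arm is "pattern => result",
    // with _ as the catch-all (replacing "default").
    public static string DayType(DayOfWeek day) => day switch
    {
        DayOfWeek.Saturday => "Weekend",
        DayOfWeek.Sunday => "Weekend",
        _ => "Weekday"
    };

    public static void Main()
    {
        Console.WriteLine(DayType(DayOfWeek.Monday)); // Weekday
    }
}
```

The compiler also warns you when the arms don’t cover every possible input, which a classic switch statement never did.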

And of course things I haven’t blogged about but are still amazing features include the new Utf8JsonReader/Utf8JsonWriter, better Docker support, a new SqlClient, HTTP/2 support, and of course a heap of performance improvements. I’ll say it again, head over to the official release post and have a look : https://devblogs.microsoft.com/dotnet/announcing-net-core-3-0/


.NET Core 3.0 Preview 7 has been released as of July 23rd. You can grab it here : https://dotnet.microsoft.com/download/dotnet-core/3.0

What’s Included

The official post from Microsoft is here : https://devblogs.microsoft.com/dotnet/announcing-net-core-3-0-preview-7/ but you’re not going to find anything shiny and new to play with. In fact, the post explicitly states “The .NET Core 3.0 release is coming close to completion, and the team is solely focused on stability and reliability now that we’re no longer building new features“. So if something hasn’t made it into .NET Core 3.0 by now, you might be waiting until 3.1.

But that doesn’t mean we can’t take a look at what’s been happening under the hood. So I took a look at the GitHub issue list to see what did make it into this release, even if it wasn’t worth a shout-out in the official blog post. In particular, there seem to be two main areas that have garnered quite a few taggings.

JsonSerializer Bugs

Unsurprisingly, with .NET Core moving away from JSON.NET and instead moving to its own JSON serializer, there have been bugs aplenty. At a glance, it mostly looks like edge-case stuff where, in particular scenarios, serializing and deserializing is not working as expected – things like the serializer needing to handle scientific notation.

Blazor Bugs

Blazor has been getting a major push recently – I’m seeing it show up more and more in my Twitter feed. And along with people using it for real-world workloads come people logging bugs. To be brutally honest, I haven’t used Blazor at all. It feels like it’s trying to solve a problem that doesn’t exist, somewhat like the early days of ASP.NET WebForms. (And feel free to drop a comment below about how I’m totally wrong on that!)

Worth Updating?

One thing worth mentioning is that Microsoft says “.NET Core 3.0 Preview 7 is supported by Microsoft and can be used in production“. So they obviously feel confident enough that this is a proper release candidate. But in terms of grabbing it because there is something new in there that you definitely need, it’s probably worth skipping for now.


Dec 14, 2016.

That’s the date I registered dotnetcoretutorials.com. A week later I would register dotnetcoretutorial.com because I was worried someone would just chop off the “s” from the end and copy me.

Almost three years later, and I have seen some crazy things with this platform. The “poo-pooing” of the project.json format, the deprecation then un-deprecation of ASP.NET Core running on full framework, and now the introduction of desktop apps running on .NET Core.

Through all of that, I get a steady stream of emails asking for help on projects. It ranges from individuals asking me to take a squiz at their GitHub projects and point out glaring mistakes, all the way to large corporates getting me to build a migration plan for how they can get their existing .NET Framework project onto .NET Core. So that’s why I’m…

“Opening up the consultancy”

That’s probably a bit of an over the top statement. But it’s more like I’ve updated a couple of pages on the website to make it clear that there is no harm in reaching out and seeing how we can work together. I try and reply to each and every email even if I can’t be of too much help at the time.

So go ahead, contact me and get your .NET Core project moving today!

A Word On Product Posts

While not strictly related to consulting, I did want to mention “product posts” that pop up here from time to time. These might be posts on hosting platforms, dev tooling, monitoring solutions, or any other “paid” product used in software development.

I at times get emails from companies looking to promote their new tool, and they want me to do a nice write up on how amazing it is. But…

  • If I personally would not use the product, I do not write about it.
  • If I think the product is overpriced, I do not write about it.
  • If I feel that there is a snake oil element to the product (misleading landing pages, promises etc), I do not write about it.

Very few products make it past this line, and therefore ones that I do write about are ones that I actually use myself. This does mean that the majority of requests I get to write posts do not actually make it onto the blog, but I think it’s important that readers trust that when I write about the product, it’s worth their money.

So with all that being said, I just want to make it clear that yes, sometimes companies contact me asking to write about their product. But if I do write about it, it’s my own words with my own personal experience with the tool.

With that out of the way, if you have a product used in software development you think would benefit readers, you can contact me here.


.NET Core 3.0 Preview 6 has been released as of June 12th. You can grab it here : https://dotnet.microsoft.com/download/dotnet-core/3.0

What’s Included

The official blog post from Microsoft is here : https://devblogs.microsoft.com/dotnet/announcing-net-core-3-0-preview-6/ . But of course if you want the cliff notes then…

Docker Images For ARM64

Previously, Docker images for .NET Core were only available for x64 systems, however you can now grab ’em for ARM64 too!

Ready To Run Images

Probably the most interesting thing contained in this release is the “Ready To Run” compilation ability. If you’ve ever used NGEN in previous versions of the .NET Framework, or attempted some ahead-of-time compilation to improve your application’s startup performance, then this is for you.
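Opting in is a single project property at publish time – a sketch of how it looks, noting that Ready To Run output is platform specific so a runtime identifier is required (win-x64 below is only an example):

```xml
<PropertyGroup>
  <!-- Pre-compile IL to native code at publish time, trading a larger
       output for faster startup (less work for the JIT on first run). -->
  <PublishReadyToRun>true</PublishReadyToRun>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>
```

Then a normal `dotnet publish -c Release` produces the Ready To Run output.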

Better Assembly Linking

It’s always been possible to publish self-contained applications in .NET Core that don’t require the runtime to be present on the destination machine. Unfortunately, this sometimes made self-contained apps (even ones that did nothing but print Hello World!) become massive. Preview 6 contains a way to trim the required assemblies to reduce the footprint by almost half.
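The trimming is again an opt-in publish property – a sketch of the switch as it appears in this preview:

```xml
<PropertyGroup>
  <!-- Analyze the app and strip assemblies (and with it, IL) that the
       linker can prove are never used by the self-contained deploy. -->
  <PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
```

One caveat worth knowing: the linker works by static analysis, so code that is only reached via reflection can get trimmed away and fail at runtime. Test a trimmed publish before shipping it.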

HTTP/2 Support In HTTPClient

HTTP/2 support has reached .NET Core! The default is still HTTP/1.1, but you are now able to opt in and create connections using a specific version. HTTP/2 opens up the world of multiplexed streams, header compression and better pipelining.
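Opting in looks something like the sketch below – the URL is just a placeholder, and the server on the other end still has to support HTTP/2 for the negotiation to actually land on it:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Http2Demo
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Request HTTP/2 explicitly - the default remains HTTP/1.1.
        var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com")
        {
            Version = new Version(2, 0)
        };

        var response = await client.SendAsync(request);
        Console.WriteLine(response.Version); // The version actually negotiated
    }
}
```

Note the negotiated version on the response can still come back as 1.1 if the server doesn’t speak HTTP/2.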

Worth Updating?

Probably the biggest thing for most developers in this version is HTTP/2 support. It’s likely something that will become de facto in the future, so if you want to get ahead of the curve, get stuck in!

Ready To Run images are also pretty huge. If you remember that famous blog post from the Bing team around performance improvements they gained from switching to .NET Core 2.1 (https://devblogs.microsoft.com/dotnet/bing-com-runs-on-net-core-2-1/), a big factor was ahead of time compiling of the code. So it’s definitely going to change the game for a lot of companies running .NET Core code.


I’m calling it now, 2019 is going to be the year of action! .NET Core Tutorials has been running strong for over two years now (we were there right from the project.json days). Back then, people were fine reading all about the latest features of .NET Core. But these days, people want to be shown. Live! That’s why I’m starting live coding sessions on Twitch.tv every Wednesday and Sunday. I’ll be covering a variety of topics, but you’ll often catch me doing live blog-writing sessions where you can get a sneak peek of content before it goes live and ask questions while I’m live coding. We’ve already had some great questions in chat while doing our post on C# 8 IAsyncEnumerable.

So get amongst it. Head on over, drop a follow, and tune in next time!

But enough about me, I want it to be a year of action for you too. That’s why I’m giving away a copy of *any* of Manning’s classic “In Action” books. You know, those ones with the ye olde covers. Like this ASP.NET Core in Action book by Andrew Lock (who is a great author, by the way!).

I bolded the word *any* above because it’s not just ASP.NET Core in Action, but any “In Action” book you want. Want to learn Redux in 2019? Try Redux in Action. Want to learn all about Amazon Web Services? Try AWS in Action. How about learning a bit of data science in R? Well… you guessed it, try R in Action. In fact, just head over to Amazon, check out any of the In Action titles, and start drooling over which one you want if you win!

Use any of the entry options below (or multiple), and you are in to win! It’s that easy!

Enter Now


At Microsoft Build today, it was announced that a Windows Desktop “pack” or “addon” would be released for .NET Core. It’s important to note that this is a sort of bolt on to .NET Core and actually won’t be part of .NET Core itself. It’s also important to note that this will not make Desktop Applications cross platform. It’s intended that the desktop apps built on top of .NET Core are still Windows only as they have always been (This is usually due to the various drawing libraries of the operating systems).

So you may ask yourself what’s the point? Well..

  • .NET Core has made huge performance improvements for everyday structs and classes within the framework. For example Dictionaries, Enums and Boxing operations are all now much faster on .NET Core 2.1
  • .NET Core comes with its own CLI and tooling improvements that you may prefer over the bloated .NET Framework style. For example, a much cleaner .csproj experience.
  • It’s easy to test different .NET Core runtimes on a single machine, because .NET Core allows multiple runtimes side by side.
  • You can bundle .NET Core with your desktop application so the target machine doesn’t require a runtime already. You can bundle .NET Framework with desktop applications, but it basically just does a quick install beforehand.

I think the biggest factor of all is going to be the speed of .NET Core releases. At this point .NET Core is shipping releases at breakneck speed, while the next minor release of the .NET Framework (4.7.2 -> 4.8) is expected to ship in about 12 months. That’s a very slow release schedule compared to Core. While Core doesn’t have too many additional features that .NET Framework doesn’t, it will likely start drifting apart in feature parity before too long. That’s a slightly taboo subject at times, and it actually came up before when Microsoft wanted to discontinue support for running ASP.NET Core applications on full framework. Microsoft did cave to pressure that time around, but it’s simply undeniable that Core is moving at a faster pace than the full Framework right now.

You can read the official announcement on the MSDN Blog here : https://blogs.msdn.microsoft.com/dotnet/2018/05/07/net-core-3-and-support-for-windows-desktop-applications/


There is a current proposal that’s getting traction (with some) to make it into C# 8: “default interface methods”. Because at the time of writing C# 8 is not out, nor has this new language feature even “definitely” made it into that release, this will be more about the proposal and my two cents on it than any specific tutorial on using them.

What Are Default Interface Methods?

Before I start, it’s probably worth reading the summary of the proposal on GitHub here : https://github.com/dotnet/csharplang/issues/288. It’s important to remember that this is just a proposal, so not everything described will actually be implemented (or implemented in the way it’s initially described). Take everything with a grain of salt.

The general idea is that an interface can provide a method body. So :

interface IMyService
{
    string HelloWorld() { return "Hello World"; }
}

A class that implements this interface does not have to implement methods where a body has been provided. So for example :

class MyService : IMyService
{
    // Absolutely no body required.
}
....

IMyService myService = new MyService();
var helloWorld = myService.HelloWorld(); //Returns the string "Hello World"

The interesting thing is that when a variable is typed as the concrete class, it cannot call methods that are only defined and implemented on the interface. So :

MyService myService = new MyService();
var helloWorld = myService.HelloWorld(); //Error
....
//But if we cast to the interface we are good to go
var helloWorld2 = ((IMyService)myService).HelloWorld();

The general idea is that this will now allow you to do some sort of multiple inheritance of behaviour/methods (Which was previously unavailable in C#).

interface IMySecondInterface
{
    string HelloWorld() { return "Hello World 2"; }
}

class MyService : IMyService, IMySecondInterface
{
}
.....
var myService = new MyService();
var helloWorld = ((IMyService)myService).HelloWorld(); // Returns Hello World
helloWorld = ((IMySecondInterface)myService).HelloWorld(); // Return Hello World 2

There are a few other things that then are required (Or need to become viable) when opening this up. For example allowing private level methods with a body inside an interface (To share code between default implementations).

Abstract Classes vs Default Interface Methods

While it does start to blur the lines a bit, there are still some pretty solid differences. The best quote I heard about it was :

Interfaces define behaviour, classes define state.

And that does make some sense. Interfaces still can’t define a constructor, so if you want to share constructor logic, you will need to use an abstract/base class. An interface also cannot define class level variables/fields.

Classes also have the ability to define the accessibility of their members (for example, making a method protected), whereas with an interface everything is public. Although part of the proposal is extending interfaces with things like the static keyword and protected, internal etc. (I really don’t agree with this).

Because the methods themselves are only available when you cast back to the interface, I can’t really see it being a drop in replacement for abstract classes (yet), but it does blur the lines just enough to ask the question.

My Two Cents

This feels like one of those things that just “feels” wrong. And that’s always a hard place to start, because it’s never going to be black and white. I feel like interfaces are a very “simple” concept in C#, and this complicates things in ways that I don’t see a huge benefit from. It reminds me a bit of the “Primary Constructors” proposal that was going to make it into C# 6 (see more here : http://www.alteridem.net/2014/09/08/c-6-0-primary-constructors/). Thankfully that got dumped, but it was bolting on a feature that I’m not sure anyone was really clamoring for.

But then again, there are some merits to the conversation. One “problem” with interfaces is that you have to implement every single member. This can at times lock down any ability to extend the interface, because you will immediately break any class that has already implemented that interface (or it means your class becomes littered with throw new NotImplementedException()).

There’s even times when you implement an interface for the first time, and you have to pump out 20 or so method implementations that are almost boilerplate in content. A good example given in the proposal is that of IEnumerable. Each time you implement this you are required to get RSI on your fingers by implementing every detail. Where if there was default implementations, there might be only a couple of methods that you truly want to make your own, and the default implementations do the job just fine for everything else.

All in all, I feel like the proposal should be broken down and simplified. It’s almost like an American political bill in that there seems to be a lot in there that’s causing a bit of noise (allowing static, protected members on interfaces etc). It needs to be simplified down and just keep the conversation on method bodies in interfaces and see how it works out because it’s probably a conversation worth having.

What do you think? Drop a comment below.


I’ve recently been playing around with all of the new features packaged into C# 7.2. One such feature that piqued my interest, because of its simplicity, was the “in” keyword. It’s one of those things that you can get away with never using in your day-to-day work, but it makes complete sense when looking at language design from a high level.

In simple terms, the in keyword specifies that you are passing a parameter by reference, but also that you will not modify the value inside the method. For example :

public static int Add(in int number1, in int number2)
{
	return number1 + number2;
}

In this example we pass the two parameters essentially by reference, but we also specify that they will not be modified within the method. If we try to do something like the following :

public static int Add(in int number1, in int number2)
{
	number1 = 5;
	return number1 + number2;
}

We get a compile time error:

Cannot assign to variable 'in int' because it is a readonly variable

Accessing C# 7.2 Features

I should quickly note how to actually access C# 7.2 features, as they may not be immediately available on your machine. The first step is to ensure that Visual Studio is up to date. If you are able to compile C# 7.2 code but IntelliSense is acting up and not recognizing new features, 99% of the time you just need to update Visual Studio.

Once updated, add the following to your project’s csproj:

<PropertyGroup>
	<LangVersion>7.2</LangVersion>
</PropertyGroup>

And you are done!

Performance

Normally, when you pass a value type into a method, it is copied to a new memory location, and the method receives a clone of the value. When using the in keyword, the same reference is passed into the method instead, avoiding the copy. While the performance benefit may be small in simple business applications, in a tight loop this could easily add up.

But just how much performance gain are we going to see? I could take some really smart people’s word for it, or I could do a little bit of benchmarking. I’m going to use BenchmarkDotNet (guide here) to compare performance when passing a value type into a method normally versus as an in parameter.

The benchmarking code is:

public struct Input
{
	public decimal Number1;
	public decimal Number2;
}

public class InBenchmarking
{
	const int loops = 50000000;
	Input inputInstance;

	public InBenchmarking()
	{
		inputInstance = new Input
		{
		};
	}

	[Benchmark]
	public decimal DoSomethingInLoop()
	{
		decimal result = 0M;
		for (int i = 0; i < loops; i++)
		{
			result = DoSomethingIn(in inputInstance);
		}
		return result;
	}

	[Benchmark(Baseline = true)]
	public decimal DoSomethingLoop()
	{
		decimal result = 0M;
		for (int i = 0; i < loops; i++)
		{
			result = DoSomething(inputInstance);
		}
		return result;
	}

	public decimal DoSomething(Input input)
	{
		return input.Number1;
	}

	public decimal DoSomethingIn(in Input input)
	{
		return input.Number1;
	}

	public decimal DoSomethingRef(ref Input input)
	{
		return input.Number1;
	}
}

And the results:

            Method |     Mean |     Error |    StdDev | Scaled | ScaledSD |
------------------ |---------:|----------:|----------:|-------:|---------:|
 DoSomethingInLoop | 20.89 ms | 0.4177 ms | 0.7845 ms |   0.34 |     0.02 |
   DoSomethingLoop | 62.06 ms | 1.5826 ms | 2.6003 ms |   1.00 |     0.00 |

We can definitely see the speed difference here. This makes sense because all we are doing is passing in a variable and doing nothing else. It has to be said that I can’t see the in keyword being used to optimize code in everyday business applications, but there is definitely something there for time-critical applications such as large-scale number crunchers.

Explicit In Design

While the performance benefits are nice, something else that comes to mind is that when you use in, you are being explicit in your design. By that I mean you are laying out exactly how you intend the application to function: “If I pass a variable into this method, I don’t expect the variable to change”. I can see this being a bigger benefit to large business applications than any small performance gain.

A way to look at it is how we use things like private and readonly. Our code will generally work if we just make everything public and move on, but it’s not seen as a “good” programming habit. We use things like readonly to explicitly say how we expect things to run (we don’t expect this variable to be modified outside of the constructor, etc.), and I can definitely see in being used in a similar sort of way.

Comparisons To “ref” (and “out”)

A comparison could be made to the ref keyword in C# (and possibly, to a lesser extent, the out keyword). The main differences are:

in – Passes a variable into a method by reference. Cannot be set inside the method.
ref – Passes a variable into a method by reference. Can be set/changed inside the method.
out – Only used for output from a method. Can (and must) be set inside the method.
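
As a quick side-by-side sketch (the method names here are my own, just for illustration):

```csharp
using System;

public static class ParameterDemo
{
    // in: passed by reference, read-only inside the method.
    public static int AddOne(in int x) => x + 1;

    // ref: passed by reference, may be read and reassigned.
    public static void DoubleIt(ref int x) => x *= 2;

    // out: output only, must be assigned before the method returns.
    public static void Halve(int x, out int half) => half = x / 2;
}
```

Calling these shows the difference at the call site too: AddOne(in value) leaves value untouched, DoubleIt(ref value) changes it, and Halve(value, out int half) introduces a new variable that the method is forced to fill in.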

So it certainly looks like the ref keyword is almost the same as in, except that it allows a variable to change its value. But to check that, let’s run our performance test from earlier, this time adding a ref scenario that calls the existing DoSomethingRef method:

	[Benchmark]
	public decimal DoSomethingRefLoop()
	{
		decimal result = 0M;
		for (int i = 0; i < loops; i++)
		{
			result = DoSomethingRef(ref inputInstance);
		}
		return result;
	}

             Method |     Mean |     Error |    StdDev | Scaled | ScaledSD |
------------------- |---------:|----------:|----------:|-------:|---------:|
  DoSomethingInLoop | 23.26 ms | 0.6591 ms | 1.0643 ms |   0.61 |     0.04 |
 DoSomethingRefLoop | 21.10 ms | 0.3985 ms | 0.4092 ms |   0.41 |     0.02 |
    DoSomethingLoop | 51.36 ms | 1.0188 ms | 2.5372 ms |   1.00 |     0.00 |

So it’s close when it comes to performance benchmarking. However, in is still more explicit than ref, because it tells the developer that while the variable is being passed by reference, its value is not going to change.

Important Performance Notes For In

While writing the performance tests for this post, I kept running into instances where using in gave absolutely no performance benefit whatsoever compared to passing by value. I was pulling my hair out trying to understand exactly what was going on.

It wasn’t until I took a step back and thought about how in could work under the hood, coupled with a Stack Overflow question or two, that I had the nut cracked. Consider the following code:

using System;

struct MyStruct
{
	public int MyValue { get; set; }

	public void UpdateMyValue(int value)
	{
		MyValue = value;
	}
}

class Program
{
	static void Main(string[] args)
	{
		MyStruct myStruct = new MyStruct();
		myStruct.UpdateMyValue(1);
		UpdateMyValue(myStruct);
		Console.WriteLine(myStruct.MyValue);
		Console.ReadLine();
	}

	static void UpdateMyValue(in MyStruct myStruct)
	{
		myStruct.UpdateMyValue(5);
	}
}

What will the output be? If you guessed 1, you would be correct! So what’s going on here? We definitely told our struct to set its value to 1, then we passed it by reference via the in keyword to another method, and that told the struct to update its value to 5. And everything compiled and ran happily, so we should be good, right? Surely we should see the output as 5?

The problem is, C# has no way of knowing, when it calls a method (or a getter) on a struct, whether that call will also modify the struct’s values/state. What it does instead is create what’s called a “defensive copy”. When you run a method/getter on a struct passed by in, it creates a clone of the struct and runs the method on the clone instead. This means the original stays exactly as it was passed in, and the caller can still count on the value not being modified.

Now where this creates a bit of a jam is that if we are cloning the struct (even as a defensive copy), then the performance gain is lost. We still get the design-time niceness of knowing that what we pass in won’t be modified, but when it comes to performance we may as well have passed by value. You’ll see in my tests that I read public fields directly rather than calling methods or property getters, which avoids this issue. If you are not using structs at all and instead passing primitive value types such as int or decimal, you avoid the issue altogether.
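
Another way around the defensive copy, also added in C# 7.2, is to declare the struct itself as readonly: the compiler then knows that no member can mutate state, so it can safely skip the copy when you call members on an in parameter. A minimal sketch (the Point type here is my own example):

```csharp
using System;

// Declaring the struct readonly guarantees no member mutates state,
// so calling members on an `in` parameter needs no defensive copy.
public readonly struct Point
{
    public readonly double X;
    public readonly double Y;

    public Point(double x, double y)
    {
        X = x;
        Y = y;
    }

    // Safe to call via an `in` parameter without triggering a copy.
    public double LengthSquared() => X * X + Y * Y;
}

public static class Geometry
{
    // No defensive copy here, because Point is a readonly struct.
    public static double SquaredLength(in Point p) => p.LengthSquared();
}
```

The readonly modifier also forces every field to be readonly, so the compiler will stop you from accidentally writing a mutating member in the first place.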

To me this could crop up in the future as a bit of a problem. A method may inadvertently call something that modifies the struct’s state, and now it’s running off “different” data than what the caller is expecting. I also question how this will work in multi-threaded scenarios. What if the caller modifies the struct, expecting the method to see the updated value, but the method is working on a defensive clone? Plenty to ponder (and code out to test in the future).

Summary

So will there be a huge awakening of using the in keyword in C#? I’m not sure. When expression-bodied members came along I didn’t think too much of them, but they are used in almost every piece of C# code these days. Unlike expression-bodied members though, the in keyword doesn’t save you any typing, so it could just be resigned to one of those “best practices” lists. Most of all I’m interested in how in transforms over future versions of C# (because I really think it has to). What do you think?
