Getting Set Up With C# 9

If you aren't sure whether you are using C# 9, or you want to start using some of the shiny new features in the C# language, be sure to read our quick guide on getting set up with C# 9 and .NET 5. Every feature written about here is available in the latest C# 9 preview and is not a "theoretical" feature; it's ready to go!

C# 9 Features

We are slowly working our way through all of the new C# 9 features. If you are interested in the other new additions to the language, check out some of the posts below.

What We Have Currently

Before we jump into C# 9 and init-only properties, let's take a quick look at the problem this feature is actually trying to solve. In some classes, typically "model" classes, we want properties to be publicly readable but not settable from outside the class. So something like this is pretty common:

public class Person
{
    public string Name { get; private set; }
}

We can now set the Name property anywhere inside the class, for example in a constructor or from a method, pretty easily:

public class Person
{
    public Person(string name)
    {
        this.Name = name;
    }

    public void SetName(string name)
    {
        this.Name = name;
    }

    public string Name { get; private set; }
}

But what's harder now is that we can't use an object initializer instead of a constructor, even if we are "newing" up the object for the first time. For example:

var person = new Person
{
    Name = "Jane Doe" // Compile Error 
};

No bueno! We get the following error:

The property or indexer 'Person.Name' cannot be used in this context because the set accessor is inaccessible

It's not the end of the world, but pretty annoying nonetheless.

But there's actually another, bigger issue: in some cases we may want a property to be set inside a constructor *or* an object initializer, and then never changed after the object is created. Essentially, an immutable property.

With private set, there's actually nothing stopping us from modifying the property after creation from inside the class. For example, a method can be called to edit the property just fine, like so:

public class Person
{
    public void SetName(string name)
    {
        this.Name = name;
    }

    public string Name { get; private set; }
}

And that's the larger problem here. For many years, C# developers have used private set; as a way to half achieve immutability, by only setting that property within a constructor or similar. That sort of works for libraries where you don't have access to the actual source code, but nothing actually signals the original developer's intent that this property should *only* be set on object creation and never again.

Introducing Init-Only Properties

Let's take our Person class and modify it like so:

public class Person
{
    public string Name { get; init; }
}

Notice how we change our "set" to "init". Pretty easy. Now let's look at how that affects code:

var person = new Person();
person.Name = "Jane Doe"; // Compile Time Error

Immediately the compiler throws this error:

Init-only property or indexer 'Person.Name' can only be assigned in an object initializer, or on 'this' or 'base' in an instance constructor or an 'init' accessor

OK, so far this is pretty similar to the "private set" we were using before. But notice that object initializers now do work:

var person = new Person
{
    Name = "Jane Doe" // Works just fine!
};

How about in the constructor?

public class Person
{
    public Person(string name)
    {
        this.Name = name;
    }

    public string Name { get; init; }
}

Also compiles fine! And what about in a method on the class:

public class Person
{
    public void SetName(string name)
    {
        this.Name = name; // Compile Error
    }

    public string Name { get; init; }
}

Boom! Everything blows up. So we’ve got that immutability we’ve always been after with private set, but it’s actually enforced this time!

Init is a relatively small change on the face of it, just an extra keyword for properties, but it makes complete sense why it's been added, and I'm sure it will be a welcome change for all C# developers.

Comparison With Readonly Variables

So I know someone on Twitter will definitely shoot me down for saying that we "finally" have immutable properties when we've had the readonly keyword for quite some time now. But there are some differences that really make the init change a huge quality of life addition.

So this does create an "immutable property":

public class Person
{
    public Person(string name)
    {
        _name = name;
    }

    private readonly string _name;
    public string Name => _name;
}

However, you must now use the constructor; you cannot do this using an object initializer:

var person = new Person
{
    Name = "Jane Doe" // Compile Error
};

Also, the fact that you have to create a backing variable for the property is rather annoying, so… I still love the init addition.

Side note: someone pointed out in the comments that you can create a hidden backing field just by having a single get accessor:

public class Person
{
    public string Name { get; }

    public Person(string name)
        => Name = name;
}

So this also works and is pretty close to immutability, but again, no object initializer (and it doesn't really signal the intent, I think). But shout out to Antão!

I should also note that init accessors actually work well with readonly fields if you do need one, since init only runs at object construction, which fits the readonly paradigm just fine. As an example, this compiles and runs fine:

public class Person
{
    private readonly string _name;

    public string Name
    {
        get => _name;
        init => _name = value;
    }
}

Troubleshooting

If you get the following error:

The feature 'init-only setters' is currently in Preview and *unsupported*. To use Preview features, use the 'preview' language version.	

It means you are using a version of C# lower than 9 (or at least not the latest preview version). Check out our quick guide on getting set up with C# 9 and all its features here: https://dotnetcoretutorials.com/2020/08/07/getting-setup-with-c-9-preview/

ENJOY THIS POST?
Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this week's blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

What We Currently Have

I remember reading the words "relational pattern matching" and kinda skipping over the feature because I wasn't really sure what it was (and so, my developer brain said, it can't be that important!). But when I eventually found actual examples of it in action, I couldn't believe that C# didn't already have this feature.

So first, let's mock up a problem in C# 8 that we should be able to solve with relational pattern matching in C# 9, to actually see its power.

Let's take the following code:

int myValue = 1;

if (myValue <= 0)
    Console.WriteLine("Less than or equal to 0");
else if (myValue > 0 && myValue <= 10)
    Console.WriteLine("More than 0 but less than or equal to 10");
else
    Console.WriteLine("More than 10");

How could we improve it? Well, in most cases I would probably still use the if/else jumble from above just for its simplicity, but if you wanted to, you could use the pattern matching introduced in C# 7 with a switch statement. That would look something like this:

switch(myValue)
{
    case int value when value <= 0:
        Console.WriteLine("Less than or equal to 0");
        break;
    case int value when value > 0 && value <= 10:
        Console.WriteLine("More than 0 but less than or equal to 10");
        break;
    default:
        Console.WriteLine("More than 10");
        break;
}

Not bad, but we don't really gain anything over the if/else statements; in fact, we actually add even more complexity. If we are using C# 8, then we can use switch expressions (a great introduction here if you've never seen switch expressions before: https://dotnetcoretutorials.com/2019/06/25/switch-expressions-in-c-8/). That would instead look something like:

var message = myValue switch
{
    int value when value <= 0 => "Less than or equal to 0",
    int value when value > 0 && value <= 10 => "More than 0 but less than or equal to 10",
    _ => "More than 10"
};

Console.WriteLine(message);

This actually looks pretty good. But there is *so* much verbosity going on here that surely we can clean it up, right?

Adding The “is”/”and” Keywords

This actually isn't specific to our example, but C# 9 has introduced the "is" and "and" keywords (we're going to do an entire article on these in the future, because there is actually some really cool stuff you can do with them). And they can help us here; we can now change this:

if (myValue > 0 && myValue <= 10)
    Console.WriteLine("More than 0 but less than or equal to 10");

To this:

if (myValue is > 0 and <= 10)
    Console.WriteLine("More than 0 but less than or equal to 10");

It's just some sugar for saying "take this value, and then let me run 1 to N checks on it all at once". In long if statements where you check multiple different conditions against a single variable/property, this is a huge help.
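To see why that matters, here's a small, hypothetical example (the httpStatus variable is made up purely for illustration), showing the same check before and after C# 9:

```csharp
using System;

int httpStatus = 503;

// Before C# 9, we had to repeat the variable for every comparison:
if (httpStatus >= 500 && httpStatus <= 599)
    Console.WriteLine("Server error");

// With "is" and "and", we state the variable once and chain the checks:
if (httpStatus is >= 500 and <= 599)
    Console.WriteLine("Server error");
```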

So obviously we can rewrite our If/Else statement from above to look like so:

if (myValue <= 0)
    Console.WriteLine("Less than or equal to 0");
else if (myValue is > 0 and <= 10)
    Console.WriteLine("More than 0 but less than or equal to 10");
else
    Console.WriteLine("More than 10");

And while that middle statement is a bit tidier, overall we are still writing quite a bit.

Relational Pattern Matching In Switch Statements/Expressions

Now we are cooking with gas. In C# 9, we can turn this big case statement:

case int value when value > 0 && value <= 10:

Into this:

case > 0 and <= 10:

So we can do away with declaring a variable for the pattern, and then add in our new "and" keyword to make things even tidier. That turns our entire switch statement into this:

switch(myValue)
{
    case <= 0:
        Console.WriteLine("Less than or equal to 0");
        break;
    case > 0 and <= 10:
        Console.WriteLine("More than 0 but less than or equal to 10");
        break;
    default:
        Console.WriteLine("More than 10");
        break;
}

Pretty tidy! But we can take it further with our switch expression:

var message = myValue switch
{
    <= 0 => "Less than or equal to 0",
    > 0 and <= 10 => "More than 0 but less than or equal to 10",
    _ => "More than 10"
};

Console.WriteLine(message);

Holy moly that’s tidy! And that’s relational pattern matching with C# 9!

Troubleshooting

If at any point you get the following errors:

The feature 'and pattern' is currently in Preview and *unsupported*. 
The feature 'relational pattern' is currently in Preview and *unsupported*. 

It means you are using a version of C# lower than 9. Check out our quick guide on getting set up with C# 9 and all its features here: https://dotnetcoretutorials.com/2020/08/07/getting-setup-with-c-9-preview/

What Is Target Typing?

I'm going to be honest: before C# 9, where they talk about "improved" target typing, I had never actually heard the term before. But it's actually very simple. It's basically a way of saying "given the context of what I'm doing, can the compiler infer the type?". The var keyword is an example of target typing. var is actually a good example here because it's almost the reverse of the improvements to target typing in C# 9, but let's jump into those.
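As a quick refresher, here's the var flavour of that (nothing C# 9 specific, just plain inference from the right-hand side):

```csharp
using System;
using System.Collections.Generic;

// With var, the type information flows right to left: the compiler
// inspects the expression and infers the variable's type from it.
var number = 42;                 // inferred as int
var names = new List<string>();  // inferred as List<string>

Console.WriteLine(number.GetType().Name); // Int32
Console.WriteLine(names.Count);           // 0
```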

Target Typed New Expressions

Target typed new expressions are basically just a fancy way of saying that we don't have to repeat the type after the new() expression… That probably doesn't make it any clearer, so here's a sample piece of code:

class Person
{
    public string FirstName { get; set; }
}

class MyClass
{
    void MyMethod()
    {
        Person person = new Person();
    }
}

Now in C# 9, you can do:

class Person
{
    public string FirstName { get; set; }
}

class MyClass
{
    void MyMethod()
    {
        Person person = new(); //<-- This!
    }
}

Since you've already declared the type of your variable, the compiler can infer that when you call new() without a type, you're newing up exactly that type.

It even works with constructors with parameters!

class Person
{
    public Person(string firstName)
    {
        this.FirstName = firstName;
    }
    public string FirstName { get; set; }
}

class MyClass
{
    void MyMethod()
    {
        Person person = new("John");
    }
}

Unfortunately, there is a really big caveat. Constructors (I feel, anyway) have almost become the minority in my code, especially because of the heavy use of dependency injection in today's coding. So typically when I am newing up an object, I am setting properties at the same time, like so:

Person person = new Person
{
    FirstName = "John"
};

But of course this doesn't work:

Person person = new
{
    FirstName = "John"
};

That's because writing new like that without a type, then going straight into curly braces, tells C# you are creating an "anonymous" object. Interestingly, in the current C# 9 preview with Visual Studio, you can do the "double up" approach like so:

Person person = new()
{
    FirstName = "test"
};

Intellisense *does not* work right now when doing this (e.g. it won't auto-complete FirstName for me), but once it's written, it will compile. I chuckle because at the moment, if you tried this in C# 8, Visual Studio would gray out the () because they aren't needed, and typically in a code review someone would add a nit saying "hey, you don't need the parentheses". But now I guess there is a reason to have them!

Another huge caveat is the use of the var keyword. For obvious reasons, this doesn't work:

var person = new();

As I mentioned earlier, this is almost like using var, but coming from the other side. If your code doesn't use the var keyword that often, then this may be of use; but for me, I almost exclusively use var these days when newing up objects, so it's not going to be a common tool in my arsenal.

That being said, in cases where you cannot use var (class-level fields, returning new objects from a method, etc.), this fits perfectly.
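For example, a quick sketch of those two spots (the PersonRepository class and its members are invented purely for illustration):

```csharp
using System.Collections.Generic;

public class Person
{
    public Person(string firstName) => FirstName = firstName;
    public string FirstName { get; set; }
}

public class PersonRepository
{
    // var has never been allowed on fields, so before C# 9 the type had
    // to be written out on both sides. Target typed new removes that.
    private readonly Dictionary<string, Person> _cache = new();

    // var can't express a method's return type either, but new() can
    // lean on it.
    public Person CreateDefault() => new("John");
}
```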

Target Typing Conditional Operators

What we are about to talk about *should* work in C# 9, but under the current preview it doesn't. Apparently it did in a previous preview version, but… right now it's a bit busted. Frustrating! We'll still talk about it anyway, because it falls under the umbrella of "target typing".

Target typed conditional operators are basically the compiler looking for ways to make your conditional (?:) and null-coalescing (??) operators work. For example, the following code:

class Program
{
    static void Main(string[] args)
    {
        Cat cat = null;
        Dog dog = new Dog();
        IAnimal animal = cat ?? dog;
    }
}

interface IAnimal
{

}

class Dog : IAnimal
{

}

class Cat : IAnimal
{

}

Notice how Cat and Dog both implement the same interface (IAnimal), and we are saying: if cat is null, then assign dog to a variable of type IAnimal. Right now in C# 8 (and… C# 9 preview 7), this doesn't work and the compiler complains. But it makes total sense for this code to work, because the developer's intent is clear, and it doesn't break any existing paradigms in the language.

For example, doing this does work in all versions of C#:

Cat cat = null;
Dog dog = new Dog();
IAnimal animal = cat;
if (animal == null)
    animal = dog;

So it’s really just allowing the ?? operator to do this for us.

Another example, not using classes, would be something such as:

Dog dog = null;
int? myVariable = dog == null ? 0 : null;

This fails because there is no conversion between 0 and null. But we are assigning to a nullable integer, so clearly there is a common type here and the developer's intent is pretty clear. In fact, this does work if you do something such as:

Dog dog = null;
int? myVariable = dog == null ? 0 : default(int?);

Or even

Dog dog = null;
int? myVariable = dog == null ? (int?)0 : null;

So again, this is all about wrapping some sugar around things we already do, to make developers' lives easier.

Again, I must stress that the improved target typing for conditional operators is not quite in the preview yet, but should be there very soon.

Troubleshooting

Just a quick note for this feature. If you see:

The feature 'target-typed object creation' is currently in Preview and *unsupported*.

Then you *must* have the latest preview SDK installed and your csproj file updated to handle C# 9. More info here: https://dotnetcoretutorials.com/2020/08/07/getting-setup-with-c-9-preview/

 


As .NET 5 is rolled out, so too is another version of C#, this time C# 9. I'm actually pretty excited about some of the features that, fingers crossed, should make it into the final release. But as always, I don't want to wait for the official release, and would rather jump in early and play with all the new shiny things.

So in actual fact, getting set up with C# 9 is basically the same as getting set up with the .NET 5 preview. And luckily for you, we have a shiny guide right here: https://dotnetcoretutorials.com/2020/05/22/getting-started-with-net-5-and-the-msbuild-net-5-cliffnotes/. But here are the cliffnotes of that post:

  • Download and install the latest .NET 5 SDK from here : https://dotnet.microsoft.com/download/dotnet/5.0
  • Ensure that your version of Visual Studio 2019 is at least 16.7 by clicking Help => Check For Updates inside Visual Studio. If in doubt, update.
  • Go to Tools => Options inside Visual Studio, then select "Preview Features" and tick the box that says "Use previews of the .NET Core SDK". Then restart Visual Studio.

Once the .NET 5 preview SDK is installed and set up, the only thing you need to do is edit your .csproj file and add a LangVersion element like so:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
    <LangVersion>9.0</LangVersion>
  </PropertyGroup>
</Project>

Troubleshooting

If you’ve done all of that and you get either of these errors :

The reference assemblies for .NETFramework,Version=v5.0 were not found. 

Or

Invalid option '9.0' for /langversion. Use '/langversion:?' to list supported values.

Then here's your quick and easy troubleshooting list:

  • Are you sure you’ve got the latest .NET 5 SDK installed? Remember it’s the SDK, not the runtime, and it’s .NET 5.
  • Are you sure your Visual Studio is up to date? Even if you have the SDK installed, Visual Studio plays by its own rules and needs to be updated.
  • Are you using Visual Studio 2019? Any version lower will not work.
  • Are you sure you enabled preview SDKs? Remember: Tools => Options, then "Preview Features".

Keeping Up To Date

A new preview version of .NET 5/C# 9 comes out every few months. So if you're reading about a new C# 9 or .NET 5 feature that someone is using but you can't seem to get it to work, always head back to https://dotnet.microsoft.com/download/dotnet/5.0 and install the latest preview version. The same goes for Visual Studio: while it's typically less of an issue, try to keep it up to date, as so often the reason the latest features aren't working is simply that you're on a version that was perfectly fine a month ago but is now out of date.

C# 9 Features

We are slowly working our way through all of the new C# 9 features. If you are interested in the other new additions to the language, check out some of the posts below.

All of these will be covered in detail soon!


If you've ever taken part in an AI challenge or contest over the years, you've probably had to work out a pathfinding algorithm along the way. I remember many moons ago, as part of the Google AI Challenge (which sadly ended a few years ago), I actually swapped my solution to Java just so I could use an A* search algorithm that I found on the internet. C# can be a bit weird in that, for business applications, you can find a million examples on how to talk to SharePoint, but when it comes to AI, machine learning, or even just data structures and algorithms in general, it can be a bit bare. So I thought I would quickly whip up a post on a dead simple implementation of the A* pathfinding algorithm in C#.

What Is A* Search?

It probably makes sense to rewind a little bit and ask: what exactly is the A* algorithm? A* search is a pathfinding algorithm. Or, in simpler terms: given a map, starting at one location, what is the most efficient way of getting to a second location, walking around walls and obstacles and avoiding dead ends?

For example, if we had a map that looked like so:

A           |
--| |-------|
            | 
   |-----|  | 
   |     |  |
---|     |B |

How do we get from point A to point B? Obviously we can't walk through walls, yet a human can easily trace a nice easy path with their eye, so why can't a computer do it? Well, it can, using A*!

The main thing about A* search is that it's cost based. That is, it tries to find the optimal path to take, even if doing so takes additional processing.

Wikipedia will obviously explain it a heck of a lot better than me: https://en.wikipedia.org/wiki/A*_search_algorithm
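Before we get to the code: stripped of all the map-walking machinery, the scoring rule at the heart of A* can be sketched in a few lines. The Score helper below is just illustrative; in the code that follows, the same idea lives in the Tile class as CostDistance:

```csharp
using System;

// Every candidate tile gets a score:  f = g + h
//   g = the cost already paid to reach the tile
//   h = an estimate of the cost remaining (here, Manhattan distance:
//       tiles left/right plus tiles up/down, ignoring walls)
// A* always works on the tile with the lowest f first.
static int Score(int costSoFar, int x, int y, int targetX, int targetY)
    => costSoFar + Math.Abs(targetX - x) + Math.Abs(targetY - y);

// Two steps taken so far, an estimated four tiles still to go:
Console.WriteLine(Score(2, 0, 0, 2, 2)); // 6
```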

Jumping into the Code

While I'll walk through the code in this post, you can get the entire gist here: https://gist.github.com/DotNetCoreTutorials/08b0210616769e81034f53a6a420a6d9, which has a complete working example. But I do recommend following along in this post anyway, as I try to explain the code as we go, so it may be a bit easier to digest.

The first thing we want to do is create a “Tile” object. This essentially represents a square on a grid.

class Tile
{
	public int X { get; set; }
	public int Y { get; set; }
	public int Cost { get; set; }
	public int Distance { get; set; }
	public int CostDistance => Cost + Distance;
	public Tile Parent { get; set; }

	//The distance is the estimated distance to our target, ignoring walls.
	//So: how many tiles left/right and up/down, ignoring walls, to get there.
	public void SetDistance(int targetX, int targetY)
	{
		this.Distance = Math.Abs(targetX - X) + Math.Abs(targetY - Y);
	}
}

To walk through the properties of this object:

  • X,Y are locations on a grid that we will use.
  • Cost is how many tiles we had to traverse to reach here. So for example if this is right next to the starting tile, it would be a cost of “1”. If it was two tiles to the right, it would be a cost of 2 etc.
  • Distance is the distance to our destination (e.g. the target tile). This is worked out using the SetDistance method where it’s basically, ignoring all walls, how many tiles left/right and up/down would it take to reach our goal.
  • CostDistance is essentially the Cost + the Distance. It’s useful later on because given a set of tiles, we work out which one to “work on” by ordering them by the CostDistance. e.g. How many tiles we’ve moved so far + how many tiles we think it will probably take to reach our goal. This is important!
  • Parent is just the tile we came from to get here.

Inside our main method (we are doing this inside a console app, but you are welcome to adapt it to your needs), we set up some basic code:

static void Main(string[] args)
{
	List<string> map = new List<string>
	{
		"A          ",
		"--| |------",
		"           ",
		"   |-----| ",
		"   |     | ",
		"---|     |B"
	};

	var start = new Tile();
	start.Y = map.FindIndex(x => x.Contains("A"));
	start.X = map[start.Y].IndexOf("A");

	var finish = new Tile();
	finish.Y = map.FindIndex(x => x.Contains("B"));
	finish.X = map[finish.Y].IndexOf("B");
	
	start.SetDistance(finish.X, finish.Y);

	var activeTiles = new List<Tile>();
	activeTiles.Add(start);
	var visitedTiles = new List<Tile>();
}

  • Set up the "map", which is essentially a grid. In our example we use a list of strings, but this grid or map can be made up any way you like.
  • We record the "start" tile, e.g. where we start.
  • We record the "finish" tile, e.g. where we finish.
  • And we create a list of "ActiveTiles" and "VisitedTiles". We populate the "ActiveTiles" with our start tile. These are the tiles that we will work on.

The next thing we need is a little helper method. Given a particular tile (and the target tile), it returns all the tiles around it; it's used to find the next set of tiles we can try to walk on. We actually don't do many checks inside here (for example, whether it's even optimal to walk to the tile beside us; we may be walking the wrong way!), but we do check whether each tile is valid to walk on.

The code looks like so:

private static List<Tile> GetWalkableTiles(List<string> map, Tile currentTile, Tile targetTile)
{
	var possibleTiles = new List<Tile>()
	{
		new Tile { X = currentTile.X, Y = currentTile.Y - 1, Parent = currentTile, Cost = currentTile.Cost + 1 },
		new Tile { X = currentTile.X, Y = currentTile.Y + 1, Parent = currentTile, Cost = currentTile.Cost + 1},
		new Tile { X = currentTile.X - 1, Y = currentTile.Y, Parent = currentTile, Cost = currentTile.Cost + 1 },
		new Tile { X = currentTile.X + 1, Y = currentTile.Y, Parent = currentTile, Cost = currentTile.Cost + 1 },
	};

	possibleTiles.ForEach(tile => tile.SetDistance(targetTile.X, targetTile.Y));

	var maxX = map.First().Length - 1;
	var maxY = map.Count - 1;

	return possibleTiles
			.Where(tile => tile.X >= 0 && tile.X <= maxX)
			.Where(tile => tile.Y >= 0 && tile.Y <= maxY)
			.Where(tile => map[tile.Y][tile.X] == ' ' || map[tile.Y][tile.X] == 'B')
			.ToList();
}

So we generate the tiles above, below, left and right of us. We set each tile's distance to the goal. Then we check whether the tile is within the bounds of our map (e.g. X is not less than 0). Finally, we ensure that the tile we want to walk to is either empty (e.g. no wall) or is actually our destination.

Also note that we always set the Cost of a new tile to the parent's cost + 1. This makes sense, as whatever the cost of the parent was, one more step is always going to cost… one more. It kinda seems dumb to point out, but it is important!

Now we are into the meat and bones of this. Heading back to our Main method (I've commented where the generation of the map from our previous step was), we are going to loop through our tiles and actually start walking the map!

static void Main(string[] args)
{
	//This is where we created the map from our previous step etc. 

	while(activeTiles.Any())
	{
		var checkTile = activeTiles.OrderBy(x => x.CostDistance).First();

		if(checkTile.X == finish.X && checkTile.Y == finish.Y)
		{
			Console.WriteLine("We are at the destination!");
			//We can actually loop through the parents of each tile to find our exact path which we will show shortly. 
			return;
		}

		visitedTiles.Add(checkTile);
		activeTiles.Remove(checkTile);

		var walkableTiles = GetWalkableTiles(map, checkTile, finish);

		foreach(var walkableTile in walkableTiles)
		{
			//We have already visited this tile so we don't need to do so again!
			if (visitedTiles.Any(x => x.X == walkableTile.X && x.Y == walkableTile.Y))
				continue;

			//It's already in the active list, but that's OK, maybe this new tile has a better value (e.g. We might zigzag earlier but this is now straighter). 
			if(activeTiles.Any(x => x.X == walkableTile.X && x.Y == walkableTile.Y))
			{
				var existingTile = activeTiles.First(x => x.X == walkableTile.X && x.Y == walkableTile.Y);
				if(existingTile.CostDistance > checkTile.CostDistance)
				{
					activeTiles.Remove(existingTile);
					activeTiles.Add(walkableTile);
				}
			}else
			{
				//We've never seen this tile before so add it to the list. 
				activeTiles.Add(walkableTile);
			}
		}
	}

	Console.WriteLine("No Path Found!");
}

So how does this work?

  • First, we keep looping until we have no more "active" tiles, that is, until there's basically nowhere we haven't walked. If we break out of this loop, it means we couldn't find any way to reach our goal (e.g. walls in the way). This is important so we don't end up in an infinite loop.
  • Next we take a tile off our list. Note that it’s not the first tile, nor the last tile. It’s the tile with the lowest current CostDistance. This ensures that we are always working on the most cost effective path at that very moment. It also ensures that should we come across a tile that is in our VisitedList, we can be sure that the cost of that tile in the VisitedList is going to be lower than whatever we currently have (Because the cost is only going to go up each loop!).
  • If the tile we pull out matches our finish tile, then boom! We are done! I’ll add some more code in here shortly to show how to print out our entire path (Or you can check the Gist!)
  • We remove our tile from the ActiveList, and add it to the VisitedList, as we are working on it now and anyone else that comes to this tile, should just know we’ve taken care of it.
  • Next we get all the tiles adjacent to the current tile using our GetWalkableTiles method. Then we loop through them.
  • If the walkable tile has already been visited in the past, then just ignore it.
  • If the walkable tile is in the active tiles list, then that’s cool! But check that what we have now is not actually a better way to get to the same tile. In most cases this is just because of a small zigzag instead of going directly there.
  • If we’ve never seen the tile before, then add it to the active tiles list.

And that’s it! We repeat this over and over and because we are pulling the tile with the lowest cost each time and walking from it, we can be sure that whenever we find a result, we don’t have to keep processing!

Now, the code I used to output a nice view of the path looks like so:

if(checkTile.X == finish.X && checkTile.Y == finish.Y)
{
	//We found the destination, and we can be sure (because of the OrderBy above)
	//that it's the lowest cost option.
	var tile = checkTile;
	Console.WriteLine("Retracing steps backwards...");
	while(true)
	{
		Console.WriteLine($"{tile.X} : {tile.Y}");
		if(map[tile.Y][tile.X] == ' ')
		{
			var newMapRow = map[tile.Y].ToCharArray();
			newMapRow[tile.X] = '*';
			map[tile.Y] = new string(newMapRow);
		}
		tile = tile.Parent;
		if(tile == null)
		{
			Console.WriteLine("Map looks like :");
			map.ForEach(x => Console.WriteLine(x));
			Console.WriteLine("Done!");
			return;
		}
	}
}

Really all we are doing here is looping through each tile and traversing to its parent. While doing so, we add a * to the map to indicate we walked there, and also output the co-ordinates.

End Result

What does the end result output?

Retracing steps backwards...
10 : 5
10 : 4
10 : 3
10 : 2
9 : 2
8 : 2
7 : 2
6 : 2
5 : 2
4 : 2
3 : 2
3 : 1
3 : 0
2 : 0
1 : 0
0 : 0
Map looks like :
A***
--|*|------
   ********
   |-----|*
   |     |*
---|     |B
Done!

Pretty damn cool!

Again if you are struggling to follow along, you can grab the entire gist as a single file from here : https://gist.github.com/DotNetCoreTutorials/08b0210616769e81034f53a6a420a6d9 . Feel free to modify for your needs!

ENJOY THIS POST?
Join over 3.000 subscribers who are receiving our weekly post digest, a roundup of this weeks blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

I recently got asked a pretty good question about EFCore (Although it does also apply in general to database concepts), and that was :

Should I use RowVersion or ConcurrencyToken for optimistic concurrency?

And the answer is “It depends” and, more specifically, “Do you know what the difference, if any, actually is between the two?”

Let’s rewind a little bit and start with what exactly are Concurrency Tokens, then what is a RowVersion, then finally, how do they compare.

What Is A Concurrency Token?

A concurrency token is a value that will be “checked” every time an update occurs on a database record. By checked, I mean the existing value is used as part of the SQL WHERE statement, so that should it have changed from underneath you, the update will fail. This might occur if two users are trying to edit the same record at the same time.

Or in very crude lucidchart form :

When UserB updates the same record as UserA, at worst he is unwittingly overwriting details from UserA, but even at best he is writing details to a record based on a read that may already be outdated by UserA’s update.

A concurrency token fights this by simply checking that information contained in the original read is still there on the write. Let’s imagine that we have a database table called “User” that looks like so :

Id		int
FirstName	nvarchar(max)
LastName	nvarchar(max)
Version		int

Normally a SQL update statement without a concurrency token might look like so :

UPDATE User
SET FirstName = 'Wade'
WHERE Id = 1

But if we use the Version column as a concurrency token, it might instead look like :

UPDATE User
SET FirstName = 'Wade', Version = Version + 1
WHERE Id = 1 AND Version = 1

The Version value in our WHERE statement is the value we fetched when we read the data originally. This way, if someone has updated a record in the time it took us to read the data and then update it, the Version is not going to match and our Update statement will fail.
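The mechanics are easy to see even without a database. Here’s a minimal in-memory sketch of the same check (the UserRow class and TryUpdate helper are hypothetical, purely to illustrate what the WHERE clause is doing):

```csharp
using System;

class UserRow
{
    public int Id;
    public string FirstName;
    public int Version;
}

static class UserTable
{
    // Mirrors: UPDATE User SET FirstName = @name, Version = Version + 1
    //          WHERE Id = @id AND Version = @expectedVersion
    public static bool TryUpdate(UserRow row, string newFirstName, int expectedVersion)
    {
        if (row.Version != expectedVersion)
            return false; // Someone updated the row since we read it - 0 rows affected.

        row.FirstName = newFirstName;
        row.Version++; // Bump the token so the next stale writer fails too.
        return true;
    }
}
```

Two writers who both read Version = 1 can’t both win: the first update bumps the version to 2, so the second writer’s WHERE clause no longer matches anything.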

In Entity Framework/EF Core, we have two ways to say that a property is a ConcurrencyToken. If you prefer using DataAnnotations you can simply apply an attribute to your models.

[ConcurrencyCheck]
public int Version { get; set; }

Or if you prefer Fluent Configurations (which you should!), then it’s just as easy :

modelBuilder.Entity<People>()
	.Property(p => p.Version)
	.IsConcurrencyToken();

But There’s A Catch!

So that all sounds great! But there’s a catch, a small one, but one that can be quite annoying.

The problem is that, short of some sort of database trigger (ugh!) or some sort of database auto increment field, it’s up to you, the developer, to ensure that you increment the version every time you do an update. Now you can obviously write some EntityFramework extensions to get around this and auto increment things in C#, but it can get complicated really fast.
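The usual workaround is to centralise the bump somewhere a developer can’t forget it. As a rough sketch of the idea (IVersioned and VersionBumper are hypothetical names; in EF you would run something like this over the change tracker’s modified entries just before saving):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical marker interface for any entity carrying a manual concurrency token.
interface IVersioned
{
    int Version { get; set; }
}

// Example entity implementing the marker.
class Person : IVersioned
{
    public string Name { get; set; }
    public int Version { get; set; }
}

static class VersionBumper
{
    // Increment the token on every modified entity so no individual update can forget it.
    public static void Bump(IEnumerable<IVersioned> modifiedEntities)
    {
        foreach (var entity in modifiedEntities)
            entity.Version++;
    }
}
```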

And that’s where a RowVersion comes in.

What Is A RowVersion?

Let’s start with what a RowVersion is in pure SQL Server terms. RowVersion (also known as Timestamp; they are the same thing) is a SQL column type that uses auto generated binary numbers, unique across the database, that are stamped on records. Any time a record is inserted or updated on a table with a RowVersion column, a new unique number is generated (in binary format) and given to that record. Again, RowVersions are unique across that entire database, not just the table.

Now in EntityFramework/EFCore it actually takes a somewhat different meaning because of what the SQL RowVersion is actually used to *achieve*.

Typically inside EF, when someone describes using a RowVersion, they are describing using a RowVersion/Timestamp column as a *ConcurrencyToken*. Now if you remember earlier the issue with just using a pure ConcurrencyToken was that we had to update/increment the value ourselves, but obviously if SQL Server is auto updating using RowVersion, then problem solved!

It actually gets more interesting if we take a look at how EFCore actually works out whether to use a RowVersion or not. The actual code is here : https://github.com/dotnet/efcore/blob/master/src/EFCore/Metadata/Builders/PropertyBuilder.cs#L152

public virtual PropertyBuilder IsRowVersion()
{
	Builder.ValueGenerated(ValueGenerated.OnAddOrUpdate, ConfigurationSource.Explicit);
	Builder.IsConcurrencyToken(true, ConfigurationSource.Explicit);

	return this;
}

Calling IsRowVersion() is simply shorthand for telling EF Core that the property is a ConcurrencyToken and that its value is auto generated. So in fact, if you added both of these configurations to a property manually, EF Core would treat it like a RowVersion even though you haven’t explicitly said it is.

We can see this by checking the code that asks if a column is a RowVersion here : https://github.com/dotnet/efcore/blob/master/src/EFCore.Relational/Metadata/IColumn.cs#L56

bool IsRowVersion => PropertyMappings.First().Property.IsConcurrencyToken
					&& PropertyMappings.First().Property.ValueGenerated == ValueGenerated.OnAddOrUpdate;

So all it actually does is interrogate whether the column is a concurrency token and auto generated. Easy!

I would note that if you had a column that was auto incremented some other way (a DB trigger, for example) and was also a concurrency token, I’m pretty sure EF Core would have issues handling it, but that’s a topic for another day.

In EntityFramework you can setup a RowVersion on a property like so for DataAnnotations :

[Timestamp]
public byte[] RowVersion { get; set; }

And for Fluent Configurations:

modelBuilder.Entity<People>()
	.Property(p => p.RowVersion)
	.IsRowVersion();

Even though you specify that a column should be a RowVersion, the actual implementation of how that works (e.g. the data type, and exactly how it gets updated) is very dependent on the database and its C# provider. Different databases can implement RowVersion how they like, but in SQL Server at least, it’s a byte[] type.

Note that when using RowVersion with EntityFramework, there is nothing more you really need to do to get up and running. Anytime you update a record with a RowVersion property, it will automatically add that column to the WHERE statement giving you optimistic concurrency right out of the box.

So ConcurrencyToken vs RowVersion?

So let’s go back to the original question of when you should use a Concurrency Token vs when you should use a RowVersion. The answer is actually very simple. If you want an auto incremented concurrency token, and you don’t care how it gets incremented or what its data type is, use RowVersion. If you care about the data type of your concurrency token, or you specifically want to control how and when it gets updated, use a Concurrency Token and manage the incrementing yourself.

What I’ve generally found is that when people have suggested using Concurrency Tokens, what they actually mean is using RowVersion. In fact, it’s probably easier to say that RowVersion (in the Entity Framework sense) is a *type* of Concurrency Token.


I contract/freelance a lot for companies that are dipping their toes into .NET Core, but don’t want to use Microsoft SQL Server – so they either want to use PostgreSQL or MySQL. The thing that gets me is how often these companies are wary about the ability of .NET Core to talk to anything non-Microsoft. I’ve spent a surprising amount of time on calls trying to explain that, for the most part, it really doesn’t matter which tech choice they go with if all they are expecting from .NET Core’s point of view is to run simple commands.

Maybe if you’re overlaying something like EF Core or a very heavy ORM you might have issues. But in my experience, when using something like Dapper that allows you to really control the queries you are running, it doesn’t make a heck of a lot of difference which SQL database you pick.

I would also add that for both MySQL and Postgres, I’ve had .NET Core apps running inside Linux (containers and VMs) with absolutely no issue. That also seems to get asked a lot, “OK so this can talk to MySQL but can it talk to MySQL from Linux”… err… yes, yes it can!

This is going to be a really short and sweet post because there really isn’t a lot to it!

Intro To Dapper

If you’ve never used Dapper before, I highly recommend this previous write up on getting started with Dapper. It covers a lot of the why and where we might use Dapper, including writing your first few queries with it.

If you want to skip over that. Just understand that Dapper is a lightweight ORM that handles querying a database and turning the rows into your plain objects with very minimal fuss and overhead. You have to write the queries yourself, so no Linq2SQL, but with that comes amazing control and flexibility. In our case, that flexibility is handy when having to write slightly different commands across different types of SQL Databases, because Dapper itself doesn’t have to translate your LINQ to actual queries, instead that’s on you!

MySQL With Dapper

When working with MySQL in .NET Core, you have to install the following nuget package :

Install-Package MySql.Data

Normally when creating a SQL Connection you would do something like so :

using (var connection = new SqlConnection("Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password=myPassword;"))
{
	connection.Query<MyTable>("SELECT * FROM MyTable");
}

With MySQL you would do essentially the same thing, but instead you use the MySqlConnection class :

using (var connection = new MySqlConnection("Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;"))
{
	connection.Query<MyTable>("SELECT * FROM MyTable");
}

And that’s pretty much it! Obviously the syntax for various queries may change (e.g. using LIMIT in MySQL instead of TOP in MSSQL), but the actual act of talking to the database is all taken care of for you, and you literally don’t have to do anything else.
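As a trivial illustration of that kind of dialect difference, here’s a hypothetical little helper for limiting a result set across the three databases (the SqlDialect enum and SqlHelper class are made-up names for the sake of the example):

```csharp
using System;

enum SqlDialect
{
    SqlServer,
    MySql,
    Postgres
}

static class SqlHelper
{
    // SQL Server uses TOP; MySQL and Postgres both use LIMIT.
    public static string SelectTop(SqlDialect dialect, string table, int count)
    {
        return dialect == SqlDialect.SqlServer
            ? $"SELECT TOP {count} * FROM {table}"
            : $"SELECT * FROM {table} LIMIT {count}";
    }
}
```

Because Dapper takes the query string as-is, this is the only place the dialect ever matters; the `connection.Query<T>(...)` call itself is identical across all three.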

PostgreSQL With Dapper

If you’ve read the MySQL portion above.. well.. You can probably guess how Postgres is going to go.

First install the following nuget package :

Install-Package Npgsql

Then again, our normal SQL Connection looks like so :

using (var connection = new SqlConnection("Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password=myPassword;"))
{
	connection.Query<MyTable>("SELECT * FROM MyTable");
}

And our Postgres connection instead looks like so using the NpgsqlConnection class :

using (var connection = new NpgsqlConnection("User ID=root;Password=myPassword;Host=localhost;Port=5432;Database=myDataBase;"))
{
	connection.Query<MyTable>("SELECT * FROM MyTable");
}

Too easy!


I was writing some reflection code the other day where I wanted to search through all of my assemblies for a particular interface, and call a method on it at startup. Seemed pretty simple but in reality there is no clear, easy, one size fits all way to get assemblies. This post may be a bit dry for some, but if I help just one person from banging their head against the wall with this stuff, then it’s worth it!

I’m not really going to say “Use this one”, because of the ways to get assemblies, it’s highly likely only one way will work for your particular project so it’s pointless trying to lean towards one or the other. Simply try them all and see which one makes the most sense!

Using AppDomain.GetAssemblies

So the first option you might come across is AppDomain.GetAssemblies. It (seemingly) loads all Assemblies in the AppDomain, which is to say essentially every Assembly used in your project. But there is a massive caveat. Assemblies in .NET are lazy loaded into the AppDomain. It doesn’t load every possible assembly all at once; instead, it waits for you to make a call to a method/class in that assembly, and then loads it up – e.g. loads it Just In Time. Makes sense, because there’s no point loading an assembly up if you never use it.

But the problem is that at the point you call AppDomain.GetAssemblies(), if you have not made a call into a particular assembly, it will not be loaded! And if you are getting all assemblies in a startup method, it’s highly likely you won’t have called into that assembly yet, meaning it’s not loaded into the AppDomain!

Or in code form :

AppDomain.CurrentDomain.GetAssemblies(); // Does not return SomeAssembly as it hasn't been called yet. 
SomeAssembly.SomeClass.SomeMethod();
AppDomain.CurrentDomain.GetAssemblies(); // Will now return SomeAssembly. 

So while this might look like an attractive option, just know that timing is everything with this method.

Using The AssemblyLoad Event

Because you can’t be sure when you call CurrentDomain.GetAssemblies() that everything is loaded, there is actually an event that will run when the AppDomain loads another Assembly. Basically, when an assembly is lazy loaded, you can be notified. It looks like so :

AppDomain.CurrentDomain.AssemblyLoad += (sender, args) =>
{
    var assembly = args.LoadedAssembly;
};

This might be a solution if you just want to check something when Assemblies are loaded, but that process doesn’t necessarily have to happen at a certain point in time (e.g. Does not have to happen within the Startup.cs of your .NET Core app).

The other problem with this is that you can’t be sure that by the time you’ve added your event handler, assemblies haven’t already been loaded (in fact, they most certainly will have). So what then? You need to duplicate the effort: first add your event handler, then immediately check AppDomain.CurrentDomain.GetAssemblies for anything that has already been loaded.

It’s a niche solution, but it does work if you are fine with doing something with the lazy loaded assemblies.
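Putting those two steps together might look something like this (AssemblyWatcher is a made-up name for the sketch; the dictionary de-dupes assemblies that show up both in the initial sweep and via the event):

```csharp
using System;
using System.Collections.Concurrent;
using System.Reflection;

static class AssemblyWatcher
{
    static readonly ConcurrentDictionary<string, Assembly> Seen =
        new ConcurrentDictionary<string, Assembly>();

    public static void Start(Action<Assembly> onAssembly)
    {
        // Catch anything lazy loaded from this point on...
        AppDomain.CurrentDomain.AssemblyLoad +=
            (sender, args) => Register(args.LoadedAssembly, onAssembly);

        // ...then sweep up everything that was already loaded before we subscribed.
        foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies())
            Register(assembly, onAssembly);
    }

    static void Register(Assembly assembly, Action<Assembly> onAssembly)
    {
        // TryAdd guarantees each assembly is handled exactly once,
        // even if the event and the sweep both report it.
        if (Seen.TryAdd(assembly.FullName, assembly))
            onAssembly(assembly);
    }
}
```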

Using GetReferencedAssemblies()

Next cab off the rank is GetReferencedAssemblies(). Essentially you can take an assembly, such as your entry assembly which is typically your web project, and you find all referenced assemblies. The code itself looks like this :

Assembly.GetEntryAssembly().GetReferencedAssemblies();

Again, looks to do the trick but there is another big problem with this method. In many projects you have a separation of concerns somewhere along the lines of say Web Project => Service Project => Data Project. The Web Project itself doesn’t reference the Data Project directly. Now when you call “GetReferencedAssemblies” it means direct references. Therefore if you’re looking to also get your Data Project in the assembly list, you are out of luck!

So again, may work in some cases, but not a one size fits all solution.

Looping Through GetReferencedAssemblies()

Your other option for using GetReferencedAssemblies() is actually to create a method that will loop through all assemblies. Something like this :

public static List<Assembly> GetAssemblies()
{
    var returnAssemblies = new List<Assembly>();
    var loadedAssemblies = new HashSet<string>();
    var assembliesToCheck = new Queue<Assembly>();

    assembliesToCheck.Enqueue(Assembly.GetEntryAssembly());

    while(assembliesToCheck.Any())
    {
        var assemblyToCheck = assembliesToCheck.Dequeue();

        foreach(var reference in assemblyToCheck.GetReferencedAssemblies())
        {
            if(!loadedAssemblies.Contains(reference.FullName))
            {
                var assembly = Assembly.Load(reference);
                assembliesToCheck.Enqueue(assembly);
                loadedAssemblies.Add(reference.FullName);
                returnAssemblies.Add(assembly);
            }
        }
    }

    return returnAssemblies;
}

Rough around the edges but it does work and means that on startup, you can instantly view all assemblies.

The one time you might get stuck with this is if you are loading assemblies dynamically and so they aren’t actually referenced by any project. For that, you’ll need the next method.

Directory DLL Load

A really rough way to get all solution DLLs is actually to load them out of your bin folder. Something like :

public static Assembly[] GetSolutionAssemblies()
{
    var assemblies = Directory.GetFiles(AppDomain.CurrentDomain.BaseDirectory, "*.dll")
                        .Select(x => Assembly.Load(AssemblyName.GetAssemblyName(x)));
    return assemblies.ToArray();
}

It works but hoooo boy it’s a rough one. The one big boon of this method, though, is that a dll simply has to be in the directory to be loaded. So if you are dynamically loading DLLs for any reason, this is probably the only method that will work for you (except maybe listening on the AppDomain for AssemblyLoad).

This is one of those things that looks like a hacktastic way of doing things, but you actually might be backed into the corner and this is the only way to solve it.

Getting Only “My” Assemblies

Using any of these methods, you’ll quickly find you are loading every Assembly under the sun into your project, including Nuget packages, .NET Core libraries and even runtime specific DLLs. In the .NET world, an Assembly is an Assembly. There is no concept of “Yeah but this one is my Assembly” and should be special.

The only way to filter things out is to check the name. You can either do it as a whitelist, so if all of your projects in your solution start with the word “MySolution.”, then you can do a filter like so :

Assembly.GetEntryAssembly().GetReferencedAssemblies().Where(x => x.Name.StartsWith("MySolution."))

Or instead you can go for a blacklist option which doesn’t really limit things to just your Assemblies, but at the very least cuts down on the number of Assemblies you are loading/checking/processing etc. Something like :

Assembly.GetEntryAssembly().GetReferencedAssemblies()
.Where(x => !x.Name.StartsWith("Microsoft.") && !x.Name.StartsWith("System."))

Blacklisting may look stupid, but if you are building a library and don’t actually know the end solution’s name, it’s the only way to cut down on what you are attempting to load.


Even with my love for Dapper these days, I often have to break out EF Core every now and again. And one of the things that catches me out is just how happy some developers are to make an absolute meal out of Entity Configurations with EF Core. By Entity Configurations, I mean doing a code first design and being able to mark fields as “Required”, or limit their length, or even create indexes on tables. Let’s do a quick dive and see what our options are and what gives us the cleanest result.

Attribute (Data Annotations) vs Fluent Configuration

So the first thing you notice when you pick up EF Core is that half the documentation tells you you can simply add an attribute to any entity to mark it as required :

[Required]
public string MyField { get; set; }

And then the other half of the documentation tells you you should override the OnModelCreating inside your context and use “Fluent” configuration like so :

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
	base.OnModelCreating(modelBuilder);
	modelBuilder.Entity<MyEntity>()
		.Property(x => x.MyField).IsRequired();
}

Which one is correct? Well actually both! But I always push for Fluent Configurations and here’s why.

There is an argument that when you use attributes like [Required] on a property, it also works if your model is being returned/created via an API. e.g. It also provides validation. This is really a moot point. You should always aim to have specific ViewModels returned from your API and not return out your entity models. Ever. I get that sometimes a quick and dirty internal API might just pass models back and forth, between the database and the API, but the fact that attributes work for both is simply a coincidence not an actual intended feature.

There’s also an argument that the attributes you use come from the DataAnnotations library from Microsoft. Many ORMs use this library to configure their data models, so if you took an EF Core entity and switched to another ORM, it might just work out of the box with the same configurations. This one is true, and I do see the point, but as we are about to find out, complex configuration simply cannot be done with data annotations alone, so you’re still going to have to do rework anyway.

The thing is, Fluent Configurations are *much* more powerful than Data Annotations. A complex index that spans two fields and adds another three as include columns? Not a problem in Fluent, but there’s no way to do it in DataAnnotations (in fact, indexes in general got ripped out of attributes and are only now making their way back in, with much weaker configuration than Fluent offers: https://github.com/dotnet/efcore/issues/4050). Want to configure a complex HasMany relationship? DataAnnotations relationships are all about conventions, so breaking them is extremely hard, whereas in Fluent it’s a breeze. Microsoft themselves have said that Fluent Configuration for EF Core is an “Advanced” feature, but I feel like with anything more than just dipping your toe into EF Core, you’re gonna run into a dead end with Data Annotations and have to mix in Fluent Configuration anyway. When it gets to that point, it makes even less sense to have your configuration split across Attributes and Fluent.

Finally, from a purely aesthetic standpoint, I personally prefer my POCOs (Plain Old C# Objects) to be free of implementation details. While it’s true that in this case I’m building an entity to store in a SQL database, that may not always be the case. Maybe in the future I store this entity in a flat XML file. Adding attributes to a POCO changes it from describing a data structure, to describing how that data structure should be saved. Then again, things like Active Record exist, so it’s not a hard and fast rule. Just a personal preference.

All rather weak arguments I know but honestly, before long, you will have to use Fluent Configuration for something. It’s just a given. So it’s much better to just start there in the first place.

Using IEntityTypeConfiguration

So if you’ve made it past the argument of Attributes vs Fluent and decided on Fluent. That’s great! But you’ll quickly find that all the tutorials tell you to just keep jamming everything into the “OnModelCreating” method of your Context. Kinda like this :

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
	base.OnModelCreating(modelBuilder);
	modelBuilder.Entity<MyEntity>()
		.Property(x => x.MyField).IsRequired();
}

What if you have 10 tables? 20 tables? 50? This class is going to quickly surpass hundreds if not thousands of lines and no amount of comments or #regions is going to make it more readable.

But there’s also a (seemingly) much less talked about feature called IEntityTypeConfiguration. It works like this.

Create a class called {EntityName}Configuration and inherit from IEntityTypeConfiguration<Entity>.

public class MyEntityConfiguration : IEntityTypeConfiguration<MyEntity>
{
	public void Configure(EntityTypeBuilder<MyEntity> builder)
	{
	}
}

You can then put any configuration for this particular model you would have put inside the context, inside the Configure method. The builder input parameter is scoped specifically to only this entity so it keeps things clean and tidy. For example :

public class MyEntityConfiguration : IEntityTypeConfiguration<MyEntity>
{
	public void Configure(EntityTypeBuilder<MyEntity> builder)
	{
		builder.Property(x => x.MyField).IsRequired();
	}
}

Now for each entity that you want to configure. Keep creating more configuration files, one for each type. Almost a 1 to 1 mapping if you will. I like to put them inside an EntityConfiguration folder to keep things nice and tidy.

Finally, head back to your Context and delete all the configuration work that you’ve now moved into IEntityTypeConfigurations, and instead replace it with a call to “ApplyConfigurationsFromAssembly” like so :

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
	modelBuilder.ApplyConfigurationsFromAssembly(typeof(MyContext).Assembly);
}

Now you have to pass in the assembly where EF Core can find the configurations. In my case I want to say that they are in the same assembly as MyContext (Note this is *not* DbContext, it should be the actual name of your real context). EF Core will then go and find all implementations of IEntityTypeConfiguration and use that as config for your data model. Perfect!

I personally think this is the cleanest possible way to configure Entities. If you need to edit the configuration for the Entity “Address”, then you know you just have to go to the “AddressConfiguration”. Delete an entity from the data model? Just delete the entire configuration file. Done! It’s really intuitive and easy to use.


I recently came across an interesting params gotcha (or more like a trap) while working with a method in C# that had both a params overload and an IEnumerable overload. If that sounds confusing, let me give a quick refresher.

Let’s say I have a method that looks like so :

static void Call(IEnumerable<object> input)
{
    Console.WriteLine("List Object");
}

Pretty normal. But sometimes when calling this method, I have say two items, and I don’t want to “new up” a list. So using this current method, I would have to do something like so :

var item1 = new object();
var item2 = new object();

Call(new List<object> { item1, item2 });

Kind of ugly. But there is also the params keyword that allows us to pass in items separated by commas, and by magic, it turns into an array inside the method. For example :

static void Call(params object[] input)
{
    Console.WriteLine("Object Params");
}

Now we can just do :

var item1 = new object();
var item2 = new object();

Call(item1, item2);

Everything is perfect! But then I ran into an interesting conundrum that I had never seen before. Firstly, let’s suppose I pass in a list of strings to my overloaded call. The code might look like so :

static void Main(string[] args)
{
    Call(new List<string>());
}

static void Call(params object[] input)
{
    Console.WriteLine("Object Params");
}

static void Call(IEnumerable<object> input)
{
    Console.WriteLine("List Object");
}

If I ran this code, what would be output? Because I’ve passed it a List<string>, which is a type of IEnumerable<object>, you might think it would output “List Object”. And… You would be right! It does indeed use the IEnumerable method which makes total sense because List<string> is a type of IEnumerable<object>. But interestingly enough… List<string> is also an object… So theoretically, it could indeed actually be passed to the params call also. But, all is well for now and we are working correctly.

Later on however, I decide that I want a generic method that does some extra work, before calling the Call method. The code looks like so :

static void Main(string[] args)
{
    GenericalCall(new List<string>());
}

static void GenericalCall<T>(List<T> input)
{
    Call(input);
}

static void Call(params object[] input)
{
    Console.WriteLine("Object Params");
}

static void Call(IEnumerable<object> input)
{
    Console.WriteLine("List Object");
}

Well theoretically, we are still giving it a List of T. Now T could be anything, but in our case we are passing it a list of strings same as before so you might expect it to output “List Object” again. Wrong! It actually outputs “Object Params”! Why?!

Honestly, I’m just guessing here. But I think I’ve deduced why. Because the type T could be anything, the compiler can’t be sure it can call the IEnumerable overload: converting List<T> to IEnumerable<object> relies on covariance, which only works when T is known to be a reference type (and an unconstrained T might be a value type). Because of this, it treats our List<T> as a single object, and passes that single item as a param into the params call. Crazy! I actually thought maybe at runtime it might inspect T and deduce the right call path, but it all happens at compile time.

This is confirmed if we actually add a constraint to our generic method that says the type of T must be a class (Therefore has to be derived from object). For example :

static void Main(string[] args)
{
    GenericalCall(new List<string>());
}

static void GenericalCall<T>(List<T> input) where T : class
{
    Call(input);
}

static void Call(params object[] input)
{
    Console.WriteLine("Object Params");
}

static void Call(IEnumerable<object> input)
{
    Console.WriteLine("List Object");
}

Now we get the “List Object” output, because we have told the compiler ahead of time that T will be a class (a reference type), which makes the covariant conversion to IEnumerable<object> available. Easy!

Another way to solve this is to force cast the List<T> to IEnumerable<object> like so :

static void GenericalCall<T>(List<T> input)
{
    Call((IEnumerable<object>)input);
}

Anyway, hopefully that wasn’t too much of a ramble. I think this is one of those things that you sort of store away in the back of your head for that one time it actually occurs.

Where Did I Actually See This?

Just as a little footnote to this story. I actually saw this when trying to use EntityFramework’s “HasData” method.

I had this line inside a generic class that helped load CSV’s as data into a database.

modelBuilder.Entity(typeof(T)).HasData(seedData);

I kept getting :

The seed entity for entity type ‘XXX’ cannot be added because there was no value provided for the required property ‘YYY’

And it took me a long time to realize HasData has two overloads :

HasData(IEnumerable<object> data);
HasData(params object[] data);

So for me, it was as easy as casting my seedData input to IEnumerable<object>.

modelBuilder.Entity(typeof(T)).HasData((IEnumerable<object>)seedData);