This is Part 3 of a series on using the Mediator Pattern in .NET Core. You’ve probably missed out on some important stuff if you’re starting here so make sure to head back and read Part 1 before reading on!

The Mediator Pattern In .NET Core

Part 1 – What’s A Mediator?
Part 2 – Roll Your Own
Part 3 – MediatR

MediatR Library

The MediatR library describes itself as a “Simple, unambitious mediator implementation in .NET”. In particular, I like that the word “unambitious” is used. It’s somewhat refreshing in a world of Hacker News posts that claim to be releasing a library that will change the way we write code forever. You may also recognize the author of MediatR as Jimmy Bogard, who also maintains AutoMapper!

MediatR is essentially a library that enables in-process messaging – which in turn allows you to follow the Mediator Pattern! Easy! Let’s get started.

Installing MediatR

The first thing we need to do is install the MediatR NuGet package. So from your Package Manager Console run :
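At the time of writing, that command is simply :

```
Install-Package MediatR
```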

We also need to install a package that allows us to use the inbuilt IOC container in .NET Core to our advantage (We’ll see more of that shortly). So also install the following package :
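That package (name correct as of writing, but worth double checking against NuGet) is :

```
Install-Package MediatR.Extensions.Microsoft.DependencyInjection
```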

Finally we open up our startup.cs file. In our ConfigureServices method, we need to add in a call to register all of MediatR’s dependencies.
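A minimal sketch of that registration (the exact AddMediatR overload may vary between versions of the extensions package) :

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Scans the given assembly for MediatR handlers and registers them all
    services.AddMediatR(typeof(Startup));
    services.AddMvc();
}
```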

Creating Our Handlers

The first thing to note is that MediatR can do either “send and receive” style messages, or “broadcast” style messages. Taking our example from the previous article in the series, we are doing the more broadcast style of message (and a pretty simple one at that). Let’s just stick with that for now.

In our original example, we weren’t passing any information through to our handlers, but in reality we are likely to be passing through some data – basically a message, if we think about traditional messaging systems. Let’s go ahead and create a blank object that implements INotification  (an inbuilt interface of MediatR).
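Something like this (the class and property names here are my own) :

```csharp
using MediatR;

// Our "message". INotification is just a marker interface from MediatR.
public class Notify : INotification
{
    public string NotifyText { get; set; }
}
```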

Next we need handlers for the messages. That’s easy too, we just implement INotificationHandler  and go from there!
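A sketch of two handlers for a notification like the one above (again, the names are my own) :

```csharp
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class EmailHandler : INotificationHandler<Notify>
{
    public Task Handle(Notify notification, CancellationToken cancellationToken)
    {
        Debug.WriteLine($"Debug from Email Handler. Notify Text : {notification.NotifyText}");
        return Task.CompletedTask;
    }
}

public class SMSHandler : INotificationHandler<Notify>
{
    public Task Handle(Notify notification, CancellationToken cancellationToken)
    {
        Debug.WriteLine($"Debug from SMS Handler. Notify Text : {notification.NotifyText}");
        return Task.CompletedTask;
    }
}
```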

Super simple! We also don’t need to register these handlers anywhere, the initial line we added to our ConfigureServices method means that MediatR finds all handlers within the assembly and registers them correctly.

Creating Our Mediator Service

Now here’s the thing. We *can* just inject the inbuilt IMediator interface everywhere and publish messages directly. Personally I’m not a huge fan, because it’s basically telling every caller “by the way, we use MediatR”. I prefer to abstract things away a little bit.

If we take our Mediator Service from the previous article, let’s just modify it a bit. Instead let’s build it like so :
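A sketch of that abstraction (the interface and class names are my own; Publish returns a Task that we simply hand back) :

```csharp
using System.Threading.Tasks;
using MediatR;

public interface INotifierMediatorService
{
    Task Notify(string notifyText);
}

public class NotifierMediatorService : INotifierMediatorService
{
    private readonly IMediator _mediator;

    public NotifierMediatorService(IMediator mediator)
    {
        _mediator = mediator;
    }

    public Task Notify(string notifyText)
    {
        // Broadcast to every INotificationHandler<Notify> registered
        return _mediator.Publish(new Notify { NotifyText = notifyText });
    }
}
```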

So we have a NotifierService that still uses the IMediator interface under the hood, but it means that if we ever swap out libraries, or change how we do notifications, only this class changes.

Because we have created a new service, we do need to remember to add the following line to our ConfigureServices method in our startup.cs file :
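Assuming a service named NotifierMediatorService behind an INotifierMediatorService interface, that line looks like :

```csharp
services.AddTransient<INotifierMediatorService, NotifierMediatorService>();
```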

Using Our Notifier Mediator Service

Let’s use a super simple controller to run things. Again this is basically taken from Part 2 in this series and just modified a tiny bit to work with passing through notify text.
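A sketch of such a controller (assuming an INotifierMediatorService abstraction over IMediator, as described above; all names are my own) :

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class HomeController : ControllerBase
{
    private readonly INotifierMediatorService _notifierMediatorService;

    public HomeController(INotifierMediatorService notifierMediatorService)
    {
        _notifierMediatorService = notifierMediatorService;
    }

    [HttpGet]
    public async Task<ActionResult<string>> NotifyAll()
    {
        await _notifierMediatorService.Notify("Hello World!");
        return "Completed";
    }
}
```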

Running all of this and opening up our debug panel we can see :

Woo! Literally a 5 minute job to set up a completely in memory messaging system!

Final Thoughts On The Mediator Pattern

As we made our way through these 3 posts, it’s probably morphed a bit from the “Mediator Pattern” to “In Process Messaging”. But going back to our key bullet points from Part 1 :

  • It’s an object that encapsulates how objects interact. So it can obviously handle passing on “messages” between objects.
  • It promotes loose coupling by not having objects refer to each other, but instead to the mediator. So they pass the messages to the mediator, who will pass it on to the right person.

We can see that In Process Messaging is actually just an “implementation” of the mediator pattern. As long as we are promoting loose coupling through a “mediator” class that can pass data back and forth so that the caller doesn’t need to know how things are being handled (By whom and by how many handlers), then we can say we are implementing the Mediator Pattern.

This is Part 2 of a series on using the Mediator Pattern in .NET Core. You’ve probably missed out on some important stuff if you’re starting here so make sure to head back and read Part 1 before reading on!

The Mediator Pattern In .NET Core

Part 1 – What’s A Mediator?
Part 2 – Roll Your Own
Part 3 – MediatR

The “IEnumerable” Pattern

The “IEnumerable” Pattern is something I’ve used for the past 5 or so years in almost every single project I’ve worked on. It came about as a “that’s a cool feature” type thing when I found out that several IOC Containers would automatically inject an IEnumerable<T> into a class if you bind multiple types to the same interface (That may sound confusing but it will make sense once we get into the code).

This little hunk of code that I use isn’t specifically a “Mediator” pattern. But when I think about a core defining characteristic of the Mediator pattern :

“It promotes loose coupling by keeping objects from referring to each other explicitly”

Then this gets very close to doing so.

Let’s look at an example. For this I am going to use a standard .NET Core Web API project. Nothing fancy. In it, I have an interface with two implementations of that interface.
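A sketch of what that might look like (the names are my own) :

```csharp
using System.Diagnostics;

public interface INotifier
{
    void Notify();
}

public class EmailNotifier : INotifier
{
    public void Notify()
    {
        Debug.WriteLine("Debug from Email Notifier");
    }
}

public class SMSNotifier : INotifier
{
    public void Notify()
    {
        Debug.WriteLine("Debug from SMS Notifier");
    }
}
```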

In my startup.cs, I then bind these two classes to the INotifier interface.
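The key point is that both registrations bind to the same interface :

```csharp
services.AddTransient<INotifier, EmailNotifier>();
services.AddTransient<INotifier, SMSNotifier>();
```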

Then I have a super simple controller just to demonstrate what we’ve done.
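A sketch of that controller. Note that we ask for an IEnumerable&lt;INotifier&gt;, and the container hands us every implementation bound to that interface :

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class HomeController : ControllerBase
{
    private readonly IEnumerable<INotifier> _notifiers;

    public HomeController(IEnumerable<INotifier> notifiers)
    {
        _notifiers = notifiers;
    }

    [HttpGet]
    public ActionResult<string> NotifyAll()
    {
        foreach (var notifier in _notifiers)
        {
            notifier.Notify();
        }
        return "Completed";
    }
}
```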

I hit this endpoint and in my debug window, what do I see?

Super simple right? Now the notifiers can be added, removed, or modified, or gain extra logic about when they need to run, and the list itself is simply injected into the controller each time (and could be injected anywhere else). In this way, the caller doesn’t actually need to change when additional classes are added or removed, and it doesn’t need to expand out its action; it stays exactly the same.

Making It A Bit More “Mediator”-ish

Someone looking at this might feel that it still leaks a little bit of implementation detail. But that’s easily wrapped up if we really wanted to go all out on a mediator service class. For example, let’s create a service like this :
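Something along these lines (the names are my own) :

```csharp
using System.Collections.Generic;

public interface INotifierMediatorService
{
    void NotifyAll();
}

public class NotifierMediatorService : INotifierMediatorService
{
    private readonly IEnumerable<INotifier> _notifiers;

    public NotifierMediatorService(IEnumerable<INotifier> notifiers)
    {
        _notifiers = notifiers;
    }

    public void NotifyAll()
    {
        // The mediator is now the only thing that knows about the handlers
        foreach (var notifier in _notifiers)
        {
            notifier.Notify();
        }
    }
}
```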

In our ConfigureServices method of our startup.cs, we need to register this new service with the following line :
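Assuming a NotifierMediatorService behind an INotifierMediatorService interface, that registration looks like :

```csharp
services.AddTransient<INotifierMediatorService, NotifierMediatorService>();
```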

And finally, we need to go and wire this all up for use. Let’s change our controller to look a bit more like this :
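A sketch of that controller, now depending only on an INotifierMediatorService wrapper rather than the notifiers themselves :

```csharp
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class HomeController : ControllerBase
{
    private readonly INotifierMediatorService _notifierMediatorService;

    public HomeController(INotifierMediatorService notifierMediatorService)
    {
        _notifierMediatorService = notifierMediatorService;
    }

    [HttpGet]
    public ActionResult<string> NotifyAll()
    {
        _notifierMediatorService.NotifyAll();
        return "Completed";
    }
}
```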

That’s pretty snazzy and fits more with the Mediator pattern, but still affords us the niceness of our Mediator not having to constantly change when we add or remove handlers!

Conditional Handlers

While not strictly related to the Mediator Pattern, you’ll also often find that Mediator Pattern libraries afford ways to conditionally run a handler. This could be based on particular conditions or just a type. An extremely common pattern that I end up with is some sort of “CanRun” method being put onto the interface. It could take inputs or it could just check some data in the background. It might look like so :
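For example (a sketch) :

```csharp
public interface INotifier
{
    bool CanRun();
    void Notify();
}
```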

Then when we need to run it from a list, we can just use Linq to filter it :
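Assuming an injected IEnumerable&lt;INotifier&gt; stored in a field called _notifiers, the filtering is a one liner (remember a using System.Linq; at the top of the file for the Where call) :

```csharp
foreach (var notifier in _notifiers.Where(x => x.CanRun()))
{
    notifier.Notify();
}
```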

I’ve done this in a tonne of different ways with huge success.

What’s Next?

In the next article in this series we are going to take a look at the “MediatR” library. A very popular library in .NET that utilizes the Mediator Pattern. You can check out that article here!

This is part 1 of a series on using the Mediator Pattern in .NET Core. It’s a pretty good place to get started!

The Mediator Pattern In .NET Core

Part 1 – What’s A Mediator?
Part 2 – Roll Your Own
Part 3 – MediatR

A couple of years back, I had to help out on a project that was built entirely using the “Mediator Pattern”. Or more specifically, built entirely using the MediatR library. There were all these presentations about the “theory” behind the Mediator Pattern and how it was a real new way of thinking. I couldn’t help but think… We’ve been doing this for years. Except we just call it good programming… In fact I had my own pattern which we’ll look into in Part 2 that I called the “IEnumerable Pattern” which achieved the same thing.

But it’s taken all these years to finally write it all down. So here it is. Here’s the Mediator Pattern in C# (Or more specifically, .NET Core).

The Mediator Pattern “Definition”

The Mediator Pattern actually dates all the way back to 1994 in the famous book “Design Patterns: Elements of Reusable Object-Oriented Software”. But I feel like it’s only really sprung up again lately due to a slew of libraries trying to implement the pattern.

In a nutshell, the definition (as stolen from Wikipedia) is :

The essence of the Mediator Pattern is to “define an object that encapsulates how a set of objects interact”. It promotes loose coupling by keeping objects from referring to each other explicitly, and it allows their interaction to be varied independently. Client classes can use the mediator to send messages to other clients, and can receive messages from other clients via an event on the mediator class.

So let’s break it down a little into two bullet points that we will refer back to later.

  • It’s an object that encapsulates how objects interact. So it can obviously handle passing on “messages” between objects.
  • It promotes loose coupling by not having objects refer to each other, but instead to the mediator. So they pass the messages to the mediator, who will pass it on to the right person.

That’s honestly it.

And when you think about just those two bullet points in isolation, it sounds awfully like a message hub of sorts, right? That’s because… it actually kinda is. It’s like a message hub in code. When you send a message through a typical message hub, you don’t know who is receiving that message, you just know that the hub knows, and it will sort it out for you.

In Visual Form

If we break this out into visual form using my (very limited) lucidchart skills. It looks a bit like this :

This is probably a simplified version of it because a Mediator Pattern does allow two way communication, it’s not just a one way broadcast, but I think this is the model we are going to try and use going forward in our examples.

Again, looking at it this way, it’s hard not to see the comparisons to messaging systems. But on the other hand, it’s hard not to also feel like this could very quickly turn into one of those “super” classes where sure, MyService doesn’t reference every handler… But the Mediator does. But there are ways to handle that which we will go into later.


And finally, the “Why?”. Why is this even a thing?

Well if we take the diagram above, if we had MyService calling other handlers directly (For example notifying them about an action), then as we add handlers, MyService has to start referencing them all even if it doesn’t care about the result. For example, our service might start looking like this :
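A sketch of what I mean (the handler types here are purely illustrative) :

```csharp
public class MyService
{
    private readonly EmailHandler _emailHandler;
    private readonly SMSHandler _smsHandler;

    public MyService(EmailHandler emailHandler, SMSHandler smsHandler)
    {
        _emailHandler = emailHandler;
        _smsHandler = smsHandler;
    }

    public void DoWork()
    {
        // ... the actual work of the service ...

        // And now MyService has to know about every handler personally
        _emailHandler.Notify();
        _smsHandler.Notify();
    }
}
```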

So what happens when we add more handlers? Or remove handlers? Our service keeps changing when in reality it doesn’t really care who gets notified.

Using a Mediator Pattern, it may instead end up looking like :
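Something like this sketch, where only the mediator knows who gets notified (NotifierMediator is an illustrative name) :

```csharp
public class MyService
{
    private readonly NotifierMediator _notifierMediator;

    public MyService(NotifierMediator notifierMediator)
    {
        _notifierMediator = notifierMediator;
    }

    public void DoWork()
    {
        // ... the actual work of the service ...

        // MyService no longer cares who (or how many) get notified
        _notifierMediator.NotifyAll();
    }
}
```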

So there’s the bonus that as handlers change, get added or removed, the service itself doesn’t change. But there is also a bit of a downer that we are maybe shifting the load to the Mediator; its job is now to manage the handlers and how they get notified. But this makes sense, right? A class whose sole job is to notify clients should be the one that changes depending on how those clients need to be notified. And our service, which really doesn’t care about the implementation details of those handlers, can get on with its work.

In saying that, later on we will see how we use DI to really help us ease the load from both classes and yet still stick to heart of the Mediator Pattern.

What’s Next?

In the next article in this series, we are going to look at a pattern that I dubbed the “IEnumerable” pattern. It’s essentially the Mediator Pattern with some dependency injection thrown in! You can check out that article here!

Hosted Services in the .NET Core world mean background tasks in everyday developer terms. If you’re living in the C# world, and even the Azure world, you actually already have a couple of options for doing background style tasks. Problems that you can solve using Hosted Services are probably similar to the problems you currently solve using Windows Services or Azure WebJobs. So first, let’s take a look at some comparisons.

Hosted Services vs WebJobs

WebJobs are similar to Hosted Services in that they run on the same machine as an Azure Website – thus sharing resources. WebJobs do have the added benefit of being able to be deployed separately (Hosted Services are part of the website) and have additional functionality when it comes to running as a singleton. With Hosted Services, there is an instance of that hosted service running for every deployment of your website, which can be an issue if you only want one instance of that “process” running at any time. You can program around this by creating your own locking mechanism, but obviously WebJobs get this out of the box. The one other difference is that with the WebJobs SDK, you get things like queue/blob triggers right away. Inside an ASP.NET Core Hosted Service, you would need to write all of this manually.

Hosted Services vs Windows Services

Windows Services are typically hosted on other infrastructure that isn’t also hosting your website. They can be deployed independently, and don’t really have any tie in to your website. But that also comes with a negative. If you are using PAAS on something like Azure or Google Cloud, you would then need a separate VM to host your Windows Service. Not too great!

Hello World Hosted Service

Create a new class in your .NET Core Web Project called “HelloWorldHostedService”. This is going to be your first piece of code for your Hosted Service :
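A minimal sketch of such a service :

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class HelloWorldHostedService : IHostedService
{
    private Timer _timer;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Fire every 10 seconds, starting immediately
        _timer = new Timer(_ => Debug.WriteLine("Hello World!"), null, TimeSpan.Zero, TimeSpan.FromSeconds(10));
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop the timer firing when the host shuts down
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }
}
```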

So all we are doing is starting a timer that every 10 seconds, prints out “Hello World!” to the debug output.

You’ll notice that we have a start and stop, probably pretty similar to a windows service where the StartAsync method is when our web process kicks off. It shouldn’t be where we do work, but instead where we setup side threads to do the work. And stop is where our web process stops, so we obviously have to pause our timer.

You’ll also need the following line, in your ConfigureServices method, in your startup.cs.
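On recent versions of .NET Core that line is the AddHostedService extension (on very early versions you may instead need services.AddSingleton&lt;IHostedService, HelloWorldHostedService&gt;()) :

```csharp
services.AddHostedService<HelloWorldHostedService>();
```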

And that’s it, that’s your first hosted service. If you run your web project, your service should kick off (Either set a breakpoint or check the Debug output), and away you go!

Background Service Helper

The thing is, each time you write a background service, it’s likely to be the same sort of process. It’s probably going to either be something on a timer, or in an infinite loop. So 9 times out of 10, you’re going to be using some sort of boilerplate that kicks off your task on a different thread, and handles cancellation tokens etc.

Well Microsoft foresaw this, and created an abstract helper class called BackgroundService. We can rewrite our above code to instead look like :
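A sketch of the same service on top of BackgroundService :

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class HelloWorldHostedService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Loop until the host signals shutdown
        while (!stoppingToken.IsCancellationRequested)
        {
            Debug.WriteLine("Hello World!");
            await Task.Delay(TimeSpan.FromSeconds(10), stoppingToken);
        }
    }
}
```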

Notice that we now inherit from BackgroundService and not from IHostedService. But our startup.cs call doesn’t change.

Behind the scenes, the code for BackgroundService looks like (And you can see the full source on Github here) :
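Paraphrased, it looks something like this (a simplified sketch, not a verbatim copy of the source) :

```csharp
public abstract class BackgroundService : IHostedService, IDisposable
{
    private Task _executingTask;
    private readonly CancellationTokenSource _stoppingCts = new CancellationTokenSource();

    // The actual work, implemented by our subclass
    protected abstract Task ExecuteAsync(CancellationToken stoppingToken);

    public virtual Task StartAsync(CancellationToken cancellationToken)
    {
        // Kick off the work and hold onto the task
        _executingTask = ExecuteAsync(_stoppingCts.Token);

        // If it completed synchronously, bubble up any fault, otherwise carry on
        return _executingTask.IsCompleted ? _executingTask : Task.CompletedTask;
    }

    public virtual async Task StopAsync(CancellationToken cancellationToken)
    {
        if (_executingTask == null)
        {
            return;
        }

        try
        {
            // Signal cancellation to the executing method
            _stoppingCts.Cancel();
        }
        finally
        {
            // Wait for the work to finish, or for the caller's own timeout token
            await Task.WhenAny(_executingTask, Task.Delay(Timeout.Infinite, cancellationToken));
        }
    }

    public virtual void Dispose()
    {
        _stoppingCts.Cancel();
    }
}
```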

So it’s actually pretty simple. It’s just kicking off a thread with your work, and waiting for it to finish (If it finishes). It’s handling the cancellation tokens for you, and a bit of cleanup.

Good Examples Hosted Services

I thought I would end on what makes a good hosted service. It should be something preferably specific to that instance of the website. If you scale out horizontally, each instance of the hosted service shouldn’t compete with each other and cause issues. So things like clearing a shared cache, moving files/data, ETL processes etc may not be great candidates for hosted services because each service will be competing with each other.

An example of a service I wrote recently : I built an API that read data from a secondary replica database. I didn’t mind if the replica fell behind in updates from the primary, but it couldn’t fall too far behind (e.g. more than 15 minutes). If it fell more than 15 minutes behind, I wanted to force my API to instead read from the primary. Having each website check the secondary’s status and swap to reading from the primary is very scalable, because the check itself is atomic, and no matter how many times it’s called, the checks don’t interfere with each other.

Getting Setup With C# 8

If you aren’t sure if you are using C# 8, or you know you aren’t and want to know how to access these features. Read this quick guide on getting setup with .NET Core and C# 8.


With IoT becoming bigger and bigger, it makes sense for C# to add a way to iterate over an IEnumerable in an async way, using the yield keyword to hand back data as it comes in. For example, when retrieving data signals from an IoT box, we would want to receive and process data as it arrives, but not in a way that blocks the CPU while we wait. This is where IAsyncEnumerable comes in!

But We Already Have Async Enumerable Right?

So a common trap to fall into might be that you want to use a return type of Task<IEnumerable<T>>  and make it async. Something like so :
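Something like so (a sketch, where the delay simulates waiting for IoT readings) :

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        foreach (var dataPoint in await FetchIOTData())
        {
            Console.WriteLine(dataPoint);
        }
        Console.ReadLine();
    }

    static async Task<IEnumerable<int>> FetchIOTData()
    {
        var data = new List<int>();
        for (int i = 1; i <= 10; i++)
        {
            await Task.Delay(1000); // Simulate waiting 1 second for each reading
            data.Add(i);
        }
        return data; // Nothing is returned until ALL readings are collected
    }
}
```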

So if we ran this application, what would be the result? Well nothing would appear for 10 seconds, and then we would see all of our datapoints all at once. It’s not going to be thread blocking, it’s still async, but we don’t get the data as soon as we receive it. It’s less IEnumerable and more like a List<T>.

What we really want, is to be able to use the yield  keyword, to return data as we receive it to be processed immediately.

Using Yield With IAsyncEnumerable

So knowing that we want to use yield , we can actually use the new interface in C# 8 called IAsyncEnumerable<T> . Here’s some code that does just that :
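Here’s a sketch of that same example rewritten with yield and IAsyncEnumerable&lt;T&gt; :

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        // Note : we await the foreach itself, not the method call
        await foreach (var dataPoint in FetchIOTData())
        {
            Console.WriteLine(dataPoint);
        }
        Console.ReadLine();
    }

    static async IAsyncEnumerable<int> FetchIOTData()
    {
        for (int i = 1; i <= 10; i++)
        {
            await Task.Delay(1000);  // Simulate waiting 1 second for each reading
            yield return i;          // Hand each reading back as soon as we have it
        }
    }
}
```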

So some pointers about this code :

  • Notice that we await our foreach loop itself rather than awaiting the FetchIOTData method call within the foreach loop.
  • We are returning a type of IAsyncEnumerable<T> and not IEnumerable<T>

Other than that, the code should be rather straight forward.

When we run this, instead of 10 seconds of nothing and then all data dumped on us, we get each piece of data as it comes. On top of this, the call is still not blocking.

Early Adopters Bonus

If you are using this in late 2018 or early 2019, you’re probably going to have fun trying to get all of this to compile. Common compile errors include :

As detailed in this Github issue on the coreclr repo, there is a mismatch between the compiler and the library. You can get around this by copy and pasting this code block into your project until everything is back in sync again.

There is even another issue where iteration will stop early (Wow!). It looks like it will be fixed in Preview 2. I couldn’t replicate this in my particular scenarios, but just know that it might happen.

Both of these issues maybe point to it being early days, and not much more than a proof of concept just yet.

I’m calling it now, 2019 is going to be the year of action! .NET Core Tutorials has been running strong for over 2 years now (We were there right from the project.json days). Back then, people were fine reading all about the latest features of .NET Core. But these days, people want to be shown. Live! That’s why I’m starting live coding sessions on every Wednesday and Sunday. I’ll be covering a variety of topics, but you’ll often catch me doing live blog writing sessions where you can get a sneak peek of content before it goes live and ask any questions while I’m doing live coding. We’ve already had some great questions in chat while doing our post on C# 8 IAsyncEnumerable.

So get amongst it. Head on over, drop a follow, and tune in next time!

But enough about me, I want it to be a year of action for you too. That’s why I’m giving away a copy of *any* of Manning’s classic “In Action” books. You know, those ones with the ye olde covers. Like this ASP.NET Core in Action book by Andrew Lock (Who is a great author by the way!).

I bold the word *any* above because it’s not just ASP.NET Core in Action, but actually any “In Action” book you want. Want to learn Redux in 2019? Try Redux In Action. Want to learn all about Amazon Web Services? Try AWS In Action. How about learning a bit of data science in R? Well… You guessed it, try R In Action. In fact just head over to Amazon and check out any of the In Action titles and start drooling over which one you want if you win!

Use any of the entry options below (or multiple), and you are in to win! It’s that easy!

Enter Now

Getting Setup With C# 8

If you aren’t sure if you are using C# 8, or you know you aren’t and want to know how to access these features. Read this quick guide on getting setup with .NET Core and C# 8.

Have You Seen Ranges Yet?

The Index struct is pretty similar to ranges which also made it into C#. If you haven’t already, go check out the article on Ranges in C# 8 here.

It’s not a pre-requisite to know ranges before you use the new Index type, but it’s probably a pretty good primer and they share the new “hat” ^ character.


If you wanted to get the last item of an array, you would typically see code like this :
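For example (the array contents are my own stand-in) :

```csharp
var myArray = new[] { 0, 1, 2, 3, 4, 5 };
var lastItem = myArray[myArray.Length - 1]; // 5
```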

If you wanted second to last you would end up doing something like :
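For example :

```csharp
var myArray = new[] { 0, 1, 2, 3, 4, 5 };
var secondToLastItem = myArray[myArray.Length - 2]; // 4
```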

If you have a List<T>, you would have a couple of different options, mostly involving Linq. So something like this :
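A sketch of those options :

```csharp
using System.Collections.Generic;
using System.Linq;

var myList = new List<int> { 0, 1, 2, 3, 4, 5 };
var lastItem = myList.Last();                    // 5
var secondToLastItem = myList[myList.Count - 2]; // 4
```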

But with arrays, what we notice is that we are still always counting from the front. So we are passing in the index we want counted from the start, not saying “the last index” or “second to last index” per se.

With C# 8 however, we have the ability to do this :
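For example :

```csharp
var myArray = new[] { 0, 1, 2, 3, 4, 5 };
var lastItem = myArray[^1];         // 5
var secondToLastItem = myArray[^2]; // 4
```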

So we can pass in the “hat” character, and this signifies that we are looking for the index X away from the end of the array. You’ll also notice something very important. Whereas junior developers have struggled since the dawn of time to wrap their head around arrays starting at index “0”, counting from the end with the hat character starts at 1. So saying ^1 means the last index, not the second to last index. Weird I know.

And that’s the new index character in a nutshell!

Index As A Type

When you say ^1, you are actually creating a new struct in C# 8. That struct being Index. For example :
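For example (both of these describe the same “last item” index) :

```csharp
var myArray = new[] { 0, 1, 2, 3, 4, 5 };

Index lastIndex = ^1;
Index alsoLastIndex = new Index(1, fromEnd: true);

var lastItem = myArray[lastIndex];         // 5
var sameLastItem = myArray[alsoLastIndex]; // 5
```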

All of these create a new Index struct that holds the value of the last index item.

You might want to use it in this way if you have a particular count from the end that you want to re-use on multiple arrays.

Index Is Simple

The index type is actually really simple. If you take a look at the source code, you’ll see really all it is, is an integer value held inside a struct wrapper.
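A simplified sketch of what that struct looks like (paraphrased, not a verbatim copy of the source) :

```csharp
public readonly struct Index
{
    private readonly int _value;

    public Index(int value, bool fromEnd = false)
    {
        // "From end" indexes are stored as the bitwise complement of the value
        _value = fromEnd ? ~value : value;
    }

    public int Value => _value < 0 ? ~_value : _value;
    public bool IsFromEnd => _value < 0;
}
```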

Getting Setup With C# 8

If you aren’t sure if you are using C# 8, or you know you aren’t and want to know how to access these features. Read this quick guide on getting setup with .NET Core and C# 8.

Current State Of Play

So first off… Nullable reference types? But… We’ve been taught for the past 17 years that a big part of “reference” types is that they are nullable. That’s obviously a simplification, but a junior programmer would typically tell you that “value type” => not nullable and “reference type” => nullable.

In fact we have syntax to tell us whether a value type can actually be nullable, for example :
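```csharp
int notNullable = null;  // Compile error, int is a non-nullable value type
int? nullable = null;    // Perfectly fine
```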

And remember this syntax isn’t just for primitive value types. For example this works just as well :
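For example, with our own struct :

```csharp
public struct MyStruct { }

MyStruct notNullable = null;  // Compile error
MyStruct? nullable = null;    // Perfectly fine
```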

But now we want to have a piece of syntax to say that this should be a compile time error (Or warning)? Interesting.


So the first question to ask is. Why do we need this? Consider a program like so :
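A sketch (the class and method names are my own) :

```csharp
using System;

class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = new MyClass();

        // ... imagine a lot more code here ...

        myClass.MyMethod();
    }
}

class MyClass
{
    public void MyMethod()
    {
        Console.WriteLine("Doing some work");
    }
}
```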

Fairly simple. Notably I’ve left a comment to illustrate that maybe this program is a lot bigger than shown here, but everything should run fine and dandy. Now let’s say after some time, another developer comes along and does something like so  :
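Something like this sketch :

```csharp
MyClass myClass = new MyClass();

// ... lots of code ...

myClass = null; // Added later to satisfy something else entirely

// ... lots more code ...

myClass.MyMethod(); // Boom! NullReferenceException
```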

It may seem silly at first, but I’ve seen it happen. Someone has come along and set myClass to null to satisfy something they are working on. It’s buried deep in the program, and maybe within their tests, everything looks good.

But at some point, maybe with the perfect storm of conditions, we are going to throw a dreaded NullReferenceException. Naturally, we are going to do something like this and wrap our method call in a null check :
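The classic defensive fix :

```csharp
if (myClass != null)
{
    myClass.MyMethod();
}
```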

But if I’m the original developer I may say to myself… I never intended this thing to ever be null. But it would be fruitless to say that I never want anything to be set to null. Ever. We would probably break that rule on day 1. But at the same time, are we expecting to code so defensively that we do a null check every time we use a reference type?

And this is the crux of the nullable reference type. It’s about signalling intent in code. I want to specifically say “I expect this to be nullable at some point” or vice versa so that other developers, future myself included, don’t fall into a null reference trap.

Turning On Nullable Reference Types

As noted above, the first thing is that you need to be using C# 8 to turn on the Nullable Reference Types feature. Once that’s done, you need to add a single line to your project’s csproj file :
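At the time of writing that line is the following (the exact property name has shifted between previews; early builds called it NullableContextOptions, so check which your SDK expects). It sits inside a &lt;PropertyGroup&gt; :

```xml
<Nullable>enable</Nullable>
```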

And that’s it!

Warnings To Expect

Once we’ve turned on the feature, let’s look at a simple piece of code to illustrate what’s going on.
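A sketch of code that trips both warnings :

```csharp
class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = null; // Assigning null to a non-nullable reference
        myClass.MyMethod();     // Dereferencing something that is definitely null
    }
}

class MyClass
{
    public void MyMethod() { }
}
```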

Building this we get two Warnings. I put that in bold because they are Warnings and not compile errors. Your project will still build and run, but it’s just going to warn you about shooting yourself in the foot. The first warning is for us trying to assign null to a variable that isn’t explicitly set to allow nulls.

And the second warning is when we are trying to actually use the non-nullable type and the compiler thinks it’s going to be null.

So both of these aren’t going to stop our application from running (yet, more on that later), but it is going to warn us we could be in trouble.

Let’s instead change our variable to be nullable. I was actually a little worried that the syntax would be reversed. And instead of adding something extra to say something is nullable, I thought they might try and say all reference types are nullable by default, but you can now wrap it in something like  NotNullable<MyClass> myClass... . But thankfully, we are using the same syntax we use to mark a value type as nullable :
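So our variable now looks like :

```csharp
MyClass? myClass = null;
myClass.MyMethod(); // Still warns : possible dereference of a null reference
```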

Interestingly, if we try and compile this, we still get the  Possible dereference  warning! So now it’s not just about “hey, this thing is not supposed to be nullable, but you are assigning null”; it’s proactively warning us that hey, you aren’t actually checking if this thing is null, and it very well could be. To get rid of it, we need to wrap the call in our stock standard null check.
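Like so :

```csharp
MyClass? myClass = null;
if (myClass != null)
{
    myClass.MyMethod(); // No warning, the compiler knows we checked
}
```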

And just like that, our warnings disappear.

Compiler Warning Limits

The thing is, reference types can be passed around between methods, classes, even entire assemblies. So when throwing up the warnings, it’s not foolproof. For example if we have code that looks like :
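A sketch of what I mean (all names are my own) :

```csharp
static void Main(string[] args)
{
    MyClass myClass = null; // Warning : assigning null to a non-nullable reference
    UseMyClass(myClass);
}

static void UseMyClass(MyClass myClass)
{
    // No "possible dereference" warning here, the compiler
    // doesn't track what every caller passed in
    myClass.MyMethod();
}
```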

We do get a warning that we are assigning null, but we don’t get the Possible Dereference warning. From this we can assume that once the object is passed outside the method, whatever happens out there (like setting it to null) isn’t something we will be warned about. But if we assign null that blatantly in the same block of code/method, and then try and use it, the compiler will try and give us a helping hand.

For comparison’s sake, this does generate a warning :
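That is, the assignment and the dereference sitting in the same method :

```csharp
static void Main(string[] args)
{
    MyClass myClass = null; // Warning : assigning null
    myClass.MyMethod();     // Warning : possible dereference of a null reference
}
```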

So it’s not just about whether it’s certain, but whether it’s at least possible within the current code block.

Nullable Reference Types On Hard Mode

Not happy with just a warning. Well you can always level up and try nullable reference types on hard mode! Simply add the following to your project’s csproj file :
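Inside a &lt;PropertyGroup&gt; :

```xml
<TreatWarningsAsErrors>true</TreatWarningsAsErrors>
```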

Note that this will treat *all* warnings as errors, not just ones regarding null reference issues. But it means your project will no longer compile if there are warnings being thrown up!

Final Note

There is so much in here to like, I thought I would just riff off a few bullet points about how I feel about the change.

  • This reminds me so much of the In Keyword that was introduced in C# 7.2. That had a few performance benefits, but actually the main thing was about showing the “intent” of the developer. We have a similar thing here.
  • Even if you aren’t quite sure on the feature, it’s almost worth banging it on and checking which  Possible dereference  warnings get thrown up on an existing project. It’s a one line change that can easily be toggled on and off.
  • I like the fact it’s warnings and not errors. It makes it so transitioning a project to use it isn’t some mammoth task. If you are OK with warnings showing up over the course of a month while you work through them of course!
  • There were rumblings of this feature being on by default when you start a new project in C#. That didn’t happen this release, but it could happen in the future. So it’s worth at least understanding how things work, because you very well may end up on a project in the future that uses it.

What do you think? Drop a comment below and let me know!

Getting Setup With C# 8

If you aren’t sure if you are using C# 8, or you know you aren’t and want to know how to access these features. Read this quick guide on getting setup with .NET Core and C# 8.

Introduction To Ranges

Let’s first just show some code and from there, we can iterate on it trying a few different things.

Our starting code looks like :
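Something like this. I’ll use an array rather than a List&lt;T&gt; here, for reasons we’ll get to at the end of the post, and the contents are just a stand-in :

```csharp
var letters = new[] { "A", "B", "C", "D", "E" };

for (int i = 1; i <= 3; i++)
{
    Console.WriteLine(letters[i]);
}
// Outputs : B, C, D
```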

So essentially we are looking to go from “index” 1 – 3 in our list and output the item. Unsurprisingly when we run this code we get

But let’s say we don’t want to use a for loop, and instead we want to use this new fandangle thing called “ranges”. We could re-write our code like :
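A first attempt might look like (array contents are my own stand-in) :

```csharp
var letters = new[] { "A", "B", "C", "D", "E" };

foreach (var letter in letters[1..3])
{
    Console.WriteLine(letter);
}
// Outputs : B, C
```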

And when we run it?

Uh oh… We have one less item than we thought. We’ve run into our first gotcha when using Ranges. The start of a range is inclusive, but the end of a range is exclusive.

If we change our code to :
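That is, bumping the end of the range up by one :

```csharp
var letters = new[] { "A", "B", "C", "D", "E" };

foreach (var letter in letters[1..4])
{
    Console.WriteLine(letter);
}
// Outputs : B, C, D
```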

Then we get the expected result.

Shortcut Ranges

Using Ranges of the form “from this index to this index” is pretty handy. But what about “from this index to the end”? Well, Ranges have you covered!

From An Index (Inclusive) To The End (Inclusive)
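Leaving the end of the range empty means “to the end” :

```csharp
var letters = new[] { "A", "B", "C", "D", "E" };
Console.WriteLine(string.Join(", ", letters[1..])); // B, C, D, E
```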

Output :

From The Start (Inclusive) To An Index (Exclusive)
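Leaving the start of the range empty means “from the start” (and remember, the end is exclusive) :

```csharp
var letters = new[] { "A", "B", "C", "D", "E" };
Console.WriteLine(string.Join(", ", letters[..3])); // A, B, C
```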

Output :

Entire Range (Inclusive)
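Leaving both ends empty copies the entire range :

```csharp
var letters = new[] { "A", "B", "C", "D", "E" };
Console.WriteLine(string.Join(", ", letters[..])); // A, B, C, D, E
```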

Output :

From Index (Inclusive) To X From The End (Exclusive)

This one deserves a quick description. Essentially we have a new operator ^  that is used to designate “from the end”, counting from 1. So in a list of 5 items, ^1  refers to the 5th (last) item. And because the end of a range is exclusive, using ^1  as the end of a range means the last item is left out of the result.
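For example, trimming one item from each end :

```csharp
var letters = new[] { "A", "B", "C", "D", "E" };
Console.WriteLine(string.Join(", ", letters[1..^1])); // B, C, D
```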

Output :

Range As A Type

When we write 1..4, it looks like we are dealing with a couple of integers in a new special format, but actually this is almost the initialisation syntax for a Range. Similar to how we might use {“1”, “2”, “3”} to illustrate creating an array or list.

We can actually rip out the range from here, and create a new type to be passed around.
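For example :

```csharp
var letters = new[] { "A", "B", "C", "D", "E" };

Range myRange = 1..4;
Console.WriteLine(string.Join(", ", letters[myRange])); // B, C, D
```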

This could be handy to pass around into a method or re-use multiple times. I think it’s pretty close to using something like the System.Drawing.Point  object to specify X and Y values rather than having two separate values passed around.

A New Substring

One pretty handy thing about ranges is that they can be used as a faster way to do String.Substring operations. For example :
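```csharp
var myString = "12345";
var subString = myString[1..4]; // "234"
```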

Would output “234”. Obviously this is a slight shift in thinking, because string.Substring isn’t “from this index to this index” but instead “from this index, count this many characters”.

Who Uses Arrays Anyway? What About List<T>?

So when I first had a play with this, I couldn’t work out what I was doing wrong… As it turns out, it was because I was using a List<T>  instead of an Array. I don’t want to be too presumptuous, but I would say the majority of times you see a “list” or “set” of data in a business application, it’s going to be of type List<T> . And well… Ranges aren’t available on it. I also can’t see any talk of support being added either (I originally assumed this was because of the linked list nature of List<T>, but apparently List<T> is not a linked list at all, so I’m not sure why Ranges are not available). Drop a comment below with your thoughts!
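As a hedged sketch of the workarounds: List<T> won’t take a range directly, but it has the long-standing GetRange(index, count) method, or you can copy to an array first (at the cost of an allocation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5 };

        // var slice = numbers[1..4]; // won't compile - no range support on List<T>

        // Workaround 1: the long-standing GetRange(startIndex, count).
        var viaGetRange = numbers.GetRange(1, 3);

        // Workaround 2: copy to an array, then use the range.
        var viaArray = numbers.ToArray()[1..4];

        Console.WriteLine(string.Join(",", viaGetRange)); // 2,3,4
        Console.WriteLine(string.Join(",", viaArray));    // 2,3,4
    }
}
```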

I’ve previously done posts on how to set up both .NET Core 2.1 and .NET Core 2.2. The reason for those posts was that it was still a little shaky exactly which version of the SDK you needed to actually build for these platforms. e.g. If you had .NET Core SDK 2.1, it didn’t actually mean you could build .NET Core 2.1 projects (How infuriating is that?!).

But things have gotten better, and now it’s usually just a matter of installing the latest SDK and away you go, so there isn’t really much I could normally write for each version bump. However! .NET Core 3 includes a couple of really important updates that I want to talk about in the future: Windows Forms/WPF development on top of .NET Core, and C# 8. Because of this, I wanted to have a post all written up on setting up your machine for .NET Core 3 development while things are in preview, so I don’t have to repeat myself in every future post!

Installing the .NET Core 3 Preview SDK

Installing the .NET Core 3 SDK is fairly straightforward. Just head to the download page and grab the SDK (Not the runtime) and install! You do want to ensure you are on the .NET Core 3 page and not the general download page of .NET Core, as the current “live” version is 2.2, not 3.

Once installed, you should be able to open a command prompt and run dotnet --info , with the output being something close to :

Of course as long as the version is 3+, you should be good to go.

It’s also important to note that your default SDK on your machine will now be 3 and not 2.2 (Or the previous version you had installed). If you have existing projects that are using this SDK version, you will need to create a new global.json file to be able to run different SDKs side by side. Luckily there is a great guide on how to do this right here!
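For reference, a global.json pinning a project back to a 2.x SDK looks something like this (the exact version number here is an assumption; use whatever dotnet --list-sdks reports on your machine):

```json
{
  "sdk": {
    "version": "2.2.100"
  }
}
```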

Using VS Code

If you like programming in VS Code, then you’re actually good to write .NET Core 3 code right from the get-go (Since it’s just a text editor). Obviously you will want to install the “C#” extension if you want all the good intellisense (Including for new C# 8 types), but other than that, because builds are all from the command line, there isn’t too much to it.

If you want to use C# 8 features in VS Code, see that section of this article below (It’s the same for Visual Studio or VS Code).

Using Visual Studio

A preview version of .NET Core also has a preview version of Visual Studio to accompany it. This can be a little annoying because it’s not just jumping on the “preview” update track for your existing Visual Studio installation, you actually need to download and install a completely new version of Visual Studio. This means another ~6GB of space taken up with a VS version you may use once or twice to try out new features.

I feel the need to point this out because it caught me out initially. The first version of Visual Studio to support .NET Core 3.0 is Visual Studio 2019 Preview 1. Emphasis on the 2019 part, because I was wondering why my 2017 preview Visual Studio wasn’t working at first!

You can grab the preview version of Visual Studio here :

I have actually dabbled with a solo installation of the preview version only and not bothered with the current release version. Theoretically it could have a few more bugs in it than the current supported release, but so far so good!

Enabling C# 8 Features

We can enable C# 8 on a project by project basis by editing our .csproj files. Simply open up your project file, and add the following anywhere within your <Project> node :
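That is, a PropertyGroup setting the language version to preview:

```xml
<PropertyGroup>
    <LangVersion>preview</LangVersion>
</PropertyGroup>
```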

And that’s literally it (Too easy!).

Some people get trapped thinking that the following will enable C# 8 features :
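That is, asking for the latest released language version rather than the preview one:

```xml
<PropertyGroup>
    <LangVersion>latest</LangVersion>
</PropertyGroup>
```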

But remember, version 8 of the language is in preview too. Latest refers to the latest minor version that has actually been released (So at this time, 7.3). If you try and use new features of C# 8, for example the “Range” type, you will get an error like so :

error CS8370: Feature ‘range operator’ is not available in C# 7.3. Please use language version 8.0 or greater.

But other than that, everything is pretty straightforward! Now go ahead and start using Ranges, IAsyncEnumerable, Indices and Nullable Reference Types!