Hosted Services in the .NET Core world mean background tasks in everyday developer terms. If you’re living in the C# world, and even the Azure world, you actually already have a couple of options for doing background style tasks. Problems that you can solve using Hosted Services are probably similar to the problems you currently solve using Windows Services or Azure WebJobs. So first, let’s take a look at some comparisons.

Hosted Services vs WebJobs

WebJobs are similar to Hosted Services in that they run on the same machine as an Azure Website – thus sharing resources. WebJobs do have the added benefit of being able to be deployed separately (Hosted Services are part of the website itself) and have additional functionality when it comes to running as a singleton. With Hosted Services, there is an instance of that hosted service running for every deployment of your website, which can be an issue if you only want one instance of that “process” running at any time. You can program around this by creating your own locking mechanism, but obviously WebJobs get this out of the box. The other main difference is that with the WebJobs SDK, you get things like queue/blob triggers right away. Inside an ASP.NET Core Hosted Service, you would need to write all of this manually.

Hosted Services vs Windows Services

Windows Services are typically hosted on infrastructure that isn’t also hosting your website. They can be deployed independently and don’t really have any tie-in to your website. But that also comes with a negative: if you are using PaaS on something like Azure or Google Cloud, you would then need a separate VM to host your Windows Service. Not too great!

Hello World Hosted Service

Create a new class in your .NET Core Web Project called “HelloWorldHostedService”. This is going to be your first piece of code for your Hosted Service :
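A minimal version might look like the following (the 10-second timer and Debug output are just my example; any IHostedService implementation works) :

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class HelloWorldHostedService : IHostedService, IDisposable
{
    private Timer _timer;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Print "Hello World!" every 10 seconds, starting immediately.
        _timer = new Timer(_ => Debug.WriteLine("Hello World!"), null, TimeSpan.Zero, TimeSpan.FromSeconds(10));
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop firing when the web process shuts down.
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    public void Dispose() => _timer?.Dispose();
}
```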

So all we are doing is starting a timer that every 10 seconds, prints out “Hello World!” to the debug output.

You’ll notice that we have a start and a stop, probably pretty similar to a Windows Service, where the StartAsync method is called when our web process kicks off. It shouldn’t be where we do the work, but instead where we set up background threads to do the work. And StopAsync is called when our web process stops, so that’s obviously where we pause our timer.

You’ll also need the following line, in your ConfigureServices method, in your startup.cs.
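Something along these lines (ASP.NET Core 2.1 added an AddHostedService shortcut; on earlier versions register it as a singleton) :

```csharp
services.AddSingleton<IHostedService, HelloWorldHostedService>();
// Or, on ASP.NET Core 2.1 and above :
// services.AddHostedService<HelloWorldHostedService>();
```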

And that’s it, that’s your first hosted service. If you run your web project, your service should kick off (Either set a breakpoint or check the Debug output), and away you go!

Background Service Helper

The thing is, each time you write a background service, it’s likely to follow the same sort of pattern. It’s probably going to be either something on a timer, or an infinite loop. So 9 times out of 10, you’re going to be using some sort of boilerplate that kicks off your task on a different thread, handles cancellation tokens, etc.

Well Microsoft foresaw this, and created an abstract helper class called BackgroundService. We can rewrite our above code to instead look like :
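Roughly like so (again, the 10-second loop is just my example) :

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class HelloWorldHostedService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // All the thread/cancellation boilerplate is handled for us.
        while (!stoppingToken.IsCancellationRequested)
        {
            Debug.WriteLine("Hello World!");
            await Task.Delay(TimeSpan.FromSeconds(10), stoppingToken);
        }
    }
}
```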

Notice that we now inherit from BackgroundService rather than implementing IHostedService directly. But our startup.cs call doesn’t change.

Behind the scenes, the code for BackgroundService looks like (And you can see the full source on Github here) :
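Slightly abbreviated, it’s roughly this :

```csharp
public abstract class BackgroundService : IHostedService, IDisposable
{
    private Task _executingTask;
    private readonly CancellationTokenSource _stoppingCts = new CancellationTokenSource();

    protected abstract Task ExecuteAsync(CancellationToken stoppingToken);

    public virtual Task StartAsync(CancellationToken cancellationToken)
    {
        // Store the task we're executing.
        _executingTask = ExecuteAsync(_stoppingCts.Token);

        // If the task completed synchronously, bubble any faults up right away.
        return _executingTask.IsCompleted ? _executingTask : Task.CompletedTask;
    }

    public virtual async Task StopAsync(CancellationToken cancellationToken)
    {
        if (_executingTask == null)
            return;

        try
        {
            // Signal cancellation to the executing method.
            _stoppingCts.Cancel();
        }
        finally
        {
            // Wait for the work to finish, or for the host to force a shutdown.
            await Task.WhenAny(_executingTask, Task.Delay(Timeout.Infinite, cancellationToken));
        }
    }

    public virtual void Dispose() => _stoppingCts.Cancel();
}
```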

So it’s actually pretty simple. It’s just kicking off a thread with your work, and waiting for it to finish (If it finishes). It’s handling the cancellation tokens for you, and a bit of cleanup.

Good Examples Of Hosted Services

I thought I would end on what makes a good hosted service. It should preferably be something specific to that instance of the website. If you scale out horizontally, each instance of the hosted service shouldn’t compete with the others and cause issues. So things like clearing a shared cache, moving files/data, or ETL processes may not be great candidates for hosted services, because each service will be competing with the others.

An example of a service I wrote recently: I built an API that read data from a secondary replica database. I didn’t mind if the replica fell behind the primary on updates, but it couldn’t fall too far behind (e.g. more than 15 minutes). If it fell more than 15 minutes behind, I wanted to force my API to instead read from the primary. Each website instance checking the secondary’s status and swapping to read from the primary scales out fine, because the check itself is atomic, and no matter how many instances run it, they don’t interfere with each other.

Getting Setup With C# 8

If you aren’t sure if you are using C# 8, or you know you aren’t and want to know how to access these features, read this quick guide on getting setup with .NET Core and C# 8.


With IoT becoming bigger and bigger, it makes sense for C# to add a way to iterate over an IEnumerable in an async way, using the yield keyword to get data as it comes in. For example, when retrieving data signals from an IoT box, we would want to receive and process data as it arrives, but without blocking a thread while we wait. This is where IAsyncEnumerable comes in!

But We Already Have Async Enumerable Right?

So a common trap to fall into might be that you want to use a return type of Task<IEnumerable<T>>  and make it async. Something like so :
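Something like this (the IoT “signals” are simulated with a one-second delay each) :

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        foreach (var dataPoint in await FetchIOTData())
        {
            Console.WriteLine(dataPoint);
        }
    }

    static async Task<IEnumerable<int>> FetchIOTData()
    {
        var dataPoints = new List<int>();
        for (int i = 1; i <= 10; i++)
        {
            await Task.Delay(1000); // Simulate waiting for the next signal.
            dataPoints.Add(i);
        }
        return dataPoints; // Nothing is returned until *all* signals are in.
    }
}
```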

So if we ran this application, what would be the result? Well, nothing would appear for 10 seconds, and then we would see all of our datapoints at once. It’s not going to block the thread, it’s still async, but we don’t get the data as soon as we receive it. It’s less IEnumerable and more like a List&lt;T&gt;.

What we really want, is to be able to use the yield  keyword, to return data as we receive it to be processed immediately.

Using Yield With IAsyncEnumerable

So knowing that we want to use yield , we can actually use the new interface in C# 8 called IAsyncEnumerable<T> . Here’s some code that does just that :
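Sticking with the simulated IoT signals from before :

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        await foreach (var dataPoint in FetchIOTData())
        {
            Console.WriteLine(dataPoint);
        }
    }

    static async IAsyncEnumerable<int> FetchIOTData()
    {
        for (int i = 1; i <= 10; i++)
        {
            await Task.Delay(1000);
            yield return i; // Hand back each data point as soon as we have it.
        }
    }
}
```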

So some pointers about this code :

  • Notice that we await our foreach loop itself rather than awaiting the FetchIOTData method call within the foreach loop.
  • We are returning a type of IAsyncEnumerable<T> and not IEnumerable<T>

Other than that, the code should be rather straightforward.

When we run this, instead of 10 seconds of nothing and then all the data dumped on us at once, we get each piece of data as it comes. On top of this, the call is still non-blocking.

Early Adopters Bonus

If you are using this in late 2018 or early 2019, you’re probably going to have fun trying to get all of this to compile. Common compile errors include :

As detailed in this GitHub issue on the coreclr repo, there is a mismatch between the compiler and the library. You can get around this by copying and pasting the code block from that issue into your project until everything is back in sync again.

There is even another issue where iteration will stop early (Wow!). It looks like it will be fixed in Preview 2. I couldn’t replicate this in my particular scenarios, but just know that it might happen.

Both of these issues may point to it being early days, and the feature being more proof of concept than production-ready for now.


Have You Seen Ranges Yet?

The Index struct is pretty similar to ranges which also made it into C#. If you haven’t already, go check out the article on Ranges in C# 8 here.

It’s not a pre-requisite to know ranges before you use the new Index type, but it’s probably a pretty good primer and they share the new “hat” ^ character.


If you wanted to get the last item of an array, you would typically see code like this :
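For example (the sample array is mine) :

```csharp
var myArray = new[] { "one", "two", "three", "four", "five" };

var lastItem = myArray[myArray.Length - 1]; // "five"
```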

If you wanted second to last you would end up doing something like :
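Using the same sample array :

```csharp
var myArray = new[] { "one", "two", "three", "four", "five" };

var secondToLast = myArray[myArray.Length - 2]; // "four"
```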

If you have a List<T>, you would have a couple of different options, mostly involving Linq. So something like this :
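For example :

```csharp
using System.Collections.Generic;
using System.Linq;

var myList = new List<string> { "one", "two", "three", "four", "five" };

var lastItem = myList.Last();                // "five"
var secondToLast = myList[myList.Count - 2]; // "four"
```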

But with arrays, what we notice is that we are still always counting from the front. We are passing in an index we’ve computed from the front, not saying that we want “the last item” per se.

With C# 8 however, we have the ability to do this :
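Again with the same sample array :

```csharp
var myArray = new[] { "one", "two", "three", "four", "five" };

var lastItem = myArray[^1];     // "five"
var secondToLast = myArray[^2]; // "four"
```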

So we can pass in the “hat” character, and this signifies that we are looking for the index X away from the end of the array. You’ll also notice something very important. Junior developers have struggled since the dawn of time to wrap their heads around arrays starting at index “0”, but when we use the hat character, counting from the end starts at 1. So saying ^1 means the last index, not the second to last index. Weird, I know.

And that’s the new index character in a nutshell!

Index As A Type

When you say ^1, you are actually creating a new struct in C# 8. That struct being Index. For example :
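All of these are equivalent ways of expressing “the last item” :

```csharp
Index lastItem = ^1;
Index alsoLastItem = new Index(1, fromEnd: true);
Index stillLastItem = Index.FromEnd(1);
```

Any of them can then be used as an indexer, e.g. myArray[lastItem].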

All of these create a new Index struct that holds a value pointing at the last item.

You might want to use it in this way if you have a particular count from the end that you want to re-use on multiple arrays.

Index Is Simple

The index type is actually really simple. If you take a look at the source code, you’ll see really all it is, is an integer value held inside a struct wrapper.
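Trimmed right down, it’s something along these lines (the real source packs “from end” values using the bitwise complement; this is the gist rather than the exact code) :

```csharp
public readonly struct Index
{
    // Negative values (stored as ~value) mean "counting from the end".
    private readonly int _value;

    public Index(int value, bool fromEnd = false)
    {
        _value = fromEnd ? ~value : value;
    }

    public int Value => _value < 0 ? ~_value : _value;
    public bool IsFromEnd => _value < 0;
}
```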


Current State Of Play

So first off… Nullable reference types? But… we’ve been taught for the past 17 years that a big part of “reference” types is that they are nullable. That’s obviously a simplification, but a junior programmer would typically tell you that “value type” => not nullable and “reference type” => nullable.

In fact we have syntax to tell us whether a value type can actually be nullable, for example :
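For instance :

```csharp
int? myInt = null;  // Fine, the "?" makes this a Nullable<int>.
// int myOtherInt = null; // Compile error, a plain int can never be null.
```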

And remember this syntax isn’t just for primitive value types. For example this works just as well :
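A sketch with a custom struct :

```csharp
public struct MyStruct { }

// Elsewhere :
MyStruct? myStruct = null; // Any user-defined struct can be made nullable too.
```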

But now we want to have a piece of syntax to say that this should be a compile time error (Or warning)? Interesting.


So the first question to ask is. Why do we need this? Consider a program like so :
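A sketch of such a program (the class and method names are mine) :

```csharp
using System;

class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = new MyClass();

        // ... imagine a whole lot more code here ...

        myClass.MyMethod();
    }
}

public class MyClass
{
    public void MyMethod() => Console.WriteLine("Hello World!");
}
```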

Fairly simple. Notably I’ve left a comment to illustrate that this program may be a lot bigger than shown here, but everything should run fine and dandy. Now let’s say after some time, another developer comes along and does something like so :
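```csharp
MyClass myClass = new MyClass();

// ... buried deep in the program ...
myClass = null; // "Fixes" whatever they were working on.

myClass.MyMethod(); // A NullReferenceException waiting to happen.
```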

It may seem silly at first, but I’ve seen it happen. Someone has come along and set myClass to null to satisfy something they are working on. It’s buried deep in the program, and maybe within their tests, everything looks good.

But at some point, maybe with the perfect storm of conditions, we are going to throw a dreaded NullReferenceException. Naturally, we are going to do something like this and wrap our method call in a null check :
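```csharp
if (myClass != null)
{
    myClass.MyMethod();
}
```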

But if I’m the original developer, I may say to myself… I never intended this thing to ever be null. But it would be fruitless to say that I never want anything to be set to null, ever. We would probably break that rule on day 1. But at the same time, are we expected to code so defensively that we do a null check every time we use a reference type?

And this is the crux of the nullable reference type. It’s about signalling intent in code. I want to specifically say “I expect this to be nullable at some point” or vice versa, so that other developers, future me included, don’t fall into a null reference trap.

Turning On Nullable Reference Types

As noted above, the first thing is that you need to be using C# 8 to turn on the Nullable Reference Types feature. Once that’s done, you need to add a single line to your project’s csproj file :
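In the version of C# 8 that shipped, it’s the Nullable property (early previews used a different property name, so check your SDK version if this doesn’t take) :

```xml
<PropertyGroup>
  <LangVersion>8.0</LangVersion>
  <Nullable>enable</Nullable>
</PropertyGroup>
```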

And that’s it!

Warnings To Expect

Once we’ve turned on the feature, let’s look at a simple piece of code to illustrate what’s going on.
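Something like this (the warning codes and wording shown are from the shipped compiler and may differ slightly in previews) :

```csharp
class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = null; // warning CS8600: Converting null literal or possible null value to non-nullable type.
        myClass.MyMethod();     // warning CS8602: Dereference of a possibly null reference.
    }
}
```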

Building this we get two Warnings. I put that in bold because they are Warnings and not compile errors. Your project will still build and run, but it’s just going to warn you about shooting yourself in the foot. The first warning is for us trying to assign null to a variable that isn’t explicitly set to allow nulls.

And the second warning is when we are trying to actually use the non-nullable type and the compiler thinks it’s going to be null.

So both of these aren’t going to stop our application from running (yet, more on that later), but it is going to warn us we could be in trouble.

Let’s instead change our variable to be nullable. I was actually a little worried that the syntax would be reversed. And instead of adding something extra to say something is nullable, I thought they might try and say all reference types are nullable by default, but you can now wrap it in something like  NotNullable<MyClass> myClass... . But thankfully, we are using the same syntax we use to mark a value type as nullable :
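```csharp
MyClass? myClass = null; // No assignment warning now, nulls are explicitly expected here.
myClass.MyMethod();      // But we still get the possible dereference warning.
```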

Interestingly, if we try and compile this, we still get the  Possible dereference  warning! So now it’s not just about “hey, this thing is not supposed to be nullable but you are assigning null”; it’s proactively warning us that we aren’t actually checking if this thing is null, and it very well could be. To get rid of the warning, we need to add our stock standard null check.
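```csharp
MyClass? myClass = null;

if (myClass != null)
{
    myClass.MyMethod(); // No warning, the compiler knows it can't be null in here.
}
```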

And just like that, our warnings disappear.

Compiler Warning Limits

The thing is, reference types can be passed around between methods, classes, even entire assemblies. So when throwing up the warnings, it’s not foolproof. For example if we have code that looks like :
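Something like this (DoTheThing is a hypothetical helper, just to get the object outside the original method) :

```csharp
class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = null; // Warning : assigning null to a non-nullable reference.
        DoTheThing(myClass);
    }

    static void DoTheThing(MyClass myClass)
    {
        myClass.MyMethod(); // No possible dereference warning in here.
    }
}
```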

We do get a warning that we are assigning null, but we don’t get the Possible Dereference warning. From this we can assume that once the object is passed outside the method, whatever happens out there (like being set to null) isn’t going to be warned about. But if we assign null blatantly in the same block of code/method and then try and use it, the compiler will give us a helping hand.

For comparisons sake, this does generate a warning :
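```csharp
static void Main(string[] args)
{
    MyClass myClass = null;
    myClass.MyMethod(); // Possible dereference warning, the null was assigned in the same block.
}
```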

So it’s not just about whether a null dereference is certain, but whether it’s at least possible within the current code block.

Nullable Reference Types On Hard Mode

Not happy with just a warning? Well, you can always level up and try nullable reference types on hard mode! Simply add the following to your project’s csproj file :
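```xml
<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
```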

Note that this will treat *all* warnings as errors, not just ones regarding null reference issues. But it means your project will no longer compile if there are warnings being thrown up!

Final Note

There is so much in here to like, I thought I would just riff off a few bullet points about how I feel about the change.

  • This reminds me so much of the In Keyword that was introduced in C# 7.2. That had a few performance benefits, but actually the main thing was about showing the “intent” of the developer. We have a similar thing here.
  • Even if you aren’t quite sure on the feature, it’s almost worth banging it on and checking which  Possible dereference  warnings get thrown up on an existing project. It’s a one line change that can easily be toggled on and off.
  • I like the fact it’s warnings and not errors. It makes it so transitioning a project to use it isn’t some mammoth task. If you are OK with warnings showing up over the course of a month while you work through them, of course!
  • There were rumblings of this feature being on by default when you start a new project in C#. That didn’t happen this release, but it could happen in the future. So it’s worth at least understanding how things work, because you very well may end up on a project in the future that uses it.

What do you think? Drop a comment below and let me know!


Introduction To Ranges

Let’s first just show some code and from there, we can iterate on it trying a few different things.

Our starting code looks like :
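Something like this (the sample data is mine) :

```csharp
var myArray = new[] { "Item1", "Item2", "Item3", "Item4", "Item5" };

for (int i = 1; i <= 3; i++)
{
    Console.WriteLine(myArray[i]);
}
// Prints : Item2, Item3, Item4
```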

So essentially we are looking to go from index 1 to 3 in our collection and output each item. Unsurprisingly, when we run this code we get exactly the three items we asked for.

But let’s say we don’t want to use a for loop, and instead we want to use this new fandangle thing called “ranges”. We could re-write our code like :
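With the same sample array :

```csharp
var myArray = new[] { "Item1", "Item2", "Item3", "Item4", "Item5" };

foreach (var item in myArray[1..3])
{
    Console.WriteLine(item);
}
// Prints : Item2, Item3
```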

And when we run it?

Uh oh… We have one less item than we thought. We’ve run into our first gotcha when using Ranges. The start of a range is inclusive, but the end of a range is exclusive.

If we change our code to :
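```csharp
var myArray = new[] { "Item1", "Item2", "Item3", "Item4", "Item5" };

foreach (var item in myArray[1..4]) // The end index is exclusive, so go one further.
{
    Console.WriteLine(item);
}
// Prints : Item2, Item3, Item4
```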

Then we get the expected result.

Shortcut Ranges

Using Ranges where it’s “from this index to this index” is pretty handy. But what about “from this index to the end”? Well, Ranges have you covered!

From An Index (Inclusive) To The End (Inclusive)
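With the same five-item sample array :

```csharp
var myArray = new[] { "Item1", "Item2", "Item3", "Item4", "Item5" };

foreach (var item in myArray[1..])
{
    Console.WriteLine(item);
}
// Prints : Item2, Item3, Item4, Item5
```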

Output :

From The Start (Inclusive) To An Index (Exclusive)
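```csharp
var myArray = new[] { "Item1", "Item2", "Item3", "Item4", "Item5" };

foreach (var item in myArray[..3])
{
    Console.WriteLine(item);
}
// Prints : Item1, Item2, Item3
```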

Output :

Entire Range (Inclusive)
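```csharp
var myArray = new[] { "Item1", "Item2", "Item3", "Item4", "Item5" };

foreach (var item in myArray[..]) // A copy of the entire array.
{
    Console.WriteLine(item);
}
// Prints : Item1, Item2, Item3, Item4, Item5
```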

Output :

From Index (Inclusive) To X From The End (Inclusive)

This one deserves a quick description. Essentially we have a new operator ^  that is now used to designate “From the end”. I labelled this one as “Inclusive” but it really depends on how you think about “from the end”. In my mind in a list of 5 items ^1  would refer to the 4th item. So if that’s included in the result, it’s inclusive.
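```csharp
var myArray = new[] { "Item1", "Item2", "Item3", "Item4", "Item5" };

foreach (var item in myArray[1..^1])
{
    Console.WriteLine(item);
}
// Prints : Item2, Item3, Item4
```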

Output :

Range As A Type

When we write 1..4, it looks like we are dealing with a couple of integers in a new special format, but actually this is almost an initialisation syntax for a Range. Similar to how we might use { “1”, “2”, “3” } to illustrate creating an array or list.

We can actually rip out the range from here, and create a new type to be passed around.
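```csharp
var myArray = new[] { "Item1", "Item2", "Item3", "Item4", "Item5" };
var myOtherArray = new[] { "A", "B", "C", "D", "E" };

Range middleItems = 1..4;

var fromFirstArray = myArray[middleItems];       // Item2, Item3, Item4
var fromSecondArray = myOtherArray[middleItems]; // B, C, D
```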

This could be handy to pass around into a method or re-use multiple times. I think it’s pretty close to using something like the System.Drawing.Point  object to specify X and Y values rather than having two separate values passed around.

A New Substring

One pretty handy thing about ranges is that they can be used as a faster way to do String.Substring operations. For example :
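```csharp
var myString = "12345";

Console.WriteLine(myString.Substring(1, 3)); // The old way : "234"
Console.WriteLine(myString[1..4]);           // The range way : "234"
```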

Would output 234. Obviously this is a slight shift in thinking, because string.Substring isn’t “from this index to this index” but instead “from this index, count this many characters”.

Who Uses Arrays Anyway? What About List<T>?

So when I first had a play with this, I couldn’t work out what I was doing wrong… As it turns out, it was because I was using a List&lt;T&gt; instead of an array. I don’t want to be too presumptuous, but I would say the majority of times you see a “list” or “set” of data in a business application, it’s going to be of type List&lt;T&gt;. And well… Ranges are not available on it. I also can’t see any talk of them being made available either. (I assumed this was because of the linked-list nature of List&lt;T&gt;, but apparently List&lt;T&gt; is not a linked list at all! So I’m not sure why Ranges are not available.) Drop a comment below on your thoughts!

This is part 5 of a series on getting up and running with Azure WebJobs in .NET Core. If you are just joining us, it’s highly recommended you start back on Part 1 as there’s probably some pretty important stuff you’ve missed on the way.

Azure WebJobs In .NET Core

Part 1 – Initial Setup/Zip Deploy
Part 2 – App Configuration and Dependency Injection
Part 3 – Deploying Within A Web Project and Publish Profiles
Part 4 – Scheduled WebJobs
Part 5 – Azure WebJobs SDK

Where Are We Up To?

Thus far, we’ve mostly been playing around with what amounts to a console app and running it in Azure. If anything it’s just been a tidy environment for running little utilities that we don’t want to spin up a whole VM for. But if you’ve used WebJobs using the full .NET Framework before, you know there are actually libraries to get even more out of Web Jobs. Previously this wasn’t available for .NET Core applications, until now!

Installing The WebJob Packages

Go ahead and create a stock standard .NET Core console application for us to test things out on.

The first thing we need to do with our new app is install a nuget package from Microsoft that opens up quite a few possibilities.
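From the package manager console :

```
Install-Package Microsoft.Azure.WebJobs
```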

Note that this needs to be at least version 3.0.0. At the time of writing this is version 3.0.2, so you shouldn’t have any issues, but keep it in mind if you are trying to migrate an old project.

Now, something that changed from version 2.X to 3.X of the library is that you now need to reference a second library called “Microsoft.Azure.WebJobs.Extensions”. This is actually pretty infuriating, because nowhere in any documentation does it talk about this. There is this “helpful” message on the official docs :

The instructions tell how to create a WebJobs SDK version 2.x project. The latest version of the WebJobs SDK is 3.x, but it is currently in preview and this article doesn’t have instructions for that version yet.

So it was only through trial and error that I actually got things up and running. You will want to run :
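```
Install-Package Microsoft.Azure.WebJobs.Extensions
```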

I honestly have no idea what exactly you could do without this, as it seemed like literally every single trigger/WebJob function I tried would not work without this library. But alas.

Building Our Minimal Timer WebJob

I just want to stress how different version 3.X of WebJobs is to 2.X. It feels almost like a complete rework, and if you are trying to come from an older version, you may think some of this looks totally funky (And that’s because it is a bit). But just try it out and have a play and see what you think.

First thing we want to do is create a class that holds some of our WebJob logic. My code looks like the following :
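Something along these lines (the cron expression fires once a minute; the names are mine, but they match the errors shown further down) :

```csharp
using System;
using Microsoft.Azure.WebJobs;

public class SayHelloWebJob
{
    [Singleton]
    public static void TimerTick([TimerTrigger("0 * * * * *")] TimerInfo timer)
    {
        Console.WriteLine($"The time is {DateTime.UtcNow}");
    }
}
```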

So a couple of notes :

  • The Singleton attribute just says that there should only ever be one of this particular WebJob method running at any one time. e.g. If I scale out, it should still only have one instance running.
  • The “TimerTrigger” defines how often this should be run. In my case, once a minute.
  • Then I’m just writing out the current time to see that we are running.

In our Main method of our console application, we want to add a new Host builder object and start our WebJob. The code for that looks like so :
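Roughly like so :

```csharp
using Microsoft.Extensions.Hosting;

class Program
{
    static void Main(string[] args)
    {
        var builder = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                b.AddAzureStorageCoreServices(); // Required, see below.
                b.AddTimers();                   // Enables the TimerTrigger attribute.
            });

        builder.Build().Run();
    }
}
```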

If you’ve used WebJobs before, you’re probably used to using the JobHostConfiguration  class to do all your configuration. That’s now gone and replaced with the HostBuilder . Chaining is all the rage these days in the .NET Core world so it looks like WebJobs have been given the same treatment.

Now you’ll notice that I’ve added a call to the chain called AddAzureStorageCoreServices(), which probably looks a little weird given we aren’t using anything with Azure at all, right? We are just trying to run a hello world on a timer. Well, Microsoft says bollox to that, and you must use Azure Blob Storage to store logs and a few other things. You see, when you use Singleton (or in general when what you are doing needs to run on a single instance), it uses Azure Blob Storage to create a lock so no other instance can run. It’s smart, but also sort of annoying if you really don’t care and want to throw something up fast. It’s why I always walk people through creating a WebJob *without* this library first, because it starts adding a whole heap of things on top that just confuse what you are trying to do.

Anyway, you will need to create an appsettings.json file in the root of your project (ensure it’s set to Copy Always to your output directory). It should contain (at least) the following :
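(with your own storage connection strings in place of the placeholders) :

```json
{
  "ConnectionStrings": {
    "AzureWebJobsDashboard": "<your azure storage connection string>",
    "AzureWebJobsStorage": "<your azure storage connection string>"
  }
}
```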

For the sake of people banging their heads against the wall with this and searching Google, here’s something to help them.

If you are getting the following error :

Unhandled Exception: System.InvalidOperationException: Unable to resolve service for type ‘Microsoft.Azure.WebJobs.DistributedLockManagerContainerProvider’ while attempting to activate ‘Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor’.

This means that you haven’t added the call to AddAzureStorageCoreServices()

If you are instead getting the following error :

Unhandled Exception: Microsoft.Azure.WebJobs.Host.Listeners.FunctionListenerException: The listener for function ‘SayHelloWebJob.TimerTick’ was unable to start. —> System.ArgumentNullException: Value cannot be null.
Parameter name: connectionString
at Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(String connectionString)

This means that you did add the call to add the services, but it can’t find the appsetting for the connection strings. First check that you have them formatted correctly in your appsettings file, then ensure that your appsettings is actually being output to your publish directory correctly.

Moving on!

In the root of our project, we want to create a run.cmd file to kick off our WebJob. We went over this in Part 1 of this series, so if you need a reminder feel free to go back.

I’m just going to go ahead and Publish/Zip my project up to Azure (again, this was covered in Part 1 in case you need help!). And what do you know!

Taking This Further

At this point, so much of this new way of doing WebJobs is simply trial and error. I had to guess and bumble my way through so much just to get to this point. As I talked about earlier, the documentation is way behind this version of the library which is actually pretty damn annoying. Especially when the current release on Nuget is version 3.X, and yet there is no documentation to back it up.

One super handy resource was the Github Repo for the Azure WebJobs SDK. Most notably, there is a sample application that you can pick pieces out of to get you moving along.

What’s Next?

To be honest, this was always going to be my final post in this series. Once you reach this point, you should already be pretty knowledgeable about WebJobs, and through the SDK, you can just super power anything you are working on.

But watch this space! I may just throw out a helpful tips post in the future of all the other annoying things I found about the SDK and things I wished I had known sooner!

I came across an interesting problem while developing a monitoring tool recently. It was a simple HTTP check to see if a webpage had a certain piece of text on it. When it didn’t, I was saving the HTML, and I had to sift through it at a later date to try and work out what went wrong. At some point, reading HTML (or loading the HTML without any CSS/JS) became really tiresome, so what I really wanted was the ability to take a screenshot of a webpage and save it as an image.

Boy. What a ride that took me on. After hours of searching, it felt like I had two options. Either I could use a dedicated paid library to save HTML to an image, or I could use one of the many HTML-to-image command line utilities and invoke it from C# code. Neither really appealed a hell of a lot.

While explaining the problem to a friend, the conversation went a bit like this :

Friend : Do you need to manipulate the page in any way? Like click a link or something?
Me : No I just need to do a get request for a page and save the result, that’s it. 
Friend : Oh OK. I was going to say, you could use the Selenium part for clicking around the page if you needed it. Anyway, good luck!

It took a few seconds to register but… Of course… Selenium can take screenshots of webpages! After all it’s just browser automation anyway! Admittedly it’s kinda heavy for just taking screenshots, but for my nice little utility I really didn’t care. And actually the ability to extend the code to be able to then *do* things on the page at a later date probably only makes it more of an attractive option.

Installing Selenium Into Your Project

We’re going to need two packages. The first is mandatory and that is the Selenium.Support package. Run the following from your package manager command line :
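```
Install-Package Selenium.Support
```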

Next we want to install the driver for the browser we want to use. So this is going to totally depend on which browser you want to do the browsing. For me I want to use Chrome, but you can also choose to use Firefox or PhantomJS etc so modify the following command for what you need.
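For Chrome, a driver package like this one does the job (the exact package name varies by browser) :

```
Install-Package Selenium.WebDriver.ChromeDriver
```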

Our Selenium Screenshot Code

First I’ll give you the code, then we can talk it through.
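A sketch of the approach (the URL and file name are placeholders) :

```csharp
using System.IO;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class Program
{
    static void Main(string[] args)
    {
        var options = new ChromeOptions();
        options.AddArgument("--headless"); // No visible browser window.

        // Point the driver at the folder holding chromedriver.exe (our bin folder).
        using (var driver = new ChromeDriver(Directory.GetCurrentDirectory(), options))
        {
            driver.Navigate().GoToUrl("https://example.com/");

            var screenshot = driver.GetScreenshot();
            screenshot.SaveAsFile("screenshot.png", ScreenshotImageFormat.Png);
            // screenshot.AsByteArray is also available if you'd rather push it to cloud storage.
        }
    }
}
```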

Taking it step by step, it’s actually really easy (Almost too easy).

  • Our options with Headless tell Chrome that we don’t want to actually see the Chrome window. Mostly because it’s annoying having it pop up all the time, but headless also sometimes causes problems with screenshots (more on this later).
  • We create our ChromeDriver with a very specific path; this is because without it, we get a weird error which again, I’ll talk about shortly.
  • Navigate to the page we want
  • Take a screenshot
  • Save it down as a file (We can also get a byte array at this point if we want to instead push it to cloud storage etc).

And that’s literally it. You can now take screenshots of webpages! But now onto the edge cases.

Unable To Find ChromeDriver Exception

So initially I had the following exception :

The chromedriver.exe file does not exist in the current directory or in a directory on the PATH environment variable.

It turned out to be a two pronged fix.

The first is that we need to make sure we have installed the actual ChromeDriver nuget package *and* rebuilt our solution completely. The nuget package command is :
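```
Install-Package Selenium.WebDriver.ChromeDriver
```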

This is actually a little confusing because the ChromeDriver class is available in code, intellisense and all, but it won’t run unless you install that package.

Next, for some reason it still couldn’t find chromedriver.exe in my application’s bin folder. What seemed to fix it was the fact you can give it a bit of a nudge in the right direction, so telling it to look in the current application folder (e.g. the bin folder) got us there.

Screenshot Is Off Screen

While testing this out, I noticed that when I swapped to using the headless Chrome browser, sometimes what I wanted to view was just off screen. While there are actually some nifty libraries to take a screenshot of just a single element in Selenium, I had a cruder (but simpler) solution!

If we can just make the window a reasonable width, but really long, then we can take a screenshot of the “browser window”, and it should capture everything. Obviously this isn’t an exact science and if the length of your page is forever changing, this may not work. But if it’s static content, just a little longer, this works perfectly.

The code looks a bit like :
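```csharp
var options = new ChromeOptions();
options.AddArgument("--headless");
// A reasonable width, but a really long height, so nothing ends up off screen.
options.AddArgument("--window-size=1280,2000");
```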

So now as long as our content is no longer than 2000px long (In my case, it will never be), then we are set. Again, there’s probably more elegant solutions but this works so well for what I need it to do.

Now the title of this post is probably a bit of a mouthful and maybe isn’t what you think. This isn’t about overriding the default settings of JSON.NET, it’s about overriding *back* to the defaults. That probably doesn’t make a heck of a lot of sense, but hopefully by the end of this post it will!

Returning Enums as Strings In ASP.NET Core

If you are returning an enum from an ASP.NET Core Web project, by default it’s going to be returned as its integer representation.

So for example if we have a model that contains an enum that looks like this :
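Something like this (the sample model and names are just for illustration) :

```csharp
public class Person
{
    public string FirstName { get; set; }
    public Gender Gender { get; set; }
}

public enum Gender
{
    Male = 1,
    Female = 2
}
```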

This model will (by default) be serialized looking like so :
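Assuming a model along those lines :

```json
{
  "firstName": "John",
  "gender": 1
}
```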

Which makes sense. It’s better that we return the integer values in most cases because these become the immutable “key” values, whereas the name of the actual enum can change.

However in some cases, we may want to return a string value. This could be because you have a pesky JavaScript/iOS developer who wants string values and won’t budge, or maybe you just prefer it that way! In either case, there is a way to override the model serialization on a per class basis.

All we have to do is decorate our property with a special attribute.
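With a Gender enum property like the one above, it looks like this :

```csharp
[JsonConverter(typeof(StringEnumConverter))]
public Gender Gender { get; set; }
```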

You will require two using statements to go along with this one :
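```csharp
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
```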

What this does is tell JSON.NET (which, as of writing, is the default JSON serializer of .NET Core) to serialize this particular property using the StringEnumConverter. Which, among other things, can just use the string representation of an enum. If we serialize this model now, we get :
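```json
{
  "firstName": "John",
  "gender": "Male"
}
```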

But the problem we now have is that we have to go and edit every single property that uses an enum and tell it to use the string representation. If we know we want it in every case across the board, then we can actually tell JSON.NET to use the StringEnumConverter every time.

Head over to our startup.cs and find the ConfigureServices method. In there we should already see a call to AddMvc(), and we are just going to tack onto the end of this.
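A sketch of what that looks like on the ASP.NET Core 2.x `AddMvc()` builder:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Newtonsoft.Json.Converters;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc()
            .AddJsonOptions(options =>
            {
                // Applies to every enum that doesn't specify its own converter.
                options.SerializerSettings.Converters.Add(new StringEnumConverter());
            });
    }
}
```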

Here we are basically saying that here’s a converter you can use anytime it’s applicable (Which is basically for every Enum), so go ahead and use it.

The Problem (And The Fix)

If we have been humming along serializing enums as strings, we might end up in a position where we actually do want to return an enum as an integer. A prime example of this (at least it was for me) is returning error codes. I want the codes to be represented in C# as enums, as they’re much easier to wrangle, but serialized out as simple numeric codes.

So what we now want to do is override our “new default” of StringEnumConverter, and go back to the old way of doing things. So how do we do that? Well actually we can’t. As crazy as that probably sounds, I couldn’t find any way to say “Please use the default converter instead of that other one I gave you”.

Thinking that I was going to have to write my own custom converter just to cast back to an int, I came across an interesting little piece of documentation. It came in the form of the “CanWrite” property of a custom converter. The documentation of which is here :

Note that if this is set to false, it’s saying that the JsonConverter cannot write the JSON. So what is it going to fall back to? As it turns out, the default converter for that type.

So all we need to do is whip up a custom JSON converter that does absolutely nothing :
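Something along these lines (the `DefaultConverter` class name is my own; call it whatever you like):

```csharp
using System;
using Newtonsoft.Json;

// A converter that opts out of both reading and writing, so JSON.NET
// falls back to its default handling for the decorated type.
public class DefaultConverter : JsonConverter
{
    public override bool CanRead => false;
    public override bool CanWrite => false;

    public override bool CanConvert(Type objectType) => true;

    public override object ReadJson(JsonReader reader, Type objectType,
        object existingValue, JsonSerializer serializer)
        => throw new NotImplementedException();

    public override void WriteJson(JsonWriter writer, object value,
        JsonSerializer serializer)
        => throw new NotImplementedException();
}
```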

The NotImplemented methods are required by the base class when I inherit from it. But because I set CanRead and CanWrite to false, they are never actually called.

Now if we decorate our enum property with this converter (essentially forcing it to use no other custom converters, even if they are in the setup list in our startup.cs) :
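A sketch, where `DefaultConverter` stands in for the do-nothing converter above and the `ErrorCode` names are hypothetical:

```csharp
using Newtonsoft.Json;

public enum ErrorCode
{
    NotFound = 404,
    ServerError = 500
}

public class ErrorModel
{
    // Forces this property back to the default (integer) serialization,
    // bypassing any StringEnumConverter registered in startup.cs.
    [JsonConverter(typeof(DefaultConverter))]
    public ErrorCode ErrorCode { get; set; }
}
```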

Serializing this model again we get :
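For illustration, with a hypothetical `ErrorCode` property decorated this way, the output is back to numeric:

```json
{
  "errorCode": 404
}
```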


This is part 4 of a series on getting up and running with Azure WebJobs in .NET Core. If you are just joining us, it’s highly recommended you start back on Part 1 as there’s probably some pretty important stuff you’ve missed on the way.

Azure WebJobs In .NET Core

Part 1 – Initial Setup/Zip Deploy
Part 2 – App Configuration and Dependency Injection
Part 3 – Deploying Within A Web Project and Publish Profiles
Part 4 – Scheduled WebJobs
Part 5 – Azure WebJobs SDK

WebJob Scheduling For .NET Core?

So in this part of our tutorial on WebJobs, we are going to be looking at how we can set WebJobs on schedules for .NET Core. Now I just want to emphasize that this part isn’t really too .NET Core specific; in fact, you can use these exact steps to run any executable as a WebJob on a schedule. I just felt like when I was getting up and running, it was helpful to understand how I could get small little “batch” jobs to run on a schedule in the simplest way possible.

If you feel like you already know all there is about scheduling jobs, then you can skip this part altogether!

Setting WebJob Schedule via Azure Portal

So even though in our last post, we were deploying our WebJob as part of our Web Application, let’s take a step back and pretend that we are still uploading a nice little raw executable via the Azure Portal (For steps on how to make that happen, refer back to Part 1 of this series).

When we go to upload our zip file, we are actually presented with the option to make things scheduled right from the get go.

All we need to do is make our “Type” of WebJob be triggered. As a side note, many people confuse this with “triggering” a WebJob through something like a queue message. It’s not quite the same. We’ll see this in a later post, but for now think of a “triggered” WebJob as referring to either a “Manual” trigger, e.g. you click run inside the portal, or “Scheduled”, which runs every X minutes/hours/days etc.

Now our “CRON Expression” is like any other time you’ve used CRONs. Never used them before? Well think of it like a string of numbers that tells a computer how often something should run. You’ll typically see this in Linux systems (Windows Task Scheduler for example is more GUI based to set schedules). Here’s a great guide to understanding CRON expressions :

A big, big word of warning. While many systems only allow CRON expressions down to the minute, Azure allows CRON syntax down to the second. So there will be 6 parts to the CRON expression instead of 5, just in case you can’t work out why it’s not accepting your expression. This is also pretty important so you don’t overwhelm your site, thinking your batch job is going to run once a minute when really it goes crazy running once a second.
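As an illustration, Azure’s six-part expressions read second-first:

```
{second} {minute} {hour} {day} {month} {day-of-week}

0 */5 * * * *    -> every 5 minutes
0 0 * * * *      -> every hour, on the hour
0 30 9 * * 1-5   -> 9:30 AM, Monday through Friday
```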

Once created, our application will run on our schedule like clockwork!

Editing An Existing WebJob Schedule via Azure Portal

So about editing the schedule of a WebJob in the portal… Well.. You can’t. Annoyingly there is no way via the portal GUI to actually edit the schedule of an existing WebJob. Probably even more frustratingly there is not even a way to stop a scheduled WebJob from executing. So if you imagine that you accidentally set something to run once a second and not once a minute, or maybe your WebJob is going off the rails and you want to stop it immediately to investigate, you can’t without deleting the entire WebJob.

Or so Azure wants you to think!

Luckily we have Kudu to the rescue!

You should be able to navigate to  D:\home\site\wwwroot\App_Data\jobs\triggered\{YourWebJobName}\publish  via Kudu and edit a couple of files. Note that this is *not* the same as  D:\home\data\jobs\triggered . The data folder is instead for logs and other junk.

Anyway, once inside the publish folder of your WebJob, we are looking for a file called “settings.job”. The contents of which will look a bit like this :
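Assuming the job was uploaded with a once-a-minute schedule, something like:

```json
{
    "schedule": "0 * * * * *"
}
```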

This should obviously look familiar, it’s our CRON syntax from before! This is actually how Azure stores our CRON setting when we initially upload our zip. And what do you know, editing this file will update our job to run on the updated schedule! Perfect.

But what about our run away WebJob that we actually wanted to stop? Well unfortunately it’s a bit of a hack but it works. We need to set the contents of our settings.job file to look like :
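The “never run” schedule looks like this:

```json
{
    "schedule": "0 0 5 31 2 *"
}
```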

What is this doing? It’s saying please only run our job at 5AM on the 31st of February. The top of the class will note there is no such thing as the 31st of February, so the WebJob will actually never run. As dirty as it feels, it’s the only way I’ve found to stop a scheduled WebJob from running (except of course to just delete the entire WebJob itself).

Uploading A WebJob With A Schedule As Part Of A Website Deploy

Sorry for the butchering of the title on this one, but you get the gist. If we are uploading our WebJob as part of our Website deploy, how do we upload it with our schedule already defined? We obviously don’t want to have to go through the portal or Kudu to edit the schedule every time.

A quick note first. You should already have done Part 3 of this series on WebJobs in .NET Core that explains how we can upload a WebJob as part of an Azure Website deploy. If you haven’t already, please read that post!

Back to deploying our scheduled job. All we do is add a settings.job file to the root of our WebJob project. Remember to set the file to “Copy If Newer” to ensure the file is copied when we publish.

The contents of this file will follow the same format as before. e.g. If we want to run our job once a minute :
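```json
{
    "schedule": "0 * * * * *"
}
```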

Now importantly, remember from Part 3 when we wrote a PostPublish script to publish our WebJob to the App_Data folder of our Website? We had to edit the csproj of our Website. It looked a bit like this :
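If you’ve lost it, it was along these lines (the project names and paths are from my example solution; yours will differ):

```xml
<Target Name="PostPublishWebJob" AfterTargets="Publish">
  <Exec Command="dotnet publish ..\WebJobExample\WebJobExample.csproj -o $(ProjectDir)$(PublishDir)App_Data\Jobs\continuous\WebJobExample" />
</Target>
```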

Now we actually need to change the folder for our scheduled WebJob to instead be pushed into our “triggered” folder. So the PostPublish script would look like :
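The same target, with only the output folder changed from “continuous” to “triggered” (again, project names assumed):

```xml
<Target Name="PostPublishWebJob" AfterTargets="Publish">
  <Exec Command="dotnet publish ..\WebJobExample\WebJobExample.csproj -o $(ProjectDir)$(PublishDir)App_Data\Jobs\triggered\WebJobExample" />
</Target>
```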

Again I want to note that “triggered” in this context is only referring to jobs that are triggered via a schedule. Jobs that are triggered by queue messages, blob creation etc, are still continuous jobs. The job itself runs continuously, it’s just that a particular “method” in the program will trigger if a queue message comes in etc.

If you publish your website now, you’ll also deploy your WebJob along with it. Easy!

What’s Next?

So far, all our WebJobs have been simple .NET Core console applications that are being run within the WebJob system. The code itself actually doesn’t know that it’s a WebJob at all! But if you’ve ever created WebJobs using the full .NET Framework, you know there are libraries for WebJobs that allow you to trigger WebJobs based on queue messages, blobs, timers etc. all from within code. Up until recently, these libraries weren’t ported to .NET Core, until now! Jump right into it now!

This is part 3 of a series on getting up and running with Azure WebJobs in .NET Core. If you are just joining us, it’s highly recommended you start back on Part 1 as there’s probably some pretty important stuff you’ve missed on the way.

Azure WebJobs In .NET Core

Part 1 – Initial Setup/Zip Deploy
Part 2 – App Configuration and Dependency Injection
Part 3 – Deploying Within A Web Project and Publish Profiles
Part 4 – Scheduled WebJobs
Part 5 – Azure WebJobs SDK

Deploying Within A Web Project

So far in this series we’ve been packaging up our WebJob as a stand alone service and deploying it as a zip file. There are two problems with this approach.

  • We are manually having to upload a zip to the Azure portal
  • 9 times out of 10, we just want to package the WebJob with the actual Website, and deploy them together to Azure

Solving the second issue actually solves the first too. Typically we will have our website deployment process all set up, whether that’s manual or via a build pipeline. If we can just package up our WebJob so it’s actually “part” of the website, then we don’t have to do anything special to deploy our WebJob to Azure.

If you’ve ever created an Azure WebJob in .NET Framework, you know in that ecosystem, you can just right click your web project and select “Add Existing Project As WebJob” and be done with it. Something like this :

Well, things aren’t quite that easy in the .NET Core world. Although I wouldn’t rule out this sort of integration into Visual Studio in the future, right now things are a little more difficult.

What we actually want to do is create a publish step that goes and builds our WebJob and places it into a particular folder in our Web project. Something we haven’t jumped into yet is that WebJobs are simply console apps that live inside the “App_Data” folder of our web project. Basically it’s a convention that we can make use of to “include” our WebJobs in the website deployment process.

Let’s say we have a solution that has a website, and a web job within it.

What we need to do is edit the csproj of our website project. We need to add something a little like the following anywhere within the <project> node :
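A sketch of that publish step (the project names and paths are from my example solution; adjust for your own):

```xml
<Target Name="PostPublishWebJob" AfterTargets="Publish">
  <Exec Command="dotnet publish ..\WebJobExample\WebJobExample.csproj -o $(ProjectDir)$(PublishDir)App_Data\Jobs\continuous\WebJobExample" />
</Target>
```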

What we are doing is saying when the web project is published, *after* publishing, also run the following command.

Our command is a dotnet publish command that calls publish on our WebJob, and says to output it (signified by the -o flag) to a folder in the web project’s output directory called “App_Data\Jobs\continuous\WebJobExample”.

Now a quick note on that output path. As mentioned earlier, WebJobs basically just live within the App_Data folder within a website. When we publish a website up to the cloud, Azure basically goes hunting inside these folders looking for webjobs to run. We don’t have to manually specify them in the portal.

A second thing to note is that while we are putting it in the “continuous” folder, you can also put jobs inside the “triggered” folder which are more for scheduled jobs. Don’t worry too much about this for now as we will be covering it in a later post, but it’s something to keep in mind.

Now on our *Website* project, we run a publish command : dotnet publish -c Release . We can head over to our website output directory and check that our WebJob has been published into the App_Data folder.

At this point, you can deploy your website publish package to Azure however you like. I don’t want to get too in depth on how to deploy the website specifically because it’s less about the web job, and more about how you want your deploy pipeline to work. However below I’ll talk about a quick and easy way to get up and running if you need something to just play around with.

Deploying Your Website And WebJob With Publish Profiles

I have to say that this is by no means some enterprise level deployment pipeline. It’s just a quick and easy way to validate your WebJobs on Azure. If you are a one man band deploying a hobby project, this could well suit your needs if you aren’t deploying all that often. Let’s get going!

For reasons that I haven’t been able to work out yet, the csproj variables are totally different when publishing from Visual Studio rather than the command line. So we actually need to edit the .csproj of our web project a little before we start. Instead of :
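That is, the Exec command from earlier (project names assumed):

```xml
<Exec Command="dotnet publish ..\WebJobExample\WebJobExample.csproj -o $(ProjectDir)$(PublishDir)App_Data\Jobs\continuous\WebJobExample" />
```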

We want :
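The same command with the $(ProjectDir) prefix dropped:

```xml
<Exec Command="dotnet publish ..\WebJobExample\WebJobExample.csproj -o $(PublishDir)App_Data\Jobs\continuous\WebJobExample" />
```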

So we remove the $(ProjectDir) variable. The reason is that when we publish from the command line, the $(PublishDir) variable is relative, whereas when we publish from Visual Studio it’s an absolute path. I tried working out how to do it within MSBuild with conditional builds etc., but frankly, you are typically only ever going to build one way or the other, so pick whichever one works for you.

If you head to your Azure Web App, on the overview screen, you should have a bar running along the top. You want to select “Get Publish Profile” :

This will download a .publishsettings file to your local machine. We are going to use this to deploy our site shortly.

Inside Visual Studio. Right click your website project, and select the option to Publish. This should pop up a box where you can select how you want to publish your website. We will be clicking the button right down the bottom left hand corner to “Import Profile”. Go ahead and click it, and select the .publishsettings file you just downloaded.

Immediately Visual Studio will kick into gear and push your website (Along with your WebJob) into Azure.

Once completed, we can check that our website has been updated (Visual Studio should immediately open a browser window with your new website), but on top of that we can validate our WebJob has been updated too. If we open up the Azure Portal for our Web App, and scroll down to the WebJob section, we should see the following :

Great! We managed to publish our WebJob up to Azure, but do it in a way that it just goes seamlessly along with our website too. As I mentioned earlier, this isn’t some high level stuff that you want to be doing on a daily basis for large projects, but it works for a solo developer or someone just trying to get something into Azure as painlessly as possible.

Verifying WebJob Files With Kudu

As a tiny little side note, I wanted to point out something if you ever needed to hastily change a setting on a WebJob on the fly, or you needed to validate that the WebJob files were actually deployed properly. The way to do this is using “Kudu”. The name of this has sort of changed but it’s all the same thing.

Inside your Azure Web App, select “Advanced Tools” from the side menu :

Notice how the icon is sort of a “K”… Like K for Kudu… Sorta.

Anyway, once inside you want to navigate to Debug Tools -> CMD. From here you can navigate through the files that actually make up your website. Most notably you want to head along to /site/wwwroot/App_Data/ where you will find your WebJob files. You can add/remove files on the fly, or even edit your appsettings.json files for a quick and dirty hack to fix bad configuration.

What’s Next?

So far all of our WebJobs have printed out “Hello World!” on repeat. But we can actually “Schedule” these jobs to run every minute, hour, day, or some combination of the lot. Best of all, we can do all of this with a single configuration file, without the need to write more C# code! You can check out Part 4 right here!