The most popular method of managing Azure resources programmatically is Azure Resource Manager templates – or ARM templates for short. Much like Terraform, it's a desired state tool: you define what you need, and Azure works out the actual details of how to make it so (for the most part, anyway!).

Over the years, I've run into a few gotchas with these templates that I seem to forget and hit time and time again. Things that on the surface should be simple, but are actually confusing as hell. Often I end up googling the same issue every 3 months when I run into it again. So rather than do a post for each of these, I thought, why not combine them all into a sort of cheatsheet? If I'm having to constantly look these up, maybe you are too!

For now, I've named this "3 annoying gotchas", but I'm likely to come back and edit this, so maybe by the time you read this the count will be a little higher!

Let’s get started!

You Need To “Concat” A Database Connection String

In my ARM templates, I typically spin up an Azure SQL Database and a Keyvault instance. I make the Keyvault instance rely on the SQL Database, and immediately take the connection string and push it into keyvault. I do this so that there is never a human interaction that sees the connection string, it’s just used inside the ARM template, and straight into Keyvault.

But there's an annoying gotcha of course! How do you get the connection string of an Azure SQL database in an ARM template? You can't! (Really, you can't!) Instead, you need to use string concatenation to build the connection string yourself before storing it.

As an example (and note, this is heavily edited, but should give you some idea):

    "parameters": {
        "sqlPassword": {
            "type": "securestring"
        }
    },
    "variables": {
        "sqlServerName": "MySQLServerName",
        "sqlDbName": "MySqlDatabase"
    },
    "resources": [
        {
            "type": "Microsoft.KeyVault/vaults/secrets",
            "name": "MyVault/SQLConnectionString",
            "apiVersion": "2018-02-14",
            "location": "[resourceGroup().location]",
            "properties": {
                "value": "[concat('Server=tcp:', reference(variables('sqlServerName')).fullyQualifiedDomainName, ',1433;Initial Catalog=', variables('sqlDbName'), ';Persist Security Info=False;User ID=', reference(variables('sqlServerName')).administratorLogin, ';Password=', parameters('sqlPassword'), ';Connection Timeout=30;')]"
            }
        }
    ]

Or if we pull out just the part that is creating our SQL Connection String :

[concat('Server=tcp:', reference(variables('sqlServerName')).fullyQualifiedDomainName, ',1433;Initial Catalog=', variables('sqlDbName'), ';Persist Security Info=False;User ID=', reference(variables('sqlServerName')).administratorLogin, ';Password=', parameters('sqlPassword'), ';Connection Timeout=30;')]

So why do we have to go to all this hassle just to get a connection string? There are actually two reasons:

  • A connection string may have additional configuration, such as a timeout value. So it’s usually better that you get the connection string exactly how you need it.
  • But the most important reason is that a SQL password, once set in Azure, is a black box. There is no retrieving it; you can only reset it. So from the ARM template's point of view, it can't hand you the connection string of a SQL database because it would never be able to get the password.

On that last note, that's also why, when you grab your connection string from the Azure portal, it comes with a {your_password} placeholder where your password should be.
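In fact, the ADO.NET connection string the portal hands you looks something like the below (the server, database and user names here are placeholders):

```text
Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=yourdatabase;Persist Security Info=False;User ID=youradmin;Password={your_password};Encrypt=True;Connection Timeout=30;
```

Azure can give you everything except the password, hence the placeholder.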

Connecting Web Apps/Functions To Application Insights Only Requires The Instrumentation Key

I talked about this a little in a previous post about connecting Azure Functions to App Insights. I think it could be a holdover from the early days of App Insights, when there wasn't as much magic going on and you really did have to do a bit of work to wire up web applications to App Insights. Now, however, it's as simple as adding the instrumentation key as an app setting and calling it a day.

For example :

  "value": "[reference(resourceId('Microsoft.Insights/components', variables('AppInsightsName')), '2014-04-01').InstrumentationKey]"

Also notice in this case, we can get the entire instrumentation key via the ARM template. I want to point this out because I’ve seen people manually create the Application Insights instance, then loop back around and run the ARM template with the key as an input parameter. You don’t have to do this! You can grab it right there in the template.
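As a rough sketch of how that fits into a template (variable names like webAppName and AppInsightsName are placeholders of my own, and the resource is heavily trimmed):

```json
{
    "type": "Microsoft.Web/sites",
    "name": "[variables('webAppName')]",
    "apiVersion": "2018-11-01",
    "location": "[resourceGroup().location]",
    "dependsOn": [
        "[resourceId('Microsoft.Insights/components', variables('AppInsightsName'))]"
    ],
    "properties": {
        "siteConfig": {
            "appSettings": [
                {
                    "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
                    "value": "[reference(resourceId('Microsoft.Insights/components', variables('AppInsightsName')), '2014-04-01').InstrumentationKey]"
                }
            ]
        }
    }
}
```

The dependsOn entry just makes sure the App Insights instance exists before the Web App tries to reference its key.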

And again, as long as you use the app setting name of "APPINSIGHTS_INSTRUMENTATIONKEY" on either your Web Application or Azure Function, you are good to go!

Parameters File Cannot Contain Template Expressions

There are many times where you read a tutorial that uses a parameters file with a keyvault reference.

As an example, consider the following parameters file :

"parameters": {
    "serviceBusName": {
        "reference": {
            "keyVault": {
                "id": "/subscriptions/GUID/resourceGroups/KeyVaultRG/providers/Microsoft.KeyVault/vaults/KeyVault"
            },
            "secretName": "serviceBusName"
        }
    }
}

The idea behind this is that for the serviceBusName parameter, we should go to Key Vault to find the value. However, there's something very wrong with this: we have a hardcoded subscription and resource group name. It makes far more sense for these to be dynamic, because between Dev, Test and Prod we may have different subscriptions and/or resource groups, right?

So, you may think this could be solved like so :

"parameters": {
    "serviceBusName": {
        "reference": {
            "keyVault": {
                "id": "[resourceId(subscription().subscriptionId, resourceGroup().name, 'Microsoft.KeyVault/vaults', parameters('KeyVaultName'))]"
            },
            "secretName": "serviceBusName"
        }
    }
}

But unfortunately :

resourceId function cannot be used while referencing parameters

You cannot use the resourceId function, or really any template expression (not even concat), inside a parameters file. It's static text only. What that means, frankly, is that Key Vault references from a parameters file are close to pointless. In no situation have I ever wanted a hardcoded subscription ID in an ARM template; it just wouldn't happen.

Microsoft’s solution for this is to push for the use of nested templates. In my personal view, this adds a tonne of complexity, but it’s an option. What I generally end up doing is trying to avoid Keyvault secrets at all. Usually my C# application is talking to keyvault anyway so there is no need for additional parameters like the above.

In any case, the actual point of this section is that a parameters file cannot be dynamic without using nested templates. Whether that's for Key Vault references or something else, you'll have to find a way around needing dynamic parameters.
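For completeness, if you do go down the nested/linked template route, the Key Vault reference moves out of the parameters file and into the main template, where expressions are allowed. A heavily trimmed sketch (the templateLink URL variable and keyVaultName parameter are assumptions of mine; note that a dynamic Key Vault id requires a linked template via templateLink, not an inline nested one):

```json
{
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2019-10-01",
    "name": "nestedDeployment",
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[variables('nestedTemplateUrl')]"
        },
        "parameters": {
            "serviceBusName": {
                "reference": {
                    "keyVault": {
                        "id": "[resourceId(subscription().subscriptionId, resourceGroup().name, 'Microsoft.KeyVault/vaults', parameters('keyVaultName'))]"
                    },
                    "secretName": "serviceBusName"
                }
            }
        }
    }
}
```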


I've recently been doing battle trying to get Azure Application Insights playing nice with an Azure Function. Because they're from the same family, I thought there wouldn't be an issue, but Microsoft's lack of documentation is really letting the team down here. This will be a short and sweet post that hopefully clears some things up.

Adding Application Insights

So the first thing that is different about using Application Insights with an Azure Function is that you don't need any additional NuGet packages. Under the hood, the packages that a function relies on out of the box themselves depend on the Application Insights package. So theoretically, everything is set up for you.

The only thing you actually need to do is set an app setting named "APPINSIGHTS_INSTRUMENTATIONKEY" somewhere in your application.

For a function hosted on Azure, this is easy: go to the Configuration tab of your Function App and add your instrumentation key there.

Locally, you will be using either local.settings.json or appsettings.json depending on how your function is set up. Generally, either will work but it mostly depends on your individual project how you are managing settings locally.
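For example, a local.settings.json with the key added might look like this (the storage and runtime values are just typical defaults, and the key itself is a placeholder):

```json
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "APPINSIGHTS_INSTRUMENTATIONKEY": "<your-instrumentation-key>"
    }
}
```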

Again, you don’t need to do anything to read this key, you just need to have it there and automagically, the function will wire everything up.

Now the other thing to note is that in the Azure portal, a Function App has an option to "Enable Application Insights" if you haven't already.

But all this actually does is add the instrumentation key to your app settings, just like we did above. There's no fancy behind the scenes wiring up; it's literally just a text box that sets the app setting for you.

Configuring Application Insights For Azure Functions

So the next thing I found was that you can supposedly edit your function's host.json file and add settings for App Insights there. What I found is that there is a tonne of settings that aren't documented (yet?). The official documentation looks good, but doesn't seem to have quite as many options for Application Insights as, say, using it in a regular C# app.

So I actually had to dig into the source code. That's where the actual settings you can configure live, some of which you cannot find documentation for, but can make some educated guesses about.

For me, I needed this :

"dependencyTrackingOptions": {
    "enableSqlCommandTextInstrumentation": true
}

This enables Application Insights to not only capture that a SQL command took place, but capture the actual text of the SQL so that I can debug any slow queries I see happening inside the application.
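For reference, as far as I can tell from the source, this fragment sits under the applicationInsights logging section of host.json, so the whole file ends up roughly like this (double check the nesting against your Functions version):

```json
{
    "version": "2.0",
    "logging": {
        "applicationInsights": {
            "dependencyTrackingOptions": {
                "enableSqlCommandTextInstrumentation": true
            }
        }
    }
}
```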

Again, I couldn’t find any documentation on setting this variable up, except the original source code. Yay open source!

If It Doesn’t Work, Chances Are There Is A Bug

The other thing I noticed about Application Insights in general is that there are a tonne of bugs that hang around for much longer than you might expect. For example, when I first added my app insights key to my function, I wasn’t collecting any information about SQL queries coming from the app. Asking around, people just assumed maybe you had to add another nuget package for that, or that I had set something up wrong.

In fact, there is a bug that has been open for 3 to 6 months where certain versions of Entity Framework suddenly don't work with App Insights. Insights captures the request correctly, but doesn't log any SQL dependency telemetry with any version of EF Core above 3.1.4.

How does this help you? Well, it probably doesn't, unless you are specifically missing SQL queries from your App Insights. But I just want to point out that by default, out of the box, adding Application Insights to an Azure Function should capture *everything*. You do not have to do anything extra. If you are not capturing something (for example, I saw another bug where it wasn't capturing HttpClient requests correctly), then almost certainly it's the mishmash of versions of something you are using causing the problem.


Azure's Key vault is a great secret store with excellent support in .NET (obviously!). But I recently ran into an issue that sent me in circles trying to work out how to load certificates stored in Key Vault from .NET. More specifically, there are a bunch of gotchas when loading those certificates in a .NET/.NET Core application running on Azure App Service. Given both are Azure services, you'll probably run into them one time or another.

But first, let’s just talk about the code to load a certificate from Key vault in general.

C# Code To Load Certificates From Keyvault

If you've already got a Key vault instance (or have newly created one), you'll need to ensure that you, as in your login to Azure, have been added to the access policy for the Key vault.

A quick note on Access Policies in general. They can become pretty complex and Key vault recently added the ability to use role based authentication and a few other nifty features. You can even authenticate against KV using a local certificate on the machine. I’m going to describe how I generally use it for my own projects and other small teams, which involves Managed Identity, but if this doesn’t work for you, you’ll need to investigate the best way of authenticating individuals against Key vault.

Back to the guide. If you created the Key vault yourself, then generally speaking you are automatically added to the access policy. But you can check by looking at the Key vault instance, and checking Access Policies under Settings and ensuring that your user has access.

Next, on your local machine, you need to log in to Azure, because the .NET code uses your identity to gain access to Key vault. To do that we need to run a couple of Azure CLI commands.

First, run the command

az login

This will pop up a browser window asking you to log in to Azure. Complete this, and your PowerShell window should update with the following:

You have logged in. Now let us find all the subscriptions to which you have access...
[.. All your subscriptions listed here..]

Now if you only have one subscription, you’re good to go. If you have multiple then you need to do something else :

az account set --subscription "YOUR SUBSCRIPTION NAME THAT HAS KEYVAULT"

The reason you need to do this is once logged into Azure, you only have access to one subscription at a time. If you have multiple subscriptions you need to set the subscription that contains your keyvault instance as your “current” one.

Finally onto the C# code.

Now obviously you’ll want to turn this into a helpful service with a re-useable method, but the actual C# code is simple. Here it is in one block :

var keyVaultUrl = "";   // Base URL of your Key vault, e.g. https://<your-vault>.vault.azure.net
var secretName = "";    // Name of the certificate in Key vault
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var client = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
var secret = await client.GetSecretAsync(keyVaultUrl, secretName);
// A certificate comes back as a Base64 encoded secret value
var privateKeyBytes = Convert.FromBase64String(secret.Value);
var certificate = new X509Certificate2(privateKeyBytes, string.Empty);

Again, people often leave comments like “Why don’t you load your keyvault name from App Settings?”. Well I do! But when I’m giving example code I want to break it down to the simplest possible example so that you don’t have to deconstruct it, and rebuild it to suit your own application.

With that out of the way, notice that when we call Key Vault, we don’t actually call “GetCertificate”. We just ask to get a secret. If that secret is a text secret, then it will come through as plain text. If it’s a certificate, then actually it will be a Base64 string, which we can then turn into a certificate.

Also note that we aren’t providing any sort of “authentication” to this code, that’s because it uses our managed identity to talk to Key vault.

And we are done! This is all the C# code you need. Now if you’re hosting on Azure App Service.. then that’s a different story.

Getting It Working On Azure App Service

Now I thought that deploying everything to an Azure App Service would be the easy part. But as it turns out, it’s a minefield of gotchas.

The first thing is that you need to turn on Managed Identity for the App Service. You can do this by going to your App Service, then Settings => Identity. Turn on System Assigned identity and save.

Now when you go back to your Key vault, go to Access Policies and search for the name of your App Service. Then you can add permissions for your App Service as if it was an actual user getting permissions.
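If you'd rather script this step than click through the portal, the same access policy can be expressed in an ARM template, roughly like so (the variable names are placeholders of mine, and I've only granted secret get/list here):

```json
{
    "type": "Microsoft.KeyVault/vaults/accessPolicies",
    "name": "[concat(variables('keyVaultName'), '/add')]",
    "apiVersion": "2019-09-01",
    "properties": {
        "accessPolicies": [
            {
                "tenantId": "[subscription().tenantId]",
                "objectId": "[reference(resourceId('Microsoft.Web/sites', variables('appServiceName')), '2019-08-01', 'Full').identity.principalId]",
                "permissions": {
                    "secrets": [ "get", "list" ]
                }
            }
        ]
    }
}
```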

So if you're loading certificates, there are 3 main gotchas, and all 3 will generate this error:

Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException: The system cannot find the file specified.

App Service Plan Level

You must be on a Basic plan or above for the App Service. It cannot be a shared instance (Either F1 or D1). The reason behind this is that behind the scenes, on windows, there is some “User Profile” gunk that needs to be loaded for certificates of any type to be loaded. This apparently does not work on shared plans.

Extra Application Setting

You must add an Application setting on the App Service called “WEBSITE_LOAD_USER_PROFILE” and set this to 1. This is similar to the above and is about the gunk that windows needs, but is apparently not loaded by default in Azure.
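If you're deploying the App Service with an ARM template, the same setting can go straight into the site's appSettings, roughly like this trimmed fragment:

```json
"siteConfig": {
    "appSettings": [
        {
            "name": "WEBSITE_LOAD_USER_PROFILE",
            "value": "1"
        }
    ]
}
```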


Extra Certificate Flags In C#

In your C# code, the only thing that seemed to work for me was adding a couple of extra flags when loading your certificate from the byte array. So we change this :

var certificate = new X509Certificate2(privateKeyBytes, string.Empty);

To this :

var certificate = new X509Certificate2(privateKeyBytes, string.Empty, X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable);

Finally, with all of these extras, you should be able to load a certificate from Key vault into your Azure App Service!

Further Troubleshooting

If you get the following error :

Microsoft.Azure.KeyVault.Models.KeyVaultErrorException: Access denied

In almost all cases, the managed identity you are running under (either locally or in Azure App Service) does not have access to the Key vault instance. If you’re getting this when trying to develop locally, generally I find it’s because you’ve selected the wrong subscription after using az login. If you’re running this in an App Service, I find it’s typically because you haven’t set up the managed identity between the App Service and Key vault.


Lately I seem to have run into a tonne of these types of errors when trying to host .NET Core applications inside IIS:

HTTP Error 502.5 - ANCM Out-Of-Process Startup Failure


HTTP Error 500.30 - ANCM In-Process Start Failure

And to be honest, they have been incredibly hard to debug. If you search for these errors on Google, you'll find hundreds of Stack Overflow posts detailing people running into them, often with very little follow up on how they solved the issue. Sometimes they'll even go as far as to say "I re-installed everything and now it works", which in most cases is not going to be viable. "Hey boss, let me just wipe this server real quick" generally doesn't cut it.

But hopefully, fingers crossed, this will be the last blog post you’ll ever have to read on these error messages!

Why Are There Two Error Messages?

So first let's touch on why there are two different messages (and error codes). Essentially, they are the exact same error but refer to different hosting models when running IIS in front of .NET Core. We won't dive too deep into what these hosting models are; at some point (.NET Core 2.2+, I believe) the default hosting model got swapped from Out-Of-Process to In-Process, so which message you see could depend entirely on the version of .NET Core you are running.

Just know that for all practical purposes, these two error messages are the same so if you are following some blog post that is talking about one of these error messages, but you have the other, you can still follow along as it may well solve your problem.

Solution #1 – Incorrect Or Non-Existent .NET Core Hosting Bundle

So if you’re using something like Azure App Services that abstract away all the server management for you, then you can probably skip this, but if you are managing the server yourself (Or you’re trying to run a .NET Core application on your PC), then definitely read on.

If you followed our guide to running .NET Core on IIS (which you definitely should), you'll know that you need to install the .NET Core "Hosting Bundle" for .NET Core web apps to work inside IIS. This is required even if you use self contained deployments. It is also required even if you install the .NET Core runtime on the machine. I'll repeat: you cannot host .NET Core web applications inside IIS unless you install the hosting bundle.

If you are on your server and aren't sure whether you have the hosting bundle installed, go to Add/Remove Programs in Windows and search for "hosting".

If you don't have this, you can download the hosting bundle from the .NET downloads page for your version.

It's also important to note: download the hosting bundle that matches the version of .NET Core you are running. I would presume things are backwards compatible, and if you download a newer version it will work with older versions of .NET Core, but it's super important to have at least the same version as your actual code.

And finally. Restart your machine. I cannot make that any more clear. I cannot tell you how many people install everything under the sun and nothing seems to “work”, but it’s because they haven’t restarted. For the most part, you can get away with just an IISReset, but restarting the machine is often easier and just makes sure that you are restarting absolutely everything you need to.

For Shared Hosting, you may need to open a ticket with your provider. Again, I’ve seen providers only have the .NET Core 2 Hosting Bundle and people are trying to deploy .NET Core 3.1 applications. It pays to ask!

Still not working? Move onto Solution #2.

Solution #2 – .NET Core Startup Process Fails

This one took me an absolute lifetime to work out, but as it turns out, the .NET Core startup process has to complete successfully before IIS can take over. That means if certain startup routines of .NET Core fail, IIS doesn't know how to handle the exception and just gives a generic error. In fact, that's why it's called an "ANCM Startup Failure"!

So what sort of things are we talking about? Well for me, the most common one has been integrating with KeyVault. If you follow the official documentation, it tells you to add it to your CreateHostBuilder method. CreateHostBuilder is run on startup and needs to complete *before* IIS can take over. If for some reason your app cannot connect to KeyVault (firewall, permissions etc.), your application will crash and IIS will have no option but to bail out as well.

Generally speaking, most of the times I see these ANCM startup failures has been when I’ve added code to the CreateHostBuilder method in my program.cs. Truth be told, it’s actually 100% of the time.

But how can we diagnose them? The easiest way is to run the application via Kestrel directly on the server. On a metal server (e.g. one you can RDP to), this means simply going to the folder where our application DLLs are and running "dotnet myapplication.dll" to see if it runs. What you should see is the startup error printed to the console, which should point you in the right direction. Remember, in production, configuration, firewalls, keys and secrets could all be completely different, so just because the application starts up fine on your local machine doesn't mean it won't completely blow up on a remote one.

For sites hosted inside things like Azure App Services, you need to find a way to run the application manually yourself. On Azure, this means using Kudu, but on shared hosting services, you might be completely out of luck. If that’s you, you really have two options :

  • Open a support ticket to see if they can run your application manually in Kestrel like above, and hand you the error. (Again, this might be worth it to see if they even support the version of .NET Core you are intending to run).
  • Guess. Yes guess. Check your applications program.cs and how your HostBuilder is created. Start removing things and redeploying until you find the thing that is breaking it. If possible, create a simple hello world application and deploy it, if it also breaks, then it points more towards the hosting bundle than anything wrong with your startup code.

Debugging With stdout

Still need further debugging? Here's another tip!

After publishing a .NET Core app, you will find in the publish directory there is a web.config with minimal information, but you should see something like so :

<aspNetCore processPath="dotnet" arguments=".\MyTestApplication.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />

Pretty obvious here: we just change stdoutLogEnabled="false" to stdoutLogEnabled="true" and redeploy. This should enable additional IIS logging to .\logs and allow us to debug further. Honestly, I've found this really hit and miss. I've found running the application directly from the console on the server to be much more helpful, but this is also an option if you have no other way.
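For clarity, the edited line ends up looking like this (the DLL name is obviously whatever your app is called):

```xml
<aspNetCore processPath="dotnet" arguments=".\MyTestApplication.dll" stdoutLogEnabled="true" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
```

Make sure the app pool identity is able to create the logs folder, otherwise nothing gets written.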

Still Not Working?

Still not working? Or solved it a different way? Drop a comment below with a link to a Git repo and let the community help. This is honestly an infuriating error and I've had 20 year veteran programmers pull their hair out trying to solve it, so seriously, drop a line. If you aren't comfortable leaving a public comment linking to your code, try wade[at] and I'll see what I can do!


In a previous post we talked about setting up IIS to host a .NET Core web app, and since then I've had plenty of questions along the lines of: why does ASP.NET Core even have the Kestrel web server (the inbuilt web server of the .NET Core platform) if you just host it in IIS anyway? Especially now that .NET Core can run InProcess with IIS, it's almost like having Kestrel is redundant. In some ways that may be true, and for *most* use cases you will want to use IIS. But here's my quick rundown of why that might (or might not) be the case.

Kestrel Features (Or Lack Thereof)

Now Kestrel is pretty full featured, but it's actually lacking a few fundamentals that may be a bit of a blocker. Some of which are (but not limited to):

  • HTTP access logs aren’t collected in Kestrel. For some, this doesn’t matter. But for others that use these logs as a debugging tool (Or pump them to something like Azure Diagnostics etc), this could be an issue.
  • Multiple apps on the same port are not supported in Kestrel, simply by design. When hosting on IIS, IIS itself listens on port 80 and "binds" websites to specific URLs. Kestrel on the other hand binds to a port (still with a website URL if you like), and you can't then start another .NET Core Kestrel instance on the same port. It's a one to one mapping.
  • Windows Authentication does not exist on Kestrel as it’s cross platform (more on that later).
  • IIS has a direct FTP integration setup if you deploy via this method (Kestrel does not)
  • Request Filtering (e.g. Blocking access to certain file extensions, folders, verbs etc) is much more fully featured in IIS (Although some of this can be “coded” in Kestrel in some cases).
  • Mime Type Mapping (e.g. A particular file extension being mapped to a particular mime type on response) is much better in IIS.

I would note that many feature comparisons floating around out there are very dated. Things like request limits (e.g. max file upload size/POST body size) and SSL certificates have been implemented in Kestrel for a few years now. Even the above list may slowly shrink over time, but for now, just know that there are things you may be used to in IIS that are still missing from Kestrel.
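As one example of how Kestrel has caught up, endpoint and certificate binding can now be done purely from configuration rather than code. A sketch of an appsettings.json fragment (the ports and certificate path are placeholders):

```json
{
    "Kestrel": {
        "Endpoints": {
            "Http": {
                "Url": "http://localhost:5000"
            },
            "Https": {
                "Url": "https://localhost:5001",
                "Certificate": {
                    "Path": "localhost.pfx",
                    "Password": "<certificate-password>"
                }
            }
        }
    }
}
```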

Cross Platform

Obviously .NET Core and by extension, Kestrel, is cross platform. That means it runs on Windows, Linux and Mac. IIS on the other hand is Windows only and probably forever will be. For those not on Windows systems, the choice of even using IIS is a non existent one. In these cases, you are comparing Kestrel to things like NGINX which can act as a reverse proxy and forward requests to Kestrel.

In some of these cases, especially if you don't have experience using NGINX or Apache, Kestrel is super easy to run on Linux (e.g. just run your published app).

Server Management

Just quickly, anecdotally, the management of IIS is much easier than managing a range of Kestrel web services. Kestrel itself can be run from the command line, but therein lies the issue: you often have to set up your application as a Windows Service (more info here: Hosting An ASP.NET Core Web App As A Windows Service In .NET Core 3) so that it automatically starts on server restarts etc. You end up with all these individual Windows Services that you have to install/manage separately. On top of that, everything being code based can be a boon for source control and IaC, but it can also be a headache to configure features across a range of applications. If you are used to setting things up in IIS and think it's a good experience, then you are better off sticking with it.

Existing Setup Matters

Your existing setup may affect your decision to use Kestrel vs IIS in two very distinct ways.

If you are already using IIS, and you want to keep it that way and manage everything through one portal, then you are going to pick IIS no matter what. And it makes sense, keeping things simple is almost always the right route. And IIS has great support for hosting .NET Core web apps (Just check our guide out here : Hosting An ASP.NET Core Web Application In IIS).

But conversely, you may have IIS installed already, but it isn't set up to host .NET Core apps and you don't want to (or can't) install anything on the machine. In these cases, Kestrel is great for sitting side by side with your existing web server while running completely in isolation. It can listen on a non-standard port, leaving your entire IIS setup intact. The same goes for hosting on a machine that doesn't currently have IIS installed: Kestrel can be stood up immediately from code with little to no configuration, without modifying the machine's features or running an installer of any kind. It's basically completely portable.

Microsoft’s Recommendation

You'll often find articles mentioning that Microsoft recommends using a reverse proxy, whether that be IIS, NGINX or Apache, in front of Kestrel as your public facing web server/proxy. Anecdotally, I've found this mentioned less and less as Kestrel gains more features. You will actually struggle to find any mention of *not* using Kestrel on public facing websites in the official documentation these days (for .NET Core 3+). That's not to say it's a good idea to use Kestrel for everything, just that it's less of a security risk these days.

My Final Take

Personally,  I use both.

I use IIS as my de facto choice because 9 times out of 10, when working in a Windows environment, there is already a web server set up hosting .NET Framework apps. It just makes things easier to set up and manage, especially for developers new to .NET Core: they don't have another paradigm to learn.

I use Kestrel generally when hosting smaller APIs that either don't have IIS set up, don't need IIS (e.g. a web app that just runs on the local machine for local use), or have IIS set up but I don't want to install the hosting bundle. Personally, I don't end up using many IIS features like Windows Auth or FTP anyway, so I don't miss them.


For the past few years I've been almost exclusively using Azure's PaaS websites to host my .NET Core applications, whereby I set up my Azure DevOps instance to point at my Azure website, and at the click of a button my application is deployed without me really having to think too hard about "how" it's being hosted.

Well, recently I had to set up a .NET Core application to run on a fresh server behind IIS, and while it's relatively straightforward, there were a few things I wish I knew beforehand. Nothing's too hard, but some guides out there are waaayyy overkill and take hours to read, let alone implement what they are saying. So hopefully this is a bit more of a straightforward guide.

You Need The ASP.NET Core Hosting Bundle

One thing that I got stuck on early on was that for .NET Core to work inside IIS, you actually need to do an install of a “Hosting Module” so that IIS knows how to run your app.

This actually frustrated me a bit at first because I wanted to do "Self Contained" deploys where everything the app needed to run was published to the server. So... if I'm publishing what essentially amounts to the full runtime with my app, why the hell do I still need to install stuff on the server!? But, it makes sense. IIS can't just magically know how to forward requests to your app, it needs just a tiny bit of help. Just in case someone is skimming this post, I'm going to bold it :

Self contained .NET Core applications on IIS still need the ASP.NET Core hosting bundle

So where do you get this "bundle"? Annoyingly it's not on the main .NET Core homepage; you need to go to the download page for your specific version. For example here :

It can be maddening trying to find this particular download link. It will be on the right hand side buried in the runtime for Windows details.

Note that the "bundle" is the module packaged with the .NET Core runtime. So once you've installed it, for now at least, self contained deployments aren't so great because you've just installed the runtime anyway. Although for minor version bumps it's handy to keep doing self contained deploys, because you won't always have to keep pace with the runtime versions on the server.

After installing the .NET Core hosting bundle you must restart the server OR run an IISReset. Do not forget to do this!

In Process vs Out Of Process

So you’ve probably heard of the term “In Process” being bandied about in relation to .NET Core hosting for a while now. I know when it first came out in .NET Core 2.2, I read a bit about it but it wasn’t the “default” so didn’t take much notice. Well now the tables have turned so to speak, so let me explain.

From .NET Core 1.X to 2.2, the default way IIS hosted a .NET Core application was by running an instance of Kestrel (The .NET Core inbuilt web server), and forwarding the requests from IIS to Kestrel. Basically IIS acted as a proxy. This works but it’s slow since you’re essentially doing a double hop from IIS to Kestrel to serve the request. This method of hosting was dubbed “Out Of Process”.

In .NET Core 2.2, a new hosting model was introduced called “In Process”. Instead of IIS forwarding the requests on to Kestrel, it serves the requests from within IIS. This is much faster at processing requests because it doesn’t have to forward on the request to Kestrel. This was an optional feature you could turn on by using your csproj file.

Then in .NET Core 3.X, nothing changed per se in terms of how things were hosted. But the defaults were reversed, so now In Process was the default and you could use the csproj flag to run everything as Out Of Process again.

Or in tabular form :

Version          | Supports Out Of Process | Supports In Process | Default
.NET Core <2.2   | Yes                     | No                  | N/A
.NET Core 2.2    | Yes                     | Yes                 | Out Of Process
.NET Core 3.X    | Yes                     | Yes                 | In Process

Now to override the defaults, you can add the following to your csproj file (Picking the correct hosting model you want).
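For example, to force the Out Of Process model you would add the `AspNetCoreHostingModel` property (this is the documented MSBuild property the SDK reads; swap the value to `InProcess` as needed):

```xml
<PropertyGroup>
  <AspNetCoreHostingModel>OutOfProcess</AspNetCoreHostingModel>
</PropertyGroup>
```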


As to which one should you use? Typically, unless there is a specific reason you don't want it, InProcess will give you much better performance and is the default in .NET Core 3+ anyway.

After reading this section you are probably sitting there thinking… Well.. So I’m just going to use the default anyway so I don’t need to do anything? Which is true. But many guides spend a lot of time explaining the hosting models and so you’ll definitely be asked questions about it from a co-worker, boss, tech lead etc. So now you know!

UseIIS vs UseIISIntegration

There is one final piece to cover before we actually get to setting up our website. Now *before* we got the “CreateDefaultBuilder” method as the default template in .NET Core, you had to build your processing pipeline yourself. So in your program.cs file you would have something like :

var host = new WebHostBuilder()
    .UseKestrel()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

So here we can actually see that there is a call to UseIISIntegration. There is actually another call you may see out in the wild called UseIIS, without the integration. What's the difference? It's actually quite simple. UseIISIntegration sets up the Out Of Process hosting model, and UseIIS sets up the InProcess model. So in theory you pick one or the other, but in practice CreateDefaultBuilder actually calls them both, and later on the SDK works out which one you are going to use based on the default or your csproj flag described above (more on that in the section below).

So again, something that will be handled for you by default, but you may be asked a question about.

Web.Config Shenanigans

One issue we have is that for IIS to understand how to talk to .NET Core, it needs a web.config file. Now if you’re using IIS to simply host your application but not using any additional IIS features, your application probably doesn’t have a web.config to begin with. So here’s what the .NET Core SDK does.

If you do not have a web.config in your application, when you publish your application, .NET Core will add one for you. It will contain details for IIS on how to start your application and look a bit like this :

  <location path="." inheritInChildApplications="false">
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      <aspNetCore processPath="dotnet" arguments=".\MyTestApplication.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />

So all it’s doing is adding a handler for IIS to be able to run your application (Also notice it sets the hosting model to InProcess – which is the default as I’m running .NET Core 3.X).

If you do have a web.config, it will then append/modify your web.config to add in the handler on publish. So for example if you are using web.config to configure... I don't know, MIME types. Or maybe some basic Windows authorization. Then it's basically going to append the handler to the bottom of your own web.config.

There’s also one more piece to the puzzle. If for some reason you decide that you want to add in the handler yourself (e.g. You want to manage the arguments passed to the dotnet command), then you can actually copy and paste the above into your own web.config.

But. There is a problem. 

The .NET Core SDK will also always try and modify this web.config on publish to be what it *thinks* the handler should look like. So for example I copied the above and fudged the name of the DLL it was passing in as an argument. I published and ended up with this :

arguments=".\MyTestApplication.dll .\MyTestApplicationasd.dll"

Notice how it’s gone “OK, you are running this weird dll called MyTestApplicationasd.dll, but I think you should run MyTestApplication.dll instead so I’m just gonna add that for you”. Bleh! But there is a way to disable this!

Inside your csproj you can add a special flag like so :
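The flag in question is `IsTransformWebConfigDisabled` (this is the documented MSBuild property for skipping the SDK's web.config transformation):

```xml
<PropertyGroup>
  <IsTransformWebConfigDisabled>true</IsTransformWebConfigDisabled>
</PropertyGroup>
```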


This tells the SDK don’t worry, I got this. And it won’t try and add in what it thinks your app needs to run under IIS.

Again, another section on “You may need to know this in the future”. If you don’t use web.config at all in your application then it’s unlikely you would even realize that the SDK generates it for you when publishing. It’s another piece of the puzzle that happens in the background that may just help you in the future understand what’s going on under the hood when things break down.

An earlier version of this section talked about adding your own web.config to your project so you could point IIS to your debug folder. On reflection, this was bad advice. I always had issues with projects locking and the “dotnet build” command not being quite the same as the “dotnet publish”. So for that reason, for debugging, I recommend sticking with IIS Express (F5), or Kestrel by using the dotnet run command. 

IIS Setup Process

Now you’ve read all of the above and you are ready to actually set up your website. Well that’s the easy bit!

First create your website in IIS as you would a standard .NET Framework site :

You'll notice that I am pointing to the *publish* folder. As described in the section above about web.config, this is because my particular application does not have a web.config of its own, and therefore I cannot just point to my regular build folder, even if I'm just testing things out. I need to point to the publish folder where the SDK has generated a web.config for me.

You’ll also notice that in my case, I’m creating a new Application Pool. This is semi-important and I’ll show you why in a second.

Once you've created your website, go to your Application Pool list, select your newly created App Pool, and hit "Basic Settings". From there, you need to ensure that .NET CLR Version is set to "No Managed Code". This tells IIS not to kick off the .NET Framework pipeline for your .NET Core app.

Obviously if you want to use shared application pools, then you should create a .NET Core app pool that sets up No Managed Code.

And that’s it! That’s actually all you need to know to get up and running using IIS to host .NET Core! In a future post I’ll actually go through some troubleshooting steps, most notably the dreaded HTTP Error 403.14 which can mean an absolute multitude of things.


Note, this tutorial is about hosting an ASP.NET Core web app as a windows service, specifically in .NET Core 3.

If you are looking to host a web app as a service in .NET Core 2, check out this other tutorial : Hosting An ASP.NET Core Web Application As A Windows Service In .NET Core 2

If you are looking to run a Windows Service as a “worker” or for background tasks, then you’ll want this tutorial : Creating Windows Services In .NET Core – Part 3 – The “.NET Core Worker” Way

This is actually somewhat of a duplicate of a previous post I did here. But that was using .NET Core 2+, and since then, things have changed quite a bit. Well… Enough that when I tried to follow my own tutorial recently I was wondering what the hell I was going on about when nothing worked for me this time around.

Why A Web App As A Windows Service

So this tutorial is about running a Web App as a Windows Service. Why would that ever be the case? Why would you not have a web app running under something like IIS? Or why a Windows Service specifically?

Well the answer to why not under IIS is that in some cases you may not have IIS on the machine. Or you may have IIS but it’s not set up to host .NET Core apps anyway. In these cases you can do what’s called a self contained deploy (Which we’ll talk about soon), where the web app runs basically as an exe that you can double click and suddenly you have a fully fledged web server up and running – and portable too.

For the latter, why a windows service? Well if we follow the above logic and we have an exe that we can just click to run, then a windows service just gives us the ability to run on startup, run in the “background” etc. I mean, that’s basically all windows services are right? Just the OS running apps on startup and in the background.

Running Our Web App As A Service

The first thing we need to do is make our app compile down to an EXE. Well... we don't have to, but it makes things a heck of a lot easier. To do that, we just need to edit our csproj and add an OutputType of Exe. It might end up looking like so :
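A minimal sketch of what the csproj might look like (the target framework moniker will depend on your project):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
  </PropertyGroup>
</Project>
```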


In previous versions of .NET Core you had to install the package Microsoft.AspNetCore.Hosting.WindowsServices, however as of right now with .NET Core 3+, you instead need to use Microsoft.Extensions.Hosting.WindowsServices. I tried searching around for when the change happened, and why, and maybe information about the differences, but other than opening up the source code I couldn't find much out there. For now, take my word on it. We need to install the following package into our Web App :

Install-Package Microsoft.Extensions.Hosting.WindowsServices

Now there is just a single line we need to edit. Inside program.cs, you should have a "CreateHostBuilder" method. You might already have some custom configuration going on, but you just need to tack a call to "UseWindowsService()" onto the end.

return Host.CreateDefaultBuilder(args)
    .ConfigureWebHostDefaults(webBuilder =>
    {
        webBuilder.UseStartup<Startup>();
    })
    .UseWindowsService();
And that’s all the code changes required!

Deploying Our Service

… But we are obviously not done yet. We need to deploy our service right!

Open a command prompt as an Administrator, and run the following command in your project folder to publish your project :

dotnet publish -c Release

Next we can use standard Windows Service commands to install our EXE as a service. Move your command prompt to your output folder (probably along the lines of C:\myproject\bin\Release\netcoreapp3.0\publish), and run something like the following to install as a service (note that sc wants the full path to the exe in binPath, which is why the install.bat below uses %~dp0) :

sc create MyApplicationWindowsService binPath= myapplication.exe

Doing the full install is usually pretty annoying to do time and time again, so what I normally do is create an install.bat and uninstall.bat in the root of my project to run a set of commands to install/uninstall. A quick note when creating these files: create them in something like Notepad++ to ensure that the file is saved as UTF8 *without BOM*, otherwise you get all sorts of weird errors :

The contents of my install.bat file looks like :

sc create MyService binPath= %~dp0MyService.exe
sc failure MyService actions= restart/60000/restart/60000/""/60000 reset= 86400
sc start MyService
sc config MyService start= auto

Keep the weird %~dp0 in there, as that expands to the directory the batch file lives in (weird I know!).

And the uninstall.bat :

sc stop MyService
timeout /t 5 /nobreak > NUL
sc delete MyService

Ensure these files are set to copy if newer in Visual Studio, and now when you publish your project, you only need to run the .bat files from an administrator command prompt and you are good to go!

Doing A Self Contained Deploy

We talked about it earlier that the entire reason for running the Web App as a Windows Service is so that we don’t have to install additional tools on the machine. But that only works if we are doing what’s called a “self contained” deploy. That means we deploy everything that the app requires to run right there in the publish folder rather than having to install the .NET Core runtime on the target machine.

All we need to do is run our dotnet publish command with a few extra flags :

dotnet publish -c Release -r win-x64 --self-contained

This tells the .NET Core SDK that we want to release as self contained, and it’s for Windows.

Your output path will change from bin\Release\netcoreapp3.0\publish  to \bin\Release\netcoreapp3.0\win-x64\publish

You’ll also note the huge amount of files in this new output directory and the size in general of the folder. But when you think about it, yeah, we are deploying the entire runtime so it should be this large.

Content Root

The fact that .NET Core is open source literally saves hours of debugging every single time I work on a greenfield project, and this time around is no different. I took a quick look at the actual source code of what the call to UseWindowsService does here. What I noticed is that it sets the content root specifically for when it's running under a Windows Service. I wondered how this would work if I was reading a local file from disk inside my app while running as a Windows Service. Normally I would just write something like :
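A sketch of the naive version ("myfile.json" is just a placeholder name) :

```csharp
// Resolves relative to the current working directory, which is NOT
// the exe's folder when the app is running as a Windows Service.
var contents = File.ReadAllText("myfile.json");
```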


But… Obviously there is something special when running under a Windows Service context. So I tried it out and my API bombed. I had to check the Event Viewer on my machine and I found :

Exception Info: System.IO.FileNotFoundException: Could not find file 'C:\WINDOWS\system32\myfile.json'.

OK. So it looks like when running as a Windows Service, the "root" of my app thinks it's inside System32. Oof. But again, looking at the source code from Microsoft gave me the solution. I can simply borrow the same way they set the content root, and use it to load my file from the correct location :

File.ReadAllText(Path.Combine(AppContext.BaseDirectory, "myfile.json"));

And we are back up and running!



This is part 5 of a series on getting up and running with Azure WebJobs in .NET Core. If you are just joining us, it’s highly recommended you start back on Part 1 as there’s probably some pretty important stuff you’ve missed on the way.

Azure WebJobs In .NET Core

Part 1 – Initial Setup/Zip Deploy
Part 2 – App Configuration and Dependency Injection
Part 3 – Deploying Within A Web Project and Publish Profiles
Part 4 – Scheduled WebJobs
Part 5 – Azure WebJobs SDK

Are You A Visual Learner?

If you are a visual learner (or you are here actually looking to pass your Azure exams), there is a great video course that I highly recommend from Scott Duffy that covers many Azure serverless functions on the way to actually passing the Azure Developer exam. While we cover WebJobs pretty well here, I still recommend checking it out if you are interested in a more in-depth view of Azure functions as a whole.
View Azure Developers Course Here

Where Are We Up To?

Thus far, we’ve mostly been playing around with what amounts to a console app and running it in Azure. If anything it’s just been a tidy environment for running little utilities that we don’t want to spin up a whole VM for. But if you’ve used WebJobs using the full .NET Framework before, you know there are actually libraries to get even more out of Web Jobs. Previously this wasn’t available for .NET Core applications, until now!

Installing The WebJob Packages

Go ahead and create a stock-standard .NET Core console application for us to test things out on.

The first thing we need to do with our new app is install a nuget package from Microsoft that opens up quite a few possibilities.

Install-Package Microsoft.Azure.WebJobs

Note that this needs to be at least version 3.0.0. At the time of writing the latest is 3.0.2, so you shouldn't have any issues, but keep it in mind if you are trying to migrate an old project.

Now, something that changed from version 2.X to 3.X of the library is that you also need to reference a second package called "Microsoft.Azure.WebJobs.Extensions". This is actually pretty infuriating because nowhere in any documentation does it talk about this. There is this "helpful" message on the official docs :

The instructions tell how to create a WebJobs SDK version 2.x project. The latest version of the WebJobs SDK is 3.x, but it is currently in preview and this article doesn’t have instructions for that version yet.

So it was only through trial and error that I actually got things up and running. You will want to run :

Install-Package Microsoft.Azure.WebJobs.Extensions

I honestly have no idea what exactly you could do without this as it seemed like literally every single trigger/webjob function I tried would not work without this library. But alas.

Building Our Minimal Timer WebJob

I just want to stress how different version 3.X of WebJobs is from 2.X. It feels almost like a complete rework, and if you are coming from an older version, you may think some of this looks totally funky (and that's because it is a bit). But just try it out and have a play and see what you think.

First thing we want to do is create a class that holds some of our WebJob logic. My code looks like the following :

public class SayHelloWebJob
{
    [Singleton]
    public static void TimerTick([TimerTrigger("0 * * * * *")] TimerInfo myTimer)
    {
        Console.WriteLine($"Hello at {DateTime.UtcNow.ToString()}");
    }
}

So a couple of notes :

  • The Singleton attribute just says that there should only ever be one of this particular WebJob method running at any one time. e.g. If I scale out, it should still only have one instance running.
  • The “TimerTrigger” defines how often this should be run. In my case, once a minute.
  • Then I’m just writing out the current time to see that we are running.

In our Main method of our console application, we want to add a new Host builder object and start our WebJob. The code for that looks like so :

static void Main(string[] args)
{
    var builder = new HostBuilder()
        .ConfigureWebJobs(webJobConfiguration =>
        {
            webJobConfiguration.AddTimers();
            webJobConfiguration.AddAzureStorageCoreServices();
        })
        .ConfigureServices(serviceCollection => serviceCollection.AddTransient<SayHelloWebJob>());

    builder.Build().Run();
}

If you've used WebJobs before, you're probably used to using the JobHostConfiguration class to do all your configuration. That's now gone, replaced with the HostBuilder. Chaining is all the rage these days in the .NET Core world, so it looks like WebJobs have been given the same treatment.

Now you'll notice that I've added a call to the chain called AddAzureStorageCoreServices(), which probably looks a little weird given we aren't using anything with Azure at all, right? We are just trying to run a hello world on a timer. Well, Microsoft says bollox to that and you must use Azure Blob Storage to store logs and a few other things. You see, when you use Singleton (or in general when what you are doing needs to run on a single instance), it uses Azure Blob Storage to create a lock so no other instance can run. It's smart, but also sort of annoying if you really don't care and want to throw something up fast. It's why I always walk people through creating a Web Job *without* this library first, because it starts adding a whole heap of things on top that just confuse what you are trying to do.

Anyway, you will need to create an appsettings.json file in the root of your project (ensure it's set to Copy Always to your output directory). It should contain (at least) the following :

  "ConnectionStrings": {
    "AzureWebJobsDashboard": "AnAzureStorageConnectionString",
    "AzureWebJobsStorage": "AnAzureStorageConnectionString"

For the sake of people banging their heads against the wall with this and searching Google. Here’s something to help them.

If you are getting the following error :

Unhandled Exception: System.InvalidOperationException: Unable to resolve service for type ‘Microsoft.Azure.WebJobs.DistributedLockManagerContainerProvider’ while attempting to activate ‘Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor’.

This means that you haven’t added the call to AddAzureStorageCoreServices()

If you are instead getting the following error :

Unhandled Exception: Microsoft.Azure.WebJobs.Host.Listeners.FunctionListenerException: The listener for function ‘SayHelloWebJob.TimerTick’ was unable to start. —> System.ArgumentNullException: Value cannot be null.
Parameter name: connectionString
at Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(String connectionString)

This means that you did add the call to add the services, but it can’t find the appsetting for the connection strings. First check that you have them formatted correctly in your appsettings file, then ensure that your appsettings is actually being output to your publish directory correctly.
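If the file isn't making it to the output folder, an item like the following in your csproj (standard MSBuild, shown here as a sketch) forces it to be copied on build and publish:

```xml
<ItemGroup>
  <None Update="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
```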

Moving on!

In the root of our project, we want to create a run.cmd  file to kick off our WebJob. We went over this in Part 1 of this series, so if you need a reminder feel free to go back.

I'm just going to go ahead and Publish/Zip my project up to Azure (again, this was covered in Part 1 in case you need help!). And what do you know!

[12/05/2018 05:35:43 > d0fd33: INFO] Application started. Press Ctrl+C to shut down.
[12/05/2018 05:35:43 > d0fd33: INFO] Hosting environment: Production
[12/05/2018 05:35:43 > d0fd33: INFO] Content root path: D:\local\Temp\jobs\continuous\WebJobSDKExample\ydxruk4e.inh\publish\
[12/05/2018 05:36:00 > d0fd33: INFO] Hello at 12/5/2018 5:36:00 AM

Taking This Further

At this point, so much of this new way of doing WebJobs is simply trial and error. I had to guess and bumble my way through so much just to get to this point. As I talked about earlier, the documentation is way behind this version of the library which is actually pretty damn annoying. Especially when the current release on Nuget is version 3.X, and yet there is no documentation to back it up.

One super handy tool I used was to look at the Github Repo for the Azure Webjobs SDK. Most notably, there is a sample application that you can go and pick pieces of out of to get you moving along.

What’s Next?

To be honest, this was always going to be my final post in this series. Once you reach this point, you should already be pretty knowledgeable about WebJobs, and through the SDK, you can just super power anything you are working on.

But watch this space! I may just throw out a helpful tips post in the future of all the other annoying things I found about the SDK and things I wished I had known sooner!


This is part 4 of a series on getting up and running with Azure WebJobs in .NET Core. If you are just joining us, it’s highly recommended you start back on Part 1 as there’s probably some pretty important stuff you’ve missed on the way.

Azure WebJobs In .NET Core

Part 1 – Initial Setup/Zip Deploy
Part 2 – App Configuration and Dependency Injection
Part 3 – Deploying Within A Web Project and Publish Profiles
Part 4 – Scheduled WebJobs
Part 5 – Azure WebJobs SDK


WebJob Scheduling For .NET Core?

So in this part of our tutorial on WebJobs, we are going to look at how we can set WebJobs on schedules in .NET Core. Now, I just want to emphasize that this part isn't really too .NET Core specific; in fact you can use these exact steps to run any executable as a WebJob on a schedule. I just felt that when I was getting up and running, it was helpful to understand how I could get small little "batch" jobs to run on a schedule in the simplest way possible.

If you feel like you already know all there is about scheduling jobs, then you can skip this part altogether!

Setting WebJob Schedule via Azure Portal

So even though in our last post, we were deploying our WebJob as part of our Web Application, let’s take a step back and pretend that we are still uploading a nice little raw executable via the Azure Portal (For steps on how to make that happen, refer back to Part 1 of this series).

When we go to upload our zip file, we are actually presented with the option to make things scheduled right from the get go.

All we need to do is make our "Type" of WebJob be Triggered. As a side note, many people confuse this with "triggering" a WebJob through something like a queue message. It's not quite the same. We'll see this in a later post, but for now think of a "triggered" WebJob as referring to either a "Manual" trigger, e.g. you click run inside the portal, or "Scheduled", which runs every X minutes/hours/days etc.

Now our “CRON Expression” is like any other time you’ve used CRONs. Never used them before? Well think of it like a string of numbers that tells a computer how often something should run. You’ll typically see this in Linux systems (Windows Task Scheduler for example is more GUI based to set schedules). Here’s a great guide to understanding CRON expressions :

A big big word of warning. While many systems only allow CRON expressions down to the minute, Azure allows CRON syntax down to the second. So there will be 6 parts to the CRON instead of 5, in case you can't work out why it's not accepting your expression. This is also pretty important so you don't overwhelm your site: you may think your batch job is going to run once a minute when really it goes crazy running once a second.
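For example, here's the same "every 5 minutes" schedule in both forms (field order for the 6-part form is second, minute, hour, day, month, day-of-week):

```
*/5 * * * *       standard 5-field cron : every 5 minutes
0 */5 * * * *     Azure 6-field cron    : every 5 minutes (at second 0)
```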

Once created, our application will run on our schedule like clockwork!

Editing An Existing WebJob Schedule via Azure Portal

So, about editing the schedule of a WebJob in the portal... well, you can't. Annoyingly there is no way via the portal GUI to edit the schedule of an existing WebJob. Probably even more frustratingly, there is not even a way to stop a scheduled WebJob from executing. So if you accidentally set something to run once a second instead of once a minute, or your WebJob is going off the rails and you want to stop it immediately to investigate, you can't without deleting the entire WebJob.

Or so Azure wants you to think!

Luckily we have Kudu to the rescue!

You should be able to navigate to D:\home\site\wwwroot\App_Data\jobs\triggered\{YourWebJobName}\publish  via Kudu and edit a couple of files. Note that this is *not* the same as D:\home\data\jobs\triggered . The data folder is instead for logs and other junk.

Anyway, once inside the publish folder of your WebJob, we are looking for a file called “settings.job”. The contents of which will look a bit like this :

{"schedule":"0 * * * * *"}

This should obviously look familiar, it’s our CRON syntax from before! This is actually how Azure stores our CRON setting when we initially upload our zip. And what do you know, editing this file will update our job to run on the updated schedule! Perfect.

But what about our run away WebJob that we actually wanted to stop? Well unfortunately it’s a bit of a hack but it works. We need to set the contents of our settings.job file to look like :

{"schedule":"0 0 5 31 2 ?"}

What is this doing? It's saying please only run our job at 5AM on the 31st of February. The top of the class will note there is no such thing as the 31st of February, so the WebJob will actually never run. As dirty as it feels, it's the only way I've found to stop a scheduled WebJob from running (except of course just deleting the entire WebJob itself).

Uploading A WebJob With A Schedule As Part Of A Website Deploy

Sorry for the butchering of the title on this one, but you get the gist. If we are uploading our WebJob as part of our Website deploy, how do we upload it with our schedule already defined? We obviously don’t want to have to go through the portal or Kudu to edit the schedule every time.

A quick note first. You should already have done Part 3 of this series on WebJobs in .NET Core that explains how we can upload a WebJob as part of an Azure Website deploy. If you haven’t already, please read that post!

Back to deploying our scheduled job. All we do is add a settings.job file to the root of our WebJob project. Remember to set the file to “Copy If Newer” to ensure the file is copied when we publish.

The contents of this file will follow the same format as before, e.g. if we want to run our job once a minute :

{"schedule":"0 * * * * *"}

Now importantly, remember from Part 3 when we wrote a PostPublish script to publish our WebJob to the App_Data folder of our Website? We had to edit the csproj of our Website. It looked a bit like this :

<Target Name="PostpublishScript" AfterTargets="Publish">
    <Exec Command="dotnet publish ..\WebJobExamples.WebJobExample\ -o $(PublishDir)App_Data\Jobs\continuous\WebJobExample" />
</Target>

Now we actually need to change the folder for our scheduled WebJob to instead be pushed into our “triggered” folder. So the PostPublish script would look like :

<Target Name="PostpublishScript" AfterTargets="Publish">
    <Exec Command="dotnet publish ..\WebJobExamples.WebJobExample\ -o $(PublishDir)App_Data\Jobs\triggered\WebJobExample" />
</Target>

Again I want to note that “triggered” in this context is only referring to jobs that are triggered via a schedule. Jobs that are triggered by queue messages, blob creation etc, are still continuous jobs. The job itself runs continuously, it’s just that a particular “method” in the program will trigger if a queue message comes in etc.

If you publish your website now, you’ll also deploy your WebJob along with it. Easy!

What’s Next?

So far, all our WebJobs have been simple .NET Core console applications that are being run within the WebJob system. The code itself actually doesn’t know that it’s a WebJob at all! But if you’ve ever created WebJobs using the full .NET Framework, you know there are libraries for WebJobs that allow you to trigger WebJobs based on queue messages, blobs, timers etc, all from within code. Until recently, these libraries hadn’t been ported to .NET Core, but that’s now changed! Jump right into it now!

Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this week’s blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

This is part 3 of a series on getting up and running with Azure WebJobs in .NET Core. If you are just joining us, it’s highly recommended you start back on Part 1 as there’s probably some pretty important stuff you’ve missed on the way.

Azure WebJobs In .NET Core

Part 1 – Initial Setup/Zip Deploy
Part 2 – App Configuration and Dependency Injection
Part 3 – Deploying Within A Web Project and Publish Profiles
Part 4 – Scheduled WebJobs
Part 5 – Azure WebJobs SDK

Are You A Visual Learner?

If you are a visual learner (or you are here actually looking to pass your Azure exams), there is a great video course that I highly recommend from Scott Duffy that covers many Azure serverless functions on the way to actually passing the Azure Developer exam. While we cover WebJobs pretty well here, I still recommend checking it out if you are interested in a more in-depth view on Azure functions as a whole.
View Azure Developers Course Here

Deploying Within A Web Project

So far in this series we’ve been packaging up our WebJob as a standalone service and deploying it as a zip file. There are two problems with this approach.

  • We are manually having to upload a zip to the Azure portal
  • 9 times out of 10, we just want to package the WebJob with the actual Website, and deploy them together to Azure

Solving the second issue actually solves the first too. Typically we will have our website deployment process all set up, whether that’s manual or via a build pipeline. If we can just package up our WebJob so it’s actually “part” of the website, then we don’t have to do anything special to deploy our WebJob to Azure.

If you’ve ever created an Azure WebJob in .NET Framework, you know in that ecosystem, you can just right click your web project and select “Add Existing Project As WebJob” and be done with it. Something like this :

Well, things aren’t quite that easy in the .NET Core world. Although I wouldn’t rule out this sort of integration into Visual Studio in the future, right now things are a little more difficult.

What we actually want to do is create a publish step that goes and builds our WebJob and places it into a particular folder in our Web project. Something we haven’t jumped into yet is that WebJobs are simply console apps that live inside the “App_Data” folder of our web project. Basically it’s a convention that we can make use of to “include” our WebJobs in the website deployment process.

Let’s say we have a solution that has a website, and a web job within it.

What we need to do is edit the csproj of our website project. We need to add something a little like the following anywhere within the <project> node :

<Target Name="PostpublishScript" AfterTargets="Publish">
    <Exec Command="dotnet publish ..\WebJobExamples.WebJobExample\ -o $(ProjectDir)$(PublishDir)App_Data\Jobs\continuous\WebJobExample" />
</Target>

What we are doing is saying when the web project is published, *after* publishing, also run the following command.

Our command is a dotnet publish command that calls publish on our WebJob, and says to output it (signified by the -o flag) to a folder in the web project’s output directory called “App_Data\Jobs\continuous\WebJobExample”.

Now a quick note on that output path. As mentioned earlier, WebJobs basically just live within the App_Data folder within a website. When we publish a website up to the cloud, Azure basically goes hunting inside these folders looking for webjobs to run. We don’t have to manually specify them in the portal.

A second thing to note is that while we are putting it in the “continuous” folder, you can also put jobs inside the “triggered” folder which are more for scheduled jobs. Don’t worry too much about this for now as we will be covering it in a later post, but it’s something to keep in mind.
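To make the convention concrete, here’s a throwaway shell sketch of the folder layout Azure goes hunting through, with “continuous” alongside the “triggered” folder we’ll use later (project names per this series, recreated here purely to illustrate) :

```shell
# Recreate the publish output layout Azure scans for WebJobs.
mkdir -p publish/App_Data/Jobs/continuous/WebJobExample
mkdir -p publish/App_Data/Jobs/triggered/WebJobExample
find publish -type d | sort
```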

Now on our *Website* project, we run a publish command : dotnet publish -c Release. We can then head over to our website output directory and check that our WebJob has been published into the App_Data folder.

At this point, you can deploy your website publish package to Azure however you like. I don’t want to get too in depth on how to deploy the website specifically because it’s less about the web job, and more about how you want your deploy pipeline to work. However below I’ll talk about a quick and easy way to get up and running if you need something to just play around with.

Deploying Your Website And WebJob With Publish Profiles

I have to say that this is by no means some enterprise level deployment pipeline. It’s just a quick and easy way to validate your WebJobs on Azure. If you are a one man band deploying a hobby project, this could well suit your needs if you aren’t deploying all that often. Let’s get going!

For reasons that I haven’t been able to work out yet, the csproj variables are totally different when publishing from Visual Studio rather than the command line. So we actually need to edit the .csproj of our web project a little before we start. Instead of :

<Exec Command="dotnet publish ..\WebJobExamples.WebJobExample\ -o $(ProjectDir)$(PublishDir)App_Data\Jobs\continuous\WebJobExample" />

We want :

<Exec Command="dotnet publish ..\WebJobExamples.WebJobExample\ -o $(PublishDir)App_Data\Jobs\continuous\WebJobExample" />

So we remove the $(ProjectDir) variable. The reason is that when we publish from the command line, the $(PublishDir) variable is relative, whereas when we publish from Visual Studio it’s an absolute path. I tried working out how to do it within MSBuild with conditional builds etc. But frankly, you are typically only ever going to build one way or the other, so pick whichever one works for you.
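That said, if you do want both to work, one hedged sketch is to key off the BuildingInsideVisualStudio property, which MSBuild sets to true when Visual Studio drives the build. This isn’t something I’ve battle-tested, just one way it could look :

```xml
<Target Name="PostpublishScript" AfterTargets="Publish">
  <!-- Command-line publish: PublishDir is relative, so prefix ProjectDir -->
  <Exec Condition="'$(BuildingInsideVisualStudio)' != 'true'"
        Command="dotnet publish ..\WebJobExamples.WebJobExample\ -o $(ProjectDir)$(PublishDir)App_Data\Jobs\continuous\WebJobExample" />
  <!-- Visual Studio publish: PublishDir is already an absolute path -->
  <Exec Condition="'$(BuildingInsideVisualStudio)' == 'true'"
        Command="dotnet publish ..\WebJobExamples.WebJobExample\ -o $(PublishDir)App_Data\Jobs\continuous\WebJobExample" />
</Target>
```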

If you head to your Azure Web App, on the overview screen, you should have a bar running along the top. You want to select “Get Publish Profile” :

This will download a .publishsettings file to your local machine. We are going to use this to deploy our site shortly.

Inside Visual Studio, right-click your website project and select the option to Publish. This should pop up a box where you can select how you want to publish your website. We will be clicking the button in the bottom left-hand corner to “Import Profile”. Go ahead and click it, and select the .publishsettings file you just downloaded.

Immediately Visual Studio will kick into gear and push your website (Along with your WebJob) into Azure.

Once completed, we can check that our website has been updated (Visual Studio should immediately open a browser window with your new website), but on top of that we can validate our WebJob has been updated too. If we open up the Azure Portal for our Web App, and scroll down to the WebJob section, we should see the following :

Great! We managed to publish our WebJob up to Azure, but do it in a way that it just goes seamlessly along with our website too. As I mentioned earlier, this isn’t some high level stuff that you want to be doing on a daily basis for large projects, but it works for a solo developer or someone just trying to get something into Azure as painlessly as possible.

Verifying WebJob Files With Kudu

As a tiny little side note, I wanted to point out something if you ever needed to hastily change a setting on a WebJob on the fly, or you needed to validate that the WebJob files were actually deployed properly. The way to do this is using “Kudu”. The name of this has sort of changed but it’s all the same thing.

Inside your Azure Web App, select “Advanced Tools” from the side menu :

Notice how the icon is sort of a “K”… Like K for Kudu… Sorta.

Anyway, once inside you want to navigate to Debug Console -> CMD. From here you can navigate through the files that actually make up your website. Most notably you want to head along to /site/wwwroot/App_Data/ where you will find your WebJob files. You can add/remove files on the fly, or even edit your appsettings.json files as a quick and dirty fix for bad configuration.

What’s Next?

So far all of our WebJobs have printed out “Hello World!” on repeat. But we can actually “Schedule” these jobs to run every minute, hour, day, or some combination of the lot. Best of all, we can do all of this with a single configuration file, without the need to write more C# code! You can check out Part 4 right here!
