The most popular method of managing Azure resources programmatically is Azure Resource Manager templates – or ARM templates for short. Much like Terraform, it's a desired-state style tool: you define what you need, and Azure works out the actual details of how to make it so (for the most part, anyway!).

Over the years, I’ve run into a few gotchas with these templates that I seem to forget and run into time and time again. Things that on the surface should be simple, but are actually confusing as hell. Often I end up googling the same issue every 3 months when I run into it again. So rather than do a post for each of these, I thought, why not combine them all into a sort of cheatsheet. If I’m having to constantly look these up, maybe you are too!

For now, I’ve named this “3 annoying gotchas”, but I’m likely to come back and edit this, so maybe by the time you read this the count will be a little higher!

Let’s get started!

You Need To “Concat” A Database Connection String

In my ARM templates, I typically spin up an Azure SQL Database and a Keyvault instance. I make the Keyvault instance depend on the SQL Database, and immediately take the connection string and push it into Keyvault. I do this so that no human ever sees the connection string; it’s only used inside the ARM template and goes straight into Keyvault.

But there’s an annoying gotcha of course! How do you get the connection string of an Azure SQL database in an ARM Template? You can’t! (Really, you can’t!). Instead you need to use string concatenation to build your connection string for storage.

As an example (And note, this is heavily edited, but should give you some idea) :

{
    "parameters" : {
        "sqlPassword" : {
            "type" : "securestring"
        }
    }, 
    ....
    "variables": {
        "sqlServerName": "MySQLServerName", 
        "sqlDbName" : "MySqlDatabase"
    }, 
    ....
    {
      "type": "Microsoft.KeyVault/vaults/secrets",
      "name": "MyVault/SQLConnectionString",
      "apiVersion": "2018-02-14",
      "location": "[resourceGroup().location]",
      "properties": {
        "value": "[concat('Server=tcp:',reference(variables('sqlserverName')).fullyQualifiedDomainName,',1433;Initial Catalog=',variables('sqlDbName'),';Persist Security Info=False;User ID=',reference(variables('sqlserverName')).administratorLogin,';Password=',parameters('sqlPassword'),';Connection Timeout=30;')]"
      }
    },
}

Or if we pull out just the part that is creating our SQL Connection String :

[concat('Server=tcp:',reference(variables('sqlserverName')).fullyQualifiedDomainName,',1433;Initial Catalog=',variables('sqlDbName'),';Persist Security Info=False;User ID=',reference(variables('sqlserverName')).administratorLogin,';Password=',parameters('sqlPassword'),';Connection Timeout=30;')]

So why do we have to go to all of this hassle just to get a connection string? There are actually two reasons :

  • A connection string may have additional configuration, such as a timeout value. So it’s usually better that you build the connection string exactly how you need it.
  • But the most important reason is that a SQL password, once set in Azure, is a black box. There is no retrieving it; you can only reset it. So from the ARM template’s point of view, it can’t ask for the connection string of a SQL database because it would never be able to get the password.

On that last note, it’s why when you try and grab your connection string from the Azure portal, it comes with a {your_password} placeholder where your password should go.

Connecting Web Apps/Functions To Application Insights Only Requires The Instrumentation Key

I talked about this a little in a previous post around connecting Azure Functions to App Insights. I think it could be a holdover from the early days of App Insights, when there wasn’t as much magic going on and you really did have to do a bit of work to wire up web applications to App Insights. Now, however, it’s as simple as adding the Instrumentation Key as an app setting and calling it a day.

For example :

{
  "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
  "value": "[reference(resourceId('Microsoft.Insights/components', variables('AppInsightsName')), '2014-04-01').InstrumentationKey]"
}

Also notice in this case, we can get the entire instrumentation key via the ARM template. I want to point this out because I’ve seen people manually create the Application Insights instance, then loop back around and run the ARM template with the key as an input parameter. You don’t have to do this! You can grab it right there in the template.

And again, as long as you use the app setting name of “APPINSIGHTS_INSTRUMENTATIONKEY” on either your Web Application or Azure Function, you are good to go!

Parameters File Cannot Contain Template Expressions

There are many times where you read a tutorial that uses a parameters file with a keyvault reference.

As an example, consider the following parameters file :

"parameters": {
    "serviceBusName": {
        "reference": {
            "keyVault": {
                "id": "/subscriptions/GUID/resourceGroups/KeyVaultRG/providers/Microsoft.KeyVault/vaults/KeyVault"
            },
        "secretName": "serviceBusName"
        }
    }
}

The idea behind this is that for the serviceBusName parameter, we should go to Keyvault to find the value. However, there’s something very wrong with this: we have a hardcoded subscription and resource group name. It makes far more sense for these to be dynamic, because between Dev, Test and Prod we may have different subscriptions and/or resource groups, right?

So, you may think this could be solved like so :

"parameters": {
    "serviceBusName": {
        "reference": {
            "keyVault": {
                "id": "[resourceId(subscription().subscriptionId, resourcegroup().name, 'Microsoft.KeyVault/vaults', parameters('KeyVaultName'))])"
            },
        "secretName": "serviceBusName"
        }
    }
}

But unfortunately :

resourceId function cannot be used while referencing parameters

You cannot use the resourceId function, or really any template expression (not even concat), inside a parameters file. It’s static text only. What that means, frankly, is that referencing Keyvault from a parameters file is pointless. In no situation have I ever wanted a hardcoded subscription ID in an ARM template; it just wouldn’t happen.

Microsoft’s solution for this is to push for the use of nested templates. In my personal view this adds a tonne of complexity, but it’s an option. What I generally end up doing is avoiding Keyvault secrets as parameters altogether. Usually my C# application is talking to Keyvault anyway, so there is no need for additional parameters like the above.

In any case, the actual point of this section is that a parameters file cannot be dynamic without using nested templates. Whether that be for Keyvault references or something else, you’ll have to find a way around using dynamic parameters.


I’ve recently been doing battle trying to get Azure Application Insights playing nicely with an Azure Function. Because they are from the same family I thought there wouldn’t be an issue, but Microsoft’s lack of documentation is really letting down the team here. This will be a short and sweet post that hopefully clears some things up.

Adding Application Insights

So the first thing that is different about using Application Insights with an Azure Function is that you don’t need any additional nuget packages. Under the hood, the packages that a function relies on out of the box themselves rely on the application insights package. So theoretically, everything is set up for you.

The only thing you actually need to do is set an app setting of “APPINSIGHTS_INSTRUMENTATIONKEY” somewhere in your application.

For a function hosted on Azure, this is easy, you can do this on the configuration tab of your function and add your instrumentation key there.

Locally, you will be using either local.settings.json or appsettings.json depending on how your function is set up. Generally, either will work but it mostly depends on your individual project how you are managing settings locally.

Again, you don’t need to do anything to read this key, you just need to have it there and automagically, the function will wire everything up.

Now the other thing to note is that in the Azure Portal, on a Function, you’ll have an option to “Enable Application Insights” if you haven’t already.

But all that button actually does is add the instrumentation key to your app settings, just like we did above. It doesn’t do any fancy behind the scenes wiring up – it’s literally just a shortcut for adding that one setting.

Configuring Application Insights For Azure Functions

So the next thing I found was that you are supposedly able to edit the host.json file of your function and add in settings for App Insights. But what I found is that there are a tonne of settings that aren’t documented (yet?). The official documentation is located here : https://docs.microsoft.com/en-us/azure/azure-functions/functions-host-json. It looks good, but doesn’t seem to have quite as many options for Application Insights as, say, using it in a regular C# app.

So I actually had to dig into the source code. That took me here : https://github.com/Azure/azure-webjobs-sdk/blob/v3.0.26/src/Microsoft.Azure.WebJobs.Logging.ApplicationInsights/ApplicationInsightsLoggerOptions.cs. These are the actual settings that you can configure, some of which you cannot find documentation for but can make some educated guesses on what they do.

For me, I needed this :

"dependencyTrackingOptions": {
    "enableSqlCommandTextInstrumentation" :  true
}

This enables Application Insights to not only capture that a SQL command took place, but capture the actual text of the SQL so that I can debug any slow queries I see happening inside the application.

Again, I couldn’t find any documentation on setting this variable up, except the original source code. Yay open source!

If It Doesn’t Work, Chances Are There Is A Bug

The other thing I noticed about Application Insights in general is that there are a tonne of bugs that hang around for much longer than you might expect. For example, when I first added my app insights key to my function, I wasn’t collecting any information about SQL queries coming from the app. Asking around, people just assumed maybe you had to add another nuget package for that, or that I had set something up wrong.

In fact, there is a bug that has been around for 3-6 months where certain versions of Entity Framework suddenly don’t work with App Insights. Insights would capture the correct request, but it wouldn’t log any SQL dependency telemetry with any version of EF Core above 3.1.4.

https://stackoverflow.com/questions/63053334/enable-sql-dependency-in-application-insights-on-azure-functions-with-ef-core
https://github.com/microsoft/ApplicationInsights-dotnet/issues/2032
https://github.com/Azure/Azure-Functions/issues/1613

How does this help you? Well, it probably doesn’t unless you are specifically missing SQL queries from your App Insights. But I just want to point out that by default, out of the box, adding Application Insights to an Azure Function should capture *everything*. You do not have to do anything extra. If you are not capturing something (for example, I saw another bug where it wasn’t capturing HttpClient requests correctly), then almost certainly it will be the mishmash of versions of something you are using causing the problem.


Ever since .NET Framework 1, the ability for .NET console apps to parse command line flags, and to actually provide helpful feedback to the user on the availability of such flags, has been severely lacking.

What do I mean by that? Well, when you create a new console application in C#/.NET/.NET Core, your code is given a simple array of string arguments. These won’t be filtered in any way and will basically just be given to you wholesale. From there, it’s up to you to create your own boilerplate to parse them out, run any validation you need to, *then* finally get on to actually creating the logic for your app :

static int Main(string[] args)
{
    //Boilerplate for parsing the args array goes here
}

And it’s not like, out of the box, someone running the console application can get helpful feedback on the flags either. Compare that to a simple “dotnet” command: running it without any flags gives you at least some helpful information on possible options to get things up and running.

C:\Users\wadeg> dotnet

Usage: dotnet [options]
Usage: dotnet [path-to-application]

Options:
  -h|--help         Display help.
  --info            Display .NET information.
  --list-sdks       Display the installed SDKs.
  --list-runtimes   Display the installed runtimes.

path-to-application:
  The path to an application .dll file to execute.

But all that’s about to change with Microsoft’s new library called System.CommandLine!

Creating A Simple Console App The Old Fashioned Way

Before we go digging into the new goodies. Let’s take a look at how we might implement a simple console application parsing the string args ourselves.

Here’s a console application I created earlier that simply greets a user with their given name, title, and will change the greeting depending on if we pass in a flag saying it’s the evening.

static int Main(string[] args)
{
    string name = string.Empty;
    string title = string.Empty;
    bool isEvening = false;

    for (int i = 0; i < args.Length; i++)
    {
        var arg = args[i].ToLower();
        if (arg == "--name")
        {
            name = args[i + 1];
        }

        if (arg == "--title")
        {
            title = args[i + 1];
        }

        if (arg == "--isevening")
        {
            isEvening = true;
        }
    }

    if (string.IsNullOrEmpty(name))
    {
        Console.WriteLine("--name is a required flag");
        return -1;
    }

    var greeting = isEvening ? "Good evening " : "Good day ";
    greeting += string.IsNullOrEmpty(title) ? string.Empty : title + " ";
    greeting += name;
    Console.WriteLine(greeting);

    return 0;
}

The code is actually quite simple, but let’s take a look at it bit by bit.

I’ve had to create a loop over the args to work out which flags were actually passed in by the user, and which ones weren’t. Because the default args array doesn’t distinguish between what’s a flag and what’s a passed-in parameter value, this is actually quite messy.

I’ve also had to write my own little validator for the “--name” flag because I want this to be mandatory. But there’s a small problem with this..

How can a user know that the name flag is mandatory other than trial and error? Really they can’t. They would likely run the application once, have it fail, and then add name to try again. And for our other flags, how does a user know that these are even an option? We would have to rely on us writing good documentation and hope that the user reads it before running (Very unlikely these days!).

There really isn’t any inbuilt help with this application. We could try and implement something where, if a user passed in a --help flag, we would return some static text to help them work out how everything runs, but this isn’t self documenting and would need to be updated each time a flag is updated, removed or added.

The reality is that in most cases, this sort of helpful documentation is not created. And in some ways, it’s relegated C# console applications to being a sort of quick and dirty tool you build for other power users, but not for the general everyday developer.

Adding System.CommandLine

System.CommandLine is actually in beta right now. To install the current beta in your application you would need to run the following from your Package Manager Console

Install-Package System.CommandLine -Version 2.0.0-beta1.20574.7

Or alternatively if you’re trying to view it via the Nuget Browser in Visual Studio, ensure you have “Include prerelease” ticked.

Of course by the time you are reading this, it may have just been released and you can ignore all that hassle and just install it like you would any other Nuget package!

I added the nuget package into my small little greeter application, and rejigged the code like so :

static int Main(string[] args)
{
    var nameOption = new Option(
            "--name",
            description: "The person's name we are greeting"
        );
    nameOption.IsRequired = true;

    var rootCommand = new RootCommand
    {
        nameOption, 
        new Option(
            "--title",
            description: "The official title of the person we are greeting"
        ),
        new Option(
            "--isevening",
            description: "Is it evening?"
        )
    };
    rootCommand.Description = "A simple app to greet visitors";

    rootCommand.Handler = CommandHandler.Create<string, string, bool>((name, title, isEvening) =>
    {
        var greeting = isEvening ? "Good evening " : "Good day ";
        greeting += string.IsNullOrEmpty(title) ? string.Empty : title + " ";
        greeting += name;
        Console.WriteLine(greeting);
    });

    return rootCommand.Invoke(args);
}

Let’s work through this.

Unfortunately, for some reason the ability to make an option “required” cannot be done through the Option constructor, hence why our first option for --name has been set up outside our root command. But again, your mileage may vary, as this may be added before the final release (and it would make sense; marking things as mandatory is probably going to be a pretty common requirement).

For the general setup of our flags in code, it’s actually pretty simple. We say what the flag name is, a description, and we can even give it a type right off the bat so that it will be parsed before getting to our code.
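
For example, here’s a minimal hedged sketch of a typed option (the --repeat flag is purely hypothetical, invented for illustration). Because the option is declared as an Option<int>, the value arrives in the handler already parsed as an int, and a non-numeric value produces a parse error before our code ever runs :

var repeatOption = new Option<int>(
        "--repeat",
        description: "How many times to print the greeting"
    );

var demoCommand = new RootCommand { repeatOption };
demoCommand.Handler = CommandHandler.Create<int>((repeat) =>
{
    //The framework has already converted the raw string argument into an int for us
    for (int i = 0; i < repeat; i++)
    {
        Console.WriteLine("Hello!");
    }
});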

We are also able to add a description to our application which I’ll show shortly why this is important.

And finally, we can add a handler to our command. The logic within this handler is exactly the same as our previous application, but everything has been set up for us and passed in.

Before we run everything properly, what happens if we just run the application with absolutely no flags passed in?

Option '--name' is required.

CommandLineExample:
  A simple app to greet visitors

Usage:
  CommandLineExample [options]

Options:
  --name <name> (REQUIRED)    The person's name we are greeting
  --title <title>             The official title of the person we are greeting
  --isevening                 Is it evening?
  --version                   Show version information
  -?, -h, --help              Show help and usage information

Wow! Not only has our required field thrown up an error, but we’ve even been given the full gamut of flags available to us. We’ve got our application description, each flag, and each flags description of what it’s intended to do. If we run our application with the –help flag, we would see something similar too!

Of course there’s only one thing left to do

CommandLineExample.exe --name Wade
Good day Wade

Pretty powerful stuff! I can absolutely see this becoming part of the standard .NET Core console application template. There would almost be no reason not to use it from now on. At the very least, I could see it becoming a checkbox when you create a console application inside Visual Studio asking if you want “Advanced Arguments Management” or similar. It really is that good!


An XML External Entity vulnerability (or XXE for short) is a type of vulnerability that exploits weaknesses (or, more accurately, features) in how external entities are loaded when parsing XML in code. Of course, OWASP has a great guide on it here, but in its most basic form, we can trick code into loading an external resource (either a file on the target machine, or even a remote page on the same network) and giving us that information in some way.

For example, consider an ecommerce application that allows you to update a product description by submitting the following XML to the server :

<product id="1">
    <description>What a great product!</description>
</product>

Then consider the following payload :

<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
<product id="1">
    <description>&xxe;</description>
</product>

That may look confusing, but essentially what we are doing is creating an internal entity (think of it like a variable) called “xxe”, and storing the contents of the local password file (on Linux) in it. Then we are setting the product description to that entity. Once completed, our product description will leak the contents of the system’s password file.

It’s not just local files either, if a machine has access to internal only websites, then this could also be leveraged :

<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "http://someinternalwebsite"> ]>
<product id="1">
    <description>&xxe;</description>
</product>

Not many people realize that many XML parsers have the “feature” of reaching out, loading external entities and pulling them into the XML, but very clearly it’s a huge security risk. So much so that XXE attacks were ranked number 4 in OWASP’s 2017 Top 10 web application security list. Ouch!

Testing XXE In .NET Core

So it got me thinking: for .NET Core, how could I test under what circumstances XXE can actually occur? After all, like SQL injection, I always hear people say “Well, that’s not relevant anymore, the framework protects you”. But does it really? And even if it does by default, how easy is it to shoot yourself in the foot?

My first step was to set up a testing rig to try out various pieces of code and see which were vulnerable. It was actually rather simple. First I created a static class that allowed me to pass in a method that parses XML, and then I could validate whether that method was safe or not.

public static class AssertXXE
{
    //external.txt on disk contains the marker text "XXEVULNERABLE". If that text ends up in the
    //parsed output, the parser resolved the external entity and is therefore vulnerable.
    private static string _xml = "<!DOCTYPE foo [<!ENTITY xxe SYSTEM \"_EXTERNAL_FILE_\">]> <product id=\"1\"> <description>&xxe;</description></product>";

    public static void IsXMLParserSafe(Func<string, string> xmlParser, bool expectedToBeSafe)
    {
        var externalFilePath = Path.GetFullPath("external.txt");
        var xml = _xml.Replace("_EXTERNAL_FILE_", externalFilePath);
        var parsedXml = xmlParser(xml);

        var containsXXE = parsedXml.Contains("XXEVULNERABLE");

        Assert.AreEqual(containsXXE, !expectedToBeSafe);
    }
}

You may ask why I pass in a boolean as to whether something is safe or not. I debated this. When I found an unsafe way of parsing XML, I didn’t want the test to “fail” per se, because it became hard to figure out which tests were failing because they *should* fail, and which were failing because I made a simple syntax error. This way, once I found a vulnerable way of loading XML, I could simply mark that in future I expect it to always be unsafe.

Onto the actual tests themselves, they were pretty simple like so :

[Test]
public void XmlDocument_WithDefaults_Safe()
{
    AssertXXE.IsXMLParserSafe((string xml) =>
    {
        var xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(xml);
        return xmlDocument.InnerText;
    }, true);
}

And so on. But onto the actual results…

Testing XmlDocument

The XmlDocument type in C# is “mostly” safe. Talking strictly .NET Framework 4.5.2 onwards (including .NET Core), the default setup of an XmlDocument is safe. So for example, this is not a vulnerable test :

[Test]
public void XmlDocument_WithDefaults_Safe()
{
    AssertXXE.IsXMLParserSafe((string xml) =>
    {
        var xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(xml);
        return xmlDocument.InnerText;
    }, true);
}

However, providing an XmlResolver to your XmlDocument makes it eager to please, and it will happily download external entities. So this, for example, is unsafe :

var xmlDocument = new XmlDocument();
xmlDocument.XmlResolver = new XmlUrlResolver(); //<-- This!
xmlDocument.LoadXml(xml);
return xmlDocument.InnerText;
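
Wrapped in the little test harness from earlier, that unsafe setup might look something like this (just a sketch, with the test marked as expected-to-be-unsafe) :

[Test]
public void XmlDocument_WithUrlResolver_Unsafe()
{
    AssertXXE.IsXMLParserSafe((string xml) =>
    {
        var xmlDocument = new XmlDocument();
        xmlDocument.XmlResolver = new XmlUrlResolver(); //<-- Makes the document resolve external entities
        xmlDocument.LoadXml(xml);
        return xmlDocument.InnerText;
    }, false); //We expect this parser to be unsafe
}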

Remember how I mentioned that .NET Framework 4.5.2 onwards was safe? That’s because from that version, the XmlResolver defaults to null, whereas earlier versions had a default resolver already set by the default XmlDocument constructor.

But for my use case, using XmlDocument in .NET Core with the defaults is not vulnerable to XXE.
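
If you’re on an older framework, or just want to be explicit about it, a minimal hedged sketch of hardening XmlDocument yourself is to null out the resolver before loading anything :

var xmlDocument = new XmlDocument();
xmlDocument.XmlResolver = null; //Never resolve external entities, regardless of what the framework defaults to
xmlDocument.LoadXml(xml);
return xmlDocument.InnerText;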

Testing XmlReader

Next I took a look at XmlReader. Generally speaking, you can use an XmlReader to read a document, but then pass it on to a second class for any manipulation. So what I wanted to test was: if I was using an XmlReader and passing it to an XmlDocument that was vulnerable, could the reader stop the disaster before it even got to the XmlDocument?

The answer was yes! Setting DtdProcessing to Prohibit would actually throw an exception when parsing the XML, and not allow processing to continue. Prohibit is also the default behaviour, which is great!

XmlReaderSettings settings = new XmlReaderSettings();
settings.DtdProcessing = DtdProcessing.Prohibit;
settings.MaxCharactersFromEntities = 6000;

using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(xml)))
{
    XmlReader reader = XmlReader.Create(stream, settings);

    var xmlDocument = new XmlDocument();
    xmlDocument.XmlResolver = new XmlUrlResolver();
    xmlDocument.Load(reader);
    return xmlDocument.InnerText;
}

This also held true if I set DtdProcessing to ignore like so :

settings.DtdProcessing = DtdProcessing.Ignore;

Although I would get the following exception because instead of simply stopping parsing, it would still try and parse the document, but ignore all entity declarations.

Reference to undeclared entity 'xxe'.

Interestingly, to make XmlReader unsafe I had to do two things. First, I had to set DtdProcessing to “Parse” *and* I had to set a UrlResolver up :

XmlReaderSettings settings = new XmlReaderSettings();
settings.DtdProcessing = DtdProcessing.Parse;
settings.XmlResolver = new XmlUrlResolver();

Without these settings on the reader, even if the resulting stream was passed to an XmlDocument with a Resolver setup, it was still not vulnerable.

Getting Involved

For my particular use cases, what I found was that the way in which I use XmlDocument in .NET Core was safe. I never manually set an XmlResolver up, so I was good to go. But maybe you’re using a different way to parse XML? Maybe you’re even using a third party library to work with XML?

For this, I’ve thrown up my code that I used to test my scenarios on Github. You can access it here : https://github.com/mindingdata/XXEDotNetCore

If you, or the company you work for, parse XML a different way, I really encourage you to add a PR on whether it is safe or unsafe for XXE. Again, this harks back to what I said earlier: with so many of these OWASP top 10 security issues, developers like to say “Oh, that’s an old thing, it’s not a problem anymore”. And maybe for the majority of use cases that’s true, but it really doesn’t hurt to rig up your code and actually prove that’s the case!


I’ve recently been diving into the new Channel type in .NET Core, and something I’ve noticed time and time again is how much effort goes into making sure the entire type is threadsafe. That is, if two threads are trying to act on the same object, they are synchronized one after the other instead of just being a free for all. In Microsoft’s case with Channel<T>, they use a combination of the lock keyword, async tasks, and a “queue” to obtain locks.

It almost beggars belief that at the end of the day, to call something “threadsafe”, you have to write hundreds of lines of code that don’t actually provide any function except trying to make sure you don’t shoot yourself in the foot in a simple multithreaded scenario. And then there’s the fact that if you get it wrong, you probably won’t know until weird errors start appearing in your production logs that you can never seem to reproduce in development, because you haven’t been able to hit the race condition lottery.

And then I came across the PostSharp Threading library.

PostSharp Multithreading Library

To be honest, they had me from the moment I read this beauty of a tag line :

Write verifiable thread-safe code in .NET without your brain exploding with PostSharp Threading

Sounds good to me!

PostSharp Threading is actually part of an entire suite of libraries from PostSharp that work on removing common boilerplate scenarios that I’m almost certain every .NET Developer has run into before. They have solutions for caching, logging, MVVM, and of course, threading. For today, I’m just going to focus on the threading library as that’s been boggling my mind for the past couple of weeks. Let’s jump right in!

Using Locks To Synchronize Multithreaded Data Access In C#

I want to give a dead simple way in which you can wrap yourself in knots with multithreading, which both the compiler and the runtime may not make you aware of at first (if ever). Take the following example code :

class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = new MyClass();
        List<Task> tasks = new List<Task>();

        for(int i=0; i < 100; i++)
        {
            tasks.Add(
                Task.Run(() =>
                {
                    for (int x = 0; x < 100; x++)
                    {
                        myClass.AddMyValue();
                    }
                })
            );
        }

        Task.WaitAll(tasks.ToArray());

        Console.WriteLine(myClass.GetMyValue());
    }
}

class MyClass
{
    private int myValue = 0;

    public void AddMyValue()
    {
        myValue++;
    }

    public int GetMyValue()
    {
        return myValue;
    }
}

Hopefully it’s not too confusing. But let’s talk about some points :

  1. I have a class called “MyClass” that has an integer value, and a method to add 1 to the value.
  2. In my main method, I start 100 threads (!!!) and all these threads do is loop 100 times, adding 1 to the value of myClass.
  3. myClass is shared, so each thread is accessing the same object.
  4. I wait until the threads are all finished.
  5. Then I output the value of myClass.

Any guesses what the output of this program will be? Thinking logically, 100 threads, looping 100 times, we should see the application output 10000. Well I ran this little application 5 times and recorded the results.

6104
8971
9043
9256
8833

Oof, what’s going on here? We have a classic multithreading issue. Two (or more) threads are trying to update the value at the same time; the increment is not atomic, so updates get lost and we end up well short of the total we expect.

So how would we solve this *without* PostSharp threading?

At first it actually seems quite simple, we simply wrap our increment in a lock like so :

public void AddMyValue()
{
    lock (this)
    {
        myValue++;
    }
}

If we run our application now..

10000

Perfect!

But there are some downsides to this, and both are issues with maintainability.

  1. What if we have multiple methods in our class? And multiple classes? We now need to spend an afternoon adding locks to all methods.
  2. What if a new developer comes along, and adds a new method? How do they know that this class is used in multithreaded scenarios requiring locks? Same goes for yourself. You need to remember to wrap *every* method in locks now if you want to keep this class threadsafe! You very easily could have a brain fade moment, not realize that you need to add locks, and then only once things hit production do you start seeing weird problems.
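
As a quick aside before we get to PostSharp: for this one specific case (a single integer counter), the framework does have a lock-free option in Interlocked. This is only a sketch of an alternative though, because it covers simple atomic operations only and does nothing to protect the rest of a class :

public void AddMyValue()
{
    //Atomic increment, no lock required - but this only helps for simple operations like incrementing a counter
    Interlocked.Increment(ref myValue);
}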

Using The PostSharp Synchronized Attribute

So how can PostSharp help us? Well all we do is add the following nuget package :

Install-Package PostSharp.Patterns.Threading

Then we can modify our class like so :

[Synchronized]
class MyClass
{
    private int myValue = 0;

    public void AddMyValue()
    {
        myValue++;
    }

    public int GetMyValue()
    {
        return myValue;
    }
}

Notice all we did was add the [Synchronized] attribute to our class and nothing else. This attribute automatically wraps all our methods in a lock statement, making them threadsafe. If we run our code again, we get the same correct result as using locks, but without having to modify every single method, and without having to remember to add locks when a new method is added to the class.

You might expect some big long spiel here about how all of this works behind the scenes, but seriously.. It. Just. Works.

Using A Reader/Writer Model For Multithreaded Access

In our previous example, we used the Synchronized attribute to wrap all of our class methods in locks. But what about if some of them are actually safe to read concurrently? Take the following code example :

class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = new MyClass();
        List<Task> tasks = new List<Task>();

        for (int i = 0; i < 100; i++)
        {
            tasks.Add(
                Task.Run(() =>
                {
                    for (int x = 0; x < 100; x++)
                    {
                        myClass.AddMyValue();
                    }
                })
            );
        }

        Task.WaitAll(tasks.ToArray());

        //Now kick off 10 threads to read the value (As an example!)
        tasks.Clear();

        for (int i = 0; i < 10; i++)
        {
            tasks.Add(Task.Run(() => { var myValue = myClass.GetMyValue(); }));
        }

        Task.WaitAll(tasks.ToArray());
    }
}

[Synchronized]
class MyClass
{
    private int myValue { get; set; }

    public void AddMyValue()
    {
        myValue++;
    }

    public int GetMyValue()
    {
        //Block the thread by sleeping for 1 second. 
        //This is just to simulate us actually doing work. 
        Thread.Sleep(1000);
        return myValue;
    }
}

I know this is a pretty big example but it should be relatively easy to follow as it’s just an extension of our last example.

In this example, we are incrementing the value in a set of threads, then we kick off 10 readers to read the value back to us. When we run this app, we might expect it to complete in roughly 1 second. After all, the only delay is in our GetMyValue method, where there is a sleep of 1000ms. However, these all run on Tasks, so we might expect them all to complete at roughly the same time.

However, we have also marked the class as Synchronized, and that applies a lock to *all* methods, even ones that we are fairly certain have no issue being called from multiple threads. In our example, there is no danger in allowing GetMyValue() to run across multiple threads at the same time. This is quite commonly referred to as a Reader/Writer problem, and is generally solved by a “Reader/Writer Lock”.

The concept of a Reader/Writer lock can be simplified to the following :

  1. We will allow any number of readers concurrent access to read methods without blocking each other.
  2. A writer requires an exclusive lock (including blocking readers); once the writer is completed, either all readers or another single writer can gain access to the object.

This works perfectly for us, because at the end of our application we want to allow all readers to access the value at once without blocking each other. So how can we achieve that? Actually, it’s pretty simple!

[ReaderWriterSynchronized]
class MyClass
{
    private int myValue { get; set; }

    [Writer]
    public void AddMyValue()
    {
        myValue++;
    }

    [Reader]
    public int GetMyValue()
    {
        //Block the thread by sleeping for 1 second. 
        //This is just to simulate us actually doing work. 
        Thread.Sleep(1000);
        return myValue;
    }
}

We change our Synchronized attribute to a “ReaderWriterSynchronized”, we then go through and we mark each method noting whether it is a writer (So requires exclusive access), or a reader (Allows concurrent access).

Running our application again, we can now see it completes in 1 second as opposed to 10 as it’s now allowing GetMyValue() to be run concurrently across threads. Perfect!
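
For comparison, here’s a rough hedged sketch of what the manual version of this class might look like using the framework’s ReaderWriterLockSlim. This is essentially the boilerplate the attributes are writing for us (and remembering to write on every new method) :

class MyClass
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private int myValue;

    public void AddMyValue()
    {
        _lock.EnterWriteLock(); //Exclusive - blocks readers and other writers
        try
        {
            myValue++;
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }

    public int GetMyValue()
    {
        _lock.EnterReadLock(); //Shared - any number of readers at once
        try
        {
            Thread.Sleep(1000);
            return myValue;
        }
        finally
        {
            _lock.ExitReadLock();
        }
    }
}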

Solving WPF/Winform UI Thread Updating Issues

I almost exclusively work with web applications these days, but I can still remember the days of trying to do multithreading on both Winform and WPF applications. If you’ve ever tried it, how often have you run into the following exception :

System.InvalidOperationException: Cross-thread operation not valid: Control ‘labelStatus’ accessed from a thread other than the thread it was created on.

It can be from something as simple as so in a Winform App :

private void buttonUpdate_Click(object sender, EventArgs e)
{
    Task.Run(() => UpdateStatus("Update"));
}

private void UpdateStatus(string text)
{
    try
    {
        labelStatus.Text = text;
    }catch(Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}

Note that the whole try/catch with a MessageBox is just so that the exception is actually shown without the Task swallowing the exception. Otherwise in some cases we may not even see the exception at all, instead it just silently fails and we don’t see the label text update and wonder what the heck is going on.

The issue is quite simple. In both WinForms and WPF, controls can only be updated from the “UI thread”. So any background thread (whether a thread, task or background worker) needs to negotiate the update back onto the main UI thread. For WinForms, we can use delegates with Invoke, and for WPF/XAML, we have to use the Dispatcher class. But both require us to write an ungodly amount of code just to do something as simple as updating a label.
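
To make that concrete, a hedged sketch of the manual WinForms version (using the same labelStatus control from the example above) looks something like this :

private void UpdateStatus(string text)
{
    if (labelStatus.InvokeRequired)
    {
        //We're on a background thread, so marshal the update back onto the UI thread
        labelStatus.Invoke(new Action(() => labelStatus.Text = text));
    }
    else
    {
        labelStatus.Text = text;
    }
}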

I would also note that sometimes you see people recommend adding the following line of code somewhere in your application :

CheckForIllegalCrossThreadCalls = false;

This is a terrible idea and you should never do it. This is basically hiding the error from you but the problem of two threads simultaneously trying to update/use a control still exists!

So how does PostSharp resolve this?

[Dispatched]
private void UpdateStatus(string text)
{
    try
    {
        labelStatus.Text = text;
    }catch(Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}

With literally *one* attribute, of course. You simply mark which methods need to be run on the UI thread, and that’s it! And let me just say one thing: while yes, at some point in your C# career you need to do a deep dive on delegates/actions and marshalling calls, I really wish I had this early on in my developer life so I didn’t have to spend hours upon hours writing boilerplate code just to update a label or change the color of a textbox!

Who Is This Library For?

I think if your code is kicking off tasks at any point (especially if you are doing background work in a Winform/WPF environment), then giving PostSharp Threading a try is a no brainer. There are actually even more features in the library than I have listed here, including a way to make objects immutable, freeze objects, and even mark objects as unsafe for multithreading just to stop a future developer shooting themselves in the foot.

Give it a try and drop a comment below on how you got on.


This is a sponsored post however all opinions are mine and mine alone. 


This post is part of a series on Channel in .NET. Of course, it’s always better to start at Part 1, but you can skip anywhere you’d like using the links below.

Part 1 – Getting Started
Part 2 – Advanced Channels
Part 3 – Understanding Back Pressure


Up until this point, we have been using what’s called an “Unbounded” Channel. You’ll notice it when we create the channel, we do something like so :

var myChannel = Channel.CreateUnbounded<int>();

But actually, we can do something like :

var myChannel = Channel.CreateBounded<int>(1000);

This isn’t too dissimilar from creating another collection type such as a List or an Array that has a limited capacity. In our example, we’ve created a channel that will hold at most 1000 items. But why limit ourselves? Well.. That’s where Back Pressure comes in.

What Is Back Pressure?

Back Pressure in computing terms (Especially when it comes to messaging/queuing) is the idea that resources (Whether it be things like memory, ram, network capacity or for example an API rate limit on a required external API) are limited. And we should be able to apply “pressure” back up the chain to try and relieve some of that load. At the very least, let others know in the ecosystem that we are under load and we may take some time to process their requests.

Generally speaking, when we talk about back pressure with queues. Almost universally we are talking about a way to tell anyone trying to add more items in the queue that either they simply cannot enqueue any more items, or that they need to back off for a period of time. More rarely, we are talking about queues purely dropping messages once we reach a certain capacity. These cases are rare (Since generally you don’t want messages to simply die), but we do have the option.

So how does that work with .NET channels?

Back Pressure Options For Channels

We actually have a very simple way of adding back pressure when using Channels. The code looks like so :

var channelOptions = new BoundedChannelOptions(5)
{
    FullMode = BoundedChannelFullMode.Wait
};

var myChannel = Channel.CreateBounded<int>(channelOptions);

We can specify the following Full Modes :

Wait
Simply make the caller wait (on their WriteAsync() call) until there is room in the channel.

DropNewest/DropOldest
Either drop the oldest or the newest items in the channel to make room for the item we want to add.

DropWrite
Simply dump the message that we were supposed to write.

There are also two extra pieces of code you should be aware of.

You can call WaitToWriteAsync() :

await myChannel.Writer.WaitToWriteAsync();

This lets us “wait out” the bounded limits of the channel. e.g. While the channel is full, we can call this to simply wait until there is space. This means that even if the DropWrite FullMode is turned on, we can limit the amount of messages we are dropping on the ground by simply waiting until there is capacity.

The other piece of code we should be aware of is :

var success = myChannel.Writer.TryWrite(i);

This allows us to try and write to the channel, and returns whether we were successful or not. It’s important to note that this method is not async. Either we can write to the channel or not; there is no “Well.. you maybe could if you waited a bit longer”.
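
Putting those two together, a hedged sketch of a producer writing to our bounded channel might look roughly like this :

for (int i = 0; i < 100; i++)
{
    //TryWrite first, and if the channel is full, wait (without blocking a thread) until there is room.
    //WaitToWriteAsync returns false once the channel has been completed, so we stop producing.
    while (!myChannel.Writer.TryWrite(i))
    {
        if (!await myChannel.Writer.WaitToWriteAsync())
        {
            return; //Channel was completed, nothing more can be written
        }
    }
}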


This post is part of a series on Channel in .NET. Of course, it’s always better to start at Part 1, but you can skip anywhere you’d like using the links below.

Part 1 – Getting Started
Part 2 – Advanced Channels
Part 3 – Understanding Back Pressure


In our previous post we looked at some dead simple examples of how Channels work, and we saw some pretty nifty features, but for the most part it was pretty similar to any other queue implementation. So let’s dive into some more advanced topics. Well.. I say advanced, but so much of this is dead simple. This might read like a bit of a feature run-through, but there is a lot to love!

Separation Of Read/Write Concerns

If you’ve ever shared a Queue between two classes, you’ll know that either class can read/write, even if they aren’t supposed to. For example :

class MyProducer
{
    private readonly Queue<int> _queue;

    public MyProducer(Queue<int> queue)
    {
        _queue = queue;
    }
}

class MyConsumer
{
    private readonly Queue<int> _queue;

    public MyConsumer(Queue<int> queue)
    {
        _queue = queue;
    }
}

So while a Producer is supposed to only write to the queue, and a Consumer is supposed to only read, in both cases they can do all operations on the queue. While you might in your own head want the Consumer to only read, another developer might come along and quite happily start calling Enqueue and there’s nothing but a code review to stop them making that mistake.

But with Channels, we can do things differently.

class Program
{
    static async Task Main(string[] args)
    {
        var myChannel = Channel.CreateUnbounded<int>();
        var producer = new MyProducer(myChannel.Writer);
        var consumer = new MyConsumer(myChannel.Reader);
    }
}

class MyProducer
{
    private readonly ChannelWriter<int> _channelWriter;

    public MyProducer(ChannelWriter<int> channelWriter)
    {
        _channelWriter = channelWriter;
    }
}

class MyConsumer
{
    private readonly ChannelReader<int> _channelReader;

    public MyConsumer(ChannelReader<int> channelReader)
    {
        _channelReader = channelReader;
    }
}

In this example I’ve added a main method to show you how the creation of the writer/reader happen, but it’s dead simple. So here we can see that for our Producer, I’ve passed it only a ChannelWriter, so it can only do write operations. And for our Consumer, we’ve passed it a ChannelReader so it can only read.

Of course it doesn’t mean that another developer can’t just modify the code and start injecting the root Channel object, or passing in both the ChannelWriter/ChannelReader, but it at least conveys much better what the intention of the code is.

Completing A Channel

We saw earlier that when we call ReadAsync() on a channel, it will actually sit there waiting for messages, but what if there aren’t any more messages coming? Maybe this is a one time batch job and the batch is completed. Normally with other queues in .NET, we would have to have some sort of shared boolean and/or a CancellationToken passed around. But with Channels, it’s even easier.

Consider the following :

static async Task Main(string[] args)
{
    var myChannel = Channel.CreateUnbounded<int>();

    _ = Task.Factory.StartNew(async () =>
    {
        for (int i = 0; i < 10; i++)
        {
            await myChannel.Writer.WriteAsync(i);
        }

        myChannel.Writer.Complete();
    });

    try
    {
        while (true)
        {
            var item = await myChannel.Reader.ReadAsync();
            Console.WriteLine(item);
            await Task.Delay(1000);
        }
    }catch(ChannelClosedException e)
    {
        Console.WriteLine("Channel was closed!");
    }
}

I’ve made it so that our second thread writes to our channel as fast as possible, then completes it. Then our reader slowly reads with a delay of 1 second between reads. Notice that we catch the ChannelClosedException; this is thrown when you try and read from the closed channel *after* the final message.

I just want to make that clear. Calling Complete() on a channel does not immediately close the channel and kill everyone reading from it. It’s instead a way to notify any readers that once the last message is read, we’re done. That’s important because it means it doesn’t matter if Complete() is called while we are waiting for new items, while the queue is empty, while it’s full etc. We can be sure that we will complete all available work and then finish up.
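
As an aside, a hedged sketch of another way to drain everything without relying on an exception is the WaitToReadAsync/TryRead pairing, since WaitToReadAsync simply returns false once the channel has been completed and emptied :

while (await myChannel.Reader.WaitToReadAsync())
{
    while (myChannel.Reader.TryRead(out var item))
    {
        Console.WriteLine(item);
        await Task.Delay(1000);
    }
}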

Using IAsyncEnumerable With Channels

If we take our example when we try and close a channel, there are two things that stick out to me.

  1. We have a while(true) loop. And this isn’t really that bad, but it’s a bit of an eyesore.
  2. To break out of this loop, and to know that the channel is completed, we have to catch an exception and essentially swallow it.

These problems are solved using the method “ReadAllAsync()”, which returns an IAsyncEnumerable (a bit more on how IAsyncEnumerable works right here). The code looks a bit like so :

static async Task Main(string[] args)
{
    var myChannel = Channel.CreateUnbounded<int>();

    _ = Task.Factory.StartNew(async () =>
    {
        for (int i = 0; i < 10; i++)
        {
            await myChannel.Writer.WriteAsync(i);
        }

        myChannel.Writer.Complete();
    });

    await foreach(var item in myChannel.Reader.ReadAllAsync())
    {
        Console.WriteLine(item);
        await Task.Delay(1000);
    }
}

Now the code reads a lot better and removes some of the extra gunk around catching the exception. Because we are using an IAsyncEnumerable, we can still wait on each item like we previously did, but we no longer have to catch an exception because when the channel completes, it simply says it has nothing more and the loop exits.

Again, this gets rid of some of the messy code you used to have to write when dealing with queues. Where previously you had to write some sort of infinite loop with a breakout clause, now it’s just a real tidy loop that handles everything under the hood.

What’s Next

So far, we’ve been using “Unbounded” channels. And as you’ve probably guessed, of course there is an option to use a bounded channel instead. But what is this? And how does the term “back pressure” relate to it? Check out the next part of this series for a better understanding of back pressure.


This post is part of a series on Channel in .NET. Of course, it’s always better to start at Part 1, but you can skip anywhere you’d like using the links below.

Part 1 – Getting Started
Part 2 – Advanced Channels
Part 3 – Understanding Back Pressure


I’ve recently been playing around with the new Channel<T> type that was introduced in .NET Core 3.X. I think I played around with it when it was first released (along with pipelines), but the documentation was very very sparse and I couldn’t understand how they were different from any other queue.

After playing around with them, I can finally see the appeal and the real power they possess. Most notably with large asynchronous background operations that need almost two-way communication to synchronize what they are doing. That sentence is a bit of a mouthful, but hopefully by the end of this series it will be clear when you should use Channel<T>, and when you should use something more basic like Queue<T>.

What Are Channels?

At its heart, a Channel is essentially a new collection type in .NET that acts very much like the existing Queue<T> type (and its siblings like ConcurrentQueue), but with additional benefits. The problem I found when really trying to research the subject is that many existing external queuing technologies (IBM MQ, RabbitMQ etc) have a concept of a “channel”, and they range from describing it as a completely abstract thought process to it being an actual physical type in their system.

Now maybe I’m completely off base here, but if you think about a Channel in .NET as simply being a queue with additional logic around it (to allow it to wait on new messages, to tell the producer to hold up because the queue is getting large and the consumer can’t keep up, and to give great threadsafe support), I think it’s hard to go wrong.

Now I mentioned a bit of a keyword there: Producer/Consumer. You might have heard of this before, and also of its sibling Pub/Sub. They are not interchangeable.

Pub/Sub describes that act of someone publishing a message, and one or many “subscribers” listening into that message and acting on it. There is no distributing of load because as you add subscribers, they essentially get a copy of the same messages as everyone else.

Producer/Consumer describes the act of a producer publishing a message, and there being one or more consumers who can act on that message, but each message is only read once. It is not duplicated out to each subscriber.

Another way to think about Producer/Consumer is to think about going through a supermarket checkout. As customers try to check out and the queue gets longer, you can simply open more checkouts to process those customers. This little thought process is actually important, because what happens if you can’t open any more checkouts? Should the queue just keep getting longer and longer? What about if a checkout operator is sitting there but there are no customers? Should they just pack it in for the day and go home, or should they be told to sit and wait until there are customers?

This is often called the Producer-Consumer problem and one that Channels aims to fix.

Basic Channel Example

Everything to do with Channels lives inside the System.Threading.Channels namespace. In later versions of .NET Core this seems to be bundled with your standard project, but if not, the nuget package lives here : https://www.nuget.org/packages/System.Threading.Channels.

An extremely simple example using channels would look like so :

static async Task Main(string[] args)
{
    var myChannel = Channel.CreateUnbounded<int>();

    for(int i=0; i < 10; i++)
    {
        await myChannel.Writer.WriteAsync(i);
    }

    while(true)
    {
        var item = await myChannel.Reader.ReadAsync();
        Console.WriteLine(item);
    }
}

There’s not a whole heap to talk about here. We create an “Unbounded” channel (which means it can hold infinite items, but more on that further in the series), and we write 10 items and read 10 items. At this point it’s not a lot different from any other queue we’ve seen in .NET.

Channels Are Threadsafe

That’s right, Channels are threadsafe, meaning that multiple threads can be reading/writing to the same channel without issue. If we take a peek at the Channels source code here, we can see that it’s threadsafe because it uses a combination of locks and an internal “queue” to synchronise readers/writers so they read/write one after the other.

In fact, the intended use case of Channels is multithreaded scenarios. For example, if we take our basic code from above, there is actually a bit of overhead in maintaining that threadsafe-ness when we don’t actually need it, so we are probably better off just using a Queue<T> in that instance. But what about this code?

static async Task Main(string[] args)
{
    var myChannel = Channel.CreateUnbounded<int>();

    _ = Task.Factory.StartNew(async () =>
    {
        for (int i = 0; i < 10; i++)
        {
            await myChannel.Writer.WriteAsync(i);
            await Task.Delay(1000);
        }
    });

    while(true)
    {
        var item = await myChannel.Reader.ReadAsync();
        Console.WriteLine(item);
    }
}

Here we have a separate thread pumping messages in, while our main thread reads the messages out. The interesting thing you’ll notice is that we’ve added a delay between messages. So how come we can call ReadAsync() and things just…. work? There is no TryDequeue or Dequeue that returns null (or false) when there are no messages in the queue, right?

Well, the answer is that a Channel reader’s ReadAsync() method will actually *wait* for a message (but not *block*). So you don’t need some ridiculously tight loop while you wait for messages, and you don’t need to block a thread entirely while waiting. We’ll talk about this more in upcoming posts, but just know you can use ReadAsync to basically await a new message coming through, instead of writing some custom tightly wound code to do the same.
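
One small hedged extra : ReadAsync also accepts a CancellationToken, so a consumer can wait for the next message but still bail out cleanly if asked to (the 30 second timeout here is purely an example) :

var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
try
{
    //Waits (without blocking the thread) for the next item, or throws if the token is cancelled first
    var item = await myChannel.Reader.ReadAsync(cts.Token);
    Console.WriteLine(item);
}
catch (OperationCanceledException)
{
    Console.WriteLine("No message arrived within 30 seconds.");
}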

What’s Next?

Now that you’ve got the basics down, let’s look at some more advanced scenarios using Channels.


Azure’s Key vault is a great secret store with excellent support in .NET (obviously!). But I recently ran into an issue that sent me in circles trying to work out how to load certificates that have been put into Key vault, from .NET. Or more specifically, there are a bunch of gotchas when loading certificates into a .NET/.NET Core application running in Azure App Service. Given both are Azure services, you’ll probably run into this one time or another.

But first, let’s just talk about the code to load a certificate from Key vault in general.

C# Code To Load Certificates From Keyvault

If you’ve already got a Key vault instance (or have newly created one), you’ll need to ensure that you, as in your login to Azure, have been added to the access policy for the Key vault.

A quick note on Access Policies in general. They can become pretty complex, and Key vault recently added the ability to use role based access control and a few other nifty features. You can even authenticate against Key vault using a local certificate on the machine. I’m going to describe how I generally use it for my own projects and other small teams, which involves Managed Identity, but if this doesn’t work for you, you’ll need to investigate the best way of authenticating individuals against Key vault.

Back to the guide. If you created the Key vault yourself, then generally speaking you are automatically added to the access policy. But you can check by looking at the Key vault instance, and checking Access Policies under Settings and ensuring that your user has access.

Next, on your local machine, you need to login to Azure, because when running locally the .NET code uses your Azure CLI login (rather than a managed identity) to gain access to Keyvault. To do that we need to run a couple of commands from a Powershell prompt.

First, run the command

az login

This will pop up a browser window asking you to login to Azure. Complete this, and your powershell window should update with the following :

You have logged in. Now let us find all the subscriptions to which you have access...
[.. All your subscriptions listed here..]

Now if you only have one subscription, you’re good to go. If you have multiple then you need to do something else :

az account set --subscription "YOUR SUBSCRIPTION NAME THAT HAS KEYVAULT"

The reason you need to do this is that once logged into Azure, the CLI works against a single “current” subscription at a time. If you have multiple subscriptions, you need to set the subscription that contains your Keyvault instance as your current one.
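
If you’re ever unsure which subscription you’re currently pointed at, you can check with a standard Azure CLI command :

az account show --output table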

Finally onto the C# code.

Now obviously you’ll want to turn this into a helpful service with a re-usable method, but the actual C# code is simple. Here it is in one block :

// Requires the Microsoft.Azure.KeyVault and Microsoft.Azure.Services.AppAuthentication nuget packages.
var keyVaultUrl = "https://YOURKEYVAULTNAME.vault.azure.net/";
var secretName = "YOURCERTIFICATENAME";
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var client = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
var secret = await client.GetSecretAsync(keyVaultUrl, secretName);
var privateKeyBytes = Convert.FromBase64String(secret.Value);
var certificate = new X509Certificate2(privateKeyBytes, string.Empty);

Again, people often leave comments like “Why don’t you load your keyvault name from App Settings?”. Well I do! But when I’m giving example code I want to break it down to the simplest possible example so that you don’t have to deconstruct it, and rebuild it to suit your own application.

With that out of the way, notice that when we call Key Vault, we don’t actually call “GetCertificate”. We just ask to get a secret. If that secret is a text secret, then it will come through as plain text. If it’s a certificate, the secret value will be a Base64 encoded string of the pfx, which we can then turn into a certificate.

Also note that we aren’t providing any sort of “authentication” to this code. That’s because it uses our Azure login locally (or the managed identity when deployed) to talk to Key vault.

And we are done! This is all the C# code you need. Now if you’re hosting on Azure App Service.. then that’s a different story.

Getting It Working On Azure App Service

Now I thought that deploying everything to an Azure App Service would be the easy part. But as it turns out, it’s a minefield of gotchas.

The first thing is that you need to turn on Managed Identity for the App Service. You can do this by going to your App Service, then Settings => Identity. Turn on System Assigned identity and save.

Now when you go back to your Key vault, go to Access Policies and search for the name of your App Service. Then you can add permissions for your App Service as if it was an actual user getting permissions.
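
If you prefer the command line to clicking around the portal, something like the following Azure CLI commands should achieve the same two steps (a sketch only; the app service, resource group and vault names are placeholders, and the object id comes from the output of the first command) :

az webapp identity assign --name YOUR-APP-SERVICE --resource-group YOUR-RESOURCE-GROUP
az keyvault set-policy --name YOURKEYVAULTNAME --object-id PRINCIPAL-ID-FROM-ABOVE --secret-permissions get list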

So if you’re loading certificates, there are 3 main gotchas, and all 3 will generate this error :

Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException: The system cannot find the file specified.

App Service Plan Level

You must be on a Basic plan or above for the App Service. It cannot be a shared instance (either F1 or D1). The reason behind this is that behind the scenes, on Windows, there is some “User Profile” gunk that needs to be loaded before certificates of any type can be used. This apparently does not work on shared plans.

Extra Application Setting

You must add an Application setting on the App Service called “WEBSITE_LOAD_USER_PROFILE” and set it to 1. This is similar to the above and relates to the user profile gunk that Windows needs, which is apparently not loaded by default in Azure.
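
If you want to script this rather than use the portal, the Azure CLI equivalent is roughly as follows (swap in your own app service and resource group names) :

az webapp config appsettings set --name YOUR-APP-SERVICE --resource-group YOUR-RESOURCE-GROUP --settings WEBSITE_LOAD_USER_PROFILE=1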


Extra Certificate Flags In C#

In your C# code, the only thing that seemed to work for me was adding a couple of extra flags when loading your certificate from the byte array. So we change this :

var certificate = new X509Certificate2(privateKeyBytes, string.Empty);

To this :

var certificate = new X509Certificate2(privateKeyBytes, string.Empty, X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable);

Finally, with all of these extras, you should be able to load a certificate from Key vault into your Azure App Service!

Further Troubleshooting

If you get the following error :

Microsoft.Azure.KeyVault.Models.KeyVaultErrorException: Access denied

In almost all cases, the identity you are running under (your Azure CLI login locally, or the managed identity in Azure App Service) does not have access to the Key vault instance. If you’re getting this when trying to develop locally, generally I find it’s because you’ve selected the wrong subscription after using az login. If you’re running this in an App Service, I find it’s typically because you haven’t set up the managed identity between the App Service and Key vault.


I recently wrote a piece of code that used a self signed certificate to then sign a web request. All went well until it came to unit testing this particular piece of code. I (incorrectly) assumed that I could just write something like so :

var certificate = new X509Certificate2();

And on the face of it, it works. But as soon as the certificate actually starts getting used, there are a lot of internals that suddenly break at runtime.

Luckily there’s actually a simple solution. We can generate our own self signed certificate, and simply “hard code” it in code to be used anywhere we need a certificate.

Generating The Self Signed Certificate Using Powershell

To get this working, we need to use Powershell. If you are on a non-windows machine, then you’ll need to generate a self signed cert (and get the Base64 encoded string) yourself (there’s a rough OpenSSL sketch below), then skip ahead to the “Loading Your Base64 Encoded Certificate” section.
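
For what it’s worth, on Linux/macOS something along these lines should produce an equivalent pfx and Base64 string (a sketch only, untested here; the password matches the one used later in this post) :

openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 3650 -nodes -subj "/CN=MySelfSignedCertificate"
openssl pkcs12 -export -out MySelfSignedCertificate.pfx -inkey key.pem -in cert.pem -passout pass:Password
base64 MySelfSignedCertificate.pfx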

Back to powershell. In an *Administrator* powershell prompt, run the following :

New-SelfSignedCertificate -certstorelocation cert:\localmachine\my -dnsname MySelfSignedCertificate -NotAfter (Get-Date).AddYears(10)

Note that this generates a certificate and installs it into your local machine (Which you can later remove). I couldn’t find a way in powershell to generate the certificate and *immediately* export it without having to first install it into a store location.

The result of this action should print a thumbprint. Save the thumbprint into a notepad because we’ll need it shortly.

The next piece of powershell we need to run is the following :

$password = ConvertTo-SecureString -String "Password" -Force -AsPlainText
Export-PfxCertificate -cert cert:\localMachine\my\{YOURTHUMBPRINTHERE} -FilePath $env:USERPROFILE\Desktop\MySelfSignedCertificate.pfx -Password $password

It creates a secure password (Change if you like but remember it!), and exports the certificate to your desktop as a pfx file. Almost there!

Finally, CD to your desktop and run :

$pfx_cert = get-content 'MySelfSignedCertificate.pfx' -Encoding Byte
$base64 = [System.Convert]::ToBase64String($pfx_cert)
Write-Host $base64

This will read the exported pfx file and print out a long Base64 string. Save this into notepad and move onto the next step.

Loading Your Base64 Encoded Certificate

Once you have the certificate as a Base64 string, then actually, the rest is easy. Here’s the C# code you need.

public static X509Certificate2 GetClientTestingCertificate()
{
    string certificateString = "PUTYOURCERTIFICATESTRINGINHERE";
    var privateKeyBytes = Convert.FromBase64String(certificateString);
    var pfxPassword = "Password";
    return new X509Certificate2(privateKeyBytes, pfxPassword);
}

And that’s literally it. You will now be using a real certificate that doesn’t need to be installed on every single machine for unit tests to run.
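
As an example of how you might consume it, here’s a hypothetical xUnit test (the test name and assertions are mine, and assume the certificate was generated with the Powershell above) :

[Fact]
public void GetClientTestingCertificate_ReturnsUsableCertificate()
{
    var certificate = GetClientTestingCertificate();

    // The hard coded pfx includes the private key, so signing operations will work.
    Assert.True(certificate.HasPrivateKey);
    Assert.Equal("CN=MySelfSignedCertificate", certificate.Subject);
}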

I do want to stress that in most cases, this is purely for unit tests. You should not do this if you actually intend to be signing requests or using a certificate in any other “real” way. Unit tests only!

Cleaning Up (Windows)

If you followed the above steps to generate a certificate, then you’ll also want to go and delete your testing certificate from your local store. To do so, simply open MMC, view your certificate stores for your local computer, and delete the certificate named “MySelfSignedCertificate”.
