Simplifying Multithreaded Scenarios With PostSharp Threading

I’ve recently been diving into the new Channel type in .NET Core, and something I’ve noticed time and time again is how much effort goes into making sure the entire type is threadsafe. That is, if two threads try to act on the same object, they are synchronized one after the other instead of it being a free-for-all. In Microsoft’s case with Channel<T>, they use a combination of the lock keyword, async tasks, and a “queue” to obtain locks.

It almost beggars belief that, at the end of the day, to call something “threadsafe” you have to write hundreds of lines of code that don’t actually provide any functionality except making sure you don’t shoot yourself in the foot in a simple multithreaded scenario. And then there’s the fact that if you get it wrong, you probably won’t know until weird errors start appearing in your production logs that you can never seem to reproduce in development, because you haven’t been able to hit the race condition lottery.

And then I came across the PostSharp Threading library.

The PostSharp Threading Library

To be honest, they had me from the moment I read this beauty of a tagline:

Write verifiable thread-safe code in .NET without your brain exploding with PostSharp Threading

Sounds good to me!

PostSharp Threading is actually part of an entire suite of libraries from PostSharp that work on removing common boilerplate scenarios that I’m almost certain every .NET Developer has run into before. They have solutions for caching, logging, MVVM, and of course, threading. For today, I’m just going to focus on the threading library as that’s been boggling my mind for the past couple of weeks. Let’s jump right in!

Using Locks To Synchronize Multithreaded Data Access In C#

I want to show a dead simple way in which you can tie yourself in knots with multithreading, one that neither the compiler nor the runtime will necessarily warn you about at first (if ever). Take the example code:

class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = new MyClass();
        List<Task> tasks = new List<Task>();

        for(int i=0; i < 100; i++)
        {
            tasks.Add(
                Task.Run(() =>
                {
                    for (int x = 0; x < 100; x++)
                    {
                        myClass.AddMyValue();
                    }
                })
            );
        }

        Task.WaitAll(tasks.ToArray());

        Console.WriteLine(myClass.GetMyValue());
    }
}

class MyClass
{
    private int myValue = 0;

    public void AddMyValue()
    {
        myValue++;
    }

    public int GetMyValue()
    {
        return myValue;
    }
}

Hopefully it’s not too confusing. But let’s talk about some points:

  1. I have a class called “MyClass” that has an integer value, and a method to add 1 to the value.
  2. In my main method, I start 100 threads (!!!) and all these threads do is loop 100 times, adding 1 to the value of myClass.
  3. myClass is shared, so each thread is accessing the same object.
  4. I wait until the threads are all finished.
  5. Then I output the value of myClass.

Any guesses what the output of this program will be? Thinking logically: 100 threads, each looping 100 times, so we should see the application output 10000. Well, I ran this little application 5 times and recorded the results:

6104
8971
9043
9256
8833

Oof, what’s going on here? We have a classic multithreading issue. myValue++ is not atomic — it’s a read, an increment, and a write. Two (or more) threads can read the same value at the same time, both add 1, and both write back the same result, so some of our increments are simply lost.

So how would we solve this *without* PostSharp threading?

At first it actually seems quite simple: we just wrap our increment in a lock like so:

private readonly object myLock = new object();

public void AddMyValue()
{
    //Lock on a dedicated private object (rather than "this")
    //so external code can't accidentally take the same lock.
    lock (myLock)
    {
        myValue++;
    }
}

If we run our application now..

10000

Perfect!
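As an aside — and this is my own addition, nothing to do with PostSharp — for a simple counter like this one, the BCL’s Interlocked class can make the increment atomic without any lock at all. A minimal sketch:

```csharp
using System.Threading;

class MyCounter
{
    private int myValue = 0;

    public void AddMyValue()
    {
        //Atomically increments the field in a single operation,
        //so no lock is required for this simple case.
        Interlocked.Increment(ref myValue);
    }

    public int GetMyValue()
    {
        //Volatile read ensures we observe the most recently written value.
        return Volatile.Read(ref myValue);
    }
}
```

This only stretches so far though. As soon as an invariant spans more than one field or one statement, you’re back to locks, and back to the maintainability problems that come with them.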

But there are some downsides to this, and both are maintainability issues.

  1. What if we have multiple methods in our class? And multiple classes? We now need to spend an afternoon adding locks to every method.
  2. What if a new developer comes along and adds a new method? How do they know that this class is used in multithreaded scenarios requiring locks? The same goes for yourself: you need to remember to wrap *every* method in a lock if you want to keep this class threadsafe. You could easily have a brain-fade moment, not realize you need to add locks, and only once things hit production do you start seeing weird problems.

Using The PostSharp Synchronized Attribute

So how can PostSharp help us? Well, all we do is add the following NuGet package:

Install-Package PostSharp.Patterns.Threading

Then we can modify our class like so:

[Synchronized]
class MyClass
{
    private int myValue = 0;

    public void AddMyValue()
    {
        myValue++;
    }

    public int GetMyValue()
    {
        return myValue;
    }
}

Notice that all we did was add the [Synchronized] attribute to our class and nothing else. This attribute automatically wraps all of our methods in a lock statement, making them threadsafe. If we run our code again, we get the same correct result as using locks, but without having to modify every single method, and without having to remember to add a lock when a new method is added to the class.

You might expect some big long spiel here about how all of this works behind the scenes, but seriously… It. Just. Works.

Using A Reader/Writer Model For Multithreaded Access

In our previous example, we used the Synchronized attribute to wrap all of our class methods in locks. But what if some of them are actually safe to call concurrently? Take the following code example:

class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = new MyClass();
        List<Task> tasks = new List<Task>();

        for (int i = 0; i < 100; i++)
        {
            tasks.Add(
                Task.Run(() =>
                {
                    for (int x = 0; x < 100; x++)
                    {
                        myClass.AddMyValue();
                    }
                })
            );
        }

        Task.WaitAll(tasks.ToArray());

        //Now kick off 10 threads to read the value (as an example!)
        tasks.Clear();

        for (int i = 0; i < 10; i++)
        {
            tasks.Add(Task.Run(() => { var myValue = myClass.GetMyValue(); }));
        }

        Task.WaitAll(tasks.ToArray());
    }
}

[Synchronized]
class MyClass
{
    private int myValue { get; set; }

    public void AddMyValue()
    {
        myValue++;
    }

    public int GetMyValue()
    {
        //Block the thread by sleeping for 1 second. 
        //This is just to simulate us actually doing work. 
        Thread.Sleep(1000);
        return myValue;
    }
}

I know this is a pretty big example but it should be relatively easy to follow as it’s just an extension of our last example.

In this example, we increment the value from a set of threads, then kick off 10 readers to read the value back to us. When we run this app, we might expect it to complete in roughly 1 second: the only delay is the 1000ms sleep in our GetMyValue method, and since the readers are all separate Tasks, they should all complete at roughly the same time.

However, we have also marked the class as Synchronized, and that applies a lock to *all* methods, even ones that are perfectly safe to run concurrently. In our example, there is no danger in allowing GetMyValue() to run across multiple threads at the same time, yet the 10 readers get serialized one after the other. This is commonly referred to as a Reader/Writer problem, and it is generally solved with a “Reader/Writer lock”.

The concept of a Reader/Writer lock can be simplified to the following :

  1. We will allow any number of readers concurrent access to read methods without blocking each other.
  2. A writer requires an exclusive lock (blocking even the readers); once the writer has completed, either the readers or another writer can gain access to the object.
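For comparison, here’s a sketch of what the hand-rolled version of this pattern looks like using the BCL’s ReaderWriterLockSlim (the class name is mine; this is illustrative, not PostSharp’s generated code):

```csharp
using System.Threading;

class MyClassManual
{
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    private int myValue;

    public void AddMyValue()
    {
        rwLock.EnterWriteLock();   //exclusive: blocks readers and other writers
        try { myValue++; }
        finally { rwLock.ExitWriteLock(); }
    }

    public int GetMyValue()
    {
        rwLock.EnterReadLock();    //shared: other readers may enter concurrently
        try
        {
            Thread.Sleep(1000);    //simulate work, as in the example above
            return myValue;
        }
        finally { rwLock.ExitReadLock(); }
    }
}
```

Note the try/finally dance that every single method needs — exactly the kind of boilerplate that bites you when a new method gets added without it.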

This works perfectly for us, because at the end of our application we want to allow all readers access to the value at once without blocking each other. So how can we achieve that? Actually, it’s pretty simple!

[ReaderWriterSynchronized]
class MyClass
{
    private int myValue { get; set; }

    [Writer]
    public void AddMyValue()
    {
        myValue++;
    }

    [Reader]
    public int GetMyValue()
    {
        //Block the thread by sleeping for 1 second. 
        //This is just to simulate us actually doing work. 
        Thread.Sleep(1000);
        return myValue;
    }
}

We change our Synchronized attribute to ReaderWriterSynchronized, then go through and mark each method, noting whether it is a writer (requiring exclusive access) or a reader (allowing concurrent access).

Running our application again, we can now see it completes in roughly 1 second as opposed to 10, as GetMyValue() is now allowed to run concurrently across threads. Perfect!

Solving WPF/WinForms UI Thread Updating Issues

I almost exclusively work with web applications these days, but I can still remember the days of trying to do multithreading in both WinForms and WPF applications. If you’ve ever tried it, how often have you run into the following exception:

System.InvalidOperationException: Cross-thread operation not valid: Control ‘labelStatus’ accessed from a thread other than the thread it was created on.

It can come from something as simple as this in a WinForms app:

private void buttonUpdate_Click(object sender, EventArgs e)
{
    Task.Run(() => UpdateStatus("Update"));
}

private void UpdateStatus(string text)
{
    try
    {
        labelStatus.Text = text;
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}

Note that the whole try/catch with a MessageBox is just so the exception is actually shown rather than being swallowed by the Task. Otherwise, in some cases we may not see the exception at all; it just silently fails, the label text never updates, and we’re left wondering what the heck is going on.

The issue is quite simple. In both WinForms and WPF, controls can only be updated from the “UI thread”. So any background thread (whether a thread, task, or background worker) needs to marshal the update back onto the main UI thread. For WinForms, we use delegates with Invoke, and for WPF/XAML, we use the Dispatcher class. But both require us to write an ungodly amount of code just to do something as simple as update a label.
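For reference, the manual WinForms version of the method above typically looks something like this — a sketch of the usual InvokeRequired/Invoke dance (the WPF equivalent uses Dispatcher.Invoke instead):

```csharp
private void UpdateStatus(string text)
{
    //If we're on a background thread, marshal the call back onto
    //the UI thread that owns the control, then bail out.
    if (labelStatus.InvokeRequired)
    {
        labelStatus.Invoke(new Action<string>(UpdateStatus), text);
        return;
    }

    //Safe: we are now on the UI thread.
    labelStatus.Text = text;
}
```

Multiply that by every method that touches a control and you can see how it adds up.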

I would also note that sometimes you see people recommend adding the following line of code somewhere in your application:

CheckForIllegalCrossThreadCalls = false;

This is a terrible idea and you should never do it. This is basically hiding the error from you but the problem of two threads simultaneously trying to update/use a control still exists!

So how does PostSharp resolve this?

[Dispatched]
private void UpdateStatus(string text)
{
    try
    {
        labelStatus.Text = text;
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}

With literally *one* attribute, of course. You simply mark which methods need to be run on the UI thread, and that’s it! And let me just say one thing: while yes, at some point in your C# career you need to do a deep dive on delegates/actions and marshalling calls, I really wish I’d had this early on in my developer life so I didn’t have to spend hours upon hours writing boilerplate code just to update a label or change the color of a textbox!

Who Is This Library For?

I think if your code is kicking off tasks at any point (especially if you are doing background work in a WinForms/WPF environment), then giving PostSharp Threading a try is a no-brainer. There are actually even more features in the library than I have listed here, including ways to make objects immutable, freeze objects, and even mark objects as unsafe for multithreading just to stop a future developer from shooting themselves in the foot.

Give it a try and drop a comment below on how you got on.


This is a sponsored post however all opinions are mine and mine alone. 
