While working on another post about implementing your own custom ILogger implementation, I came across a method on the ILogger interface that I had never actually used before. It looks something like _logger.BeginScope("Message Here"); . Or rather, it didn’t just take a string, it could take any type and use it for a “scope”. It intrigued me, so I got playing.

The Basics Of Scopes In ILogger

I’m not really sure about Microsoft’s use of the word “scope” here. Scope, when talking about logging, usually implies either the logging level, or which classes use a particular logger. But scope as Microsoft has defined it in ILogger is actually about adding extra messaging onto a log entry. It’s somewhat metadata-ish, but it tends to read more like a stack trace from within a single method.

To turn on scopes, you first need to be using a logger that actually implements them – not all do. Secondly, you need to know whether that particular logger has a setting to turn them on if they are not on by default. Popular loggers like NLog and Serilog do support scopes, but for now we are just going to use the Console logger provided by Microsoft. To turn it on, when configuring our logger we just use the “IncludeScopes” flag.

First I’ll give a bit of code to look at as it’s probably easier than me rambling on trying to explain.
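It was along these lines (a minimal sketch; the service class here is my own invented example, and the exact way to set IncludeScopes varies slightly between ASP.net Core versions – on 2.x it can be set via `logging.AddConsole(options => options.IncludeScopes = true)` in ConfigureLogging) :

```csharp
public class MyService
{
    private readonly ILogger<MyService> _logger;

    public MyService(ILogger<MyService> logger)
    {
        _logger = logger;
    }

    public void DoWork()
    {
        using (_logger.BeginScope("Starting DoWork"))
        {
            _logger.LogInformation("Inside the first scope");

            using (_logger.BeginScope("Doing some inner work"))
            {
                // Scopes nest - this log entry carries both scope messages.
                _logger.LogInformation("Inside the second, nested scope");
            }
        }
    }
}
```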

Now when we run this, we end up with a log like so :

So this particular logger prepends the scope messages to the log output. But it’s important to note that the scope messages are stored separately; it just so happens that this logger’s only way of reporting is to push it all into text. You can see the actual code for the Console logger by Microsoft here : https://github.com/aspnet/Logging/blob/dev/src/Microsoft.Extensions.Logging.Console/ConsoleLogScope.cs. We can see it basically holds the scopes in a hierarchical fashion and doesn’t actually turn them into a string message until we actually log a message.

If for example you were using a Logger that wrote to a database, you could have a separate column for scope data rather than just appending it to the log message. Same thing if you were using a logging provider that allowed some sort of metadata field etc.

Note : I know that the scope message is doubled up. This is because of the way the ConsoleLogger holds these messages. It actually holds them as a key/value pair of string and object. The way it gets the key is by calling ToString() on the message… which in this case means calling ToString() on a string. It then writes out the key followed by the value, hence the doubling up.

So It’s Basically Additional Metadata?

Yes and no. The most important thing about scopes is their hierarchy. Every library I’ve looked at that implements scopes does so in a way that there is a very clear hierarchy to the messages. Plenty of libraries allow you to attach additional information to an exception/error message, but through scopes we can determine the general area our code got up to and the data around it without having to define private variables to hold this information.

Using Non String Values

The interesting thing is that BeginScope actually accepts any object as a scope, but it’s up to the actual Logger to decide how these logs are written. So for example, if I pass a Dictionary of values to a BeginScope statement, some libraries may just ToString() anything that’s given to them, others may check the type and decide that it’s better represented in a different format.

If we take a look at something like NLog. We can see how they handle scopes here : https://github.com/NLog/NLog.Extensions.Logging/blob/master/src/NLog.Extensions.Logging/Logging/NLogLogger.cs#L269. Or if I take the code and paste it up :
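Paraphrasing the linked code (this is a simplified sketch of the idea, not the verbatim NLog source, and the helper method names are mine) :

```csharp
public IDisposable BeginScope<TState>(TState state)
{
    if (state == null)
        throw new ArgumentNullException(nameof(state));

    // A structured scope - split the list open and capture each
    // key/value pair as a separate, nicely formatted scope property.
    if (state is IEnumerable<KeyValuePair<string, object>> scopeProperties)
    {
        return CaptureScopeProperties(scopeProperties);
    }

    // Anything else (most likely a plain string) - push the message as-is.
    return CaptureScopeState(state);
}
```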

We can see it checks if the state message is a type of IEnumerable<KeyValuePair<string, object>> and then throws it off to another method (Which basically splits open the list and creates a nicer looking message). If it’s anything else (Likely a string), we just push the message as is. There isn’t really any defined way provided by Microsoft on how different types should be handled, it’s purely up to the library author.

How About XYZ Logging Library

Because logging scopes are pretty loosely defined within the ASP.net Core framework, it pays to always test how scopes are handled within the particular logging library you are using. There is no way for certain to say that scopes are even supported, or that they will be output in a way that makes any sense. It’s also important to look at the library code or documentation, and see if there is any special handling for scope types that you can take advantage of.

There is a current proposal that’s gaining traction (with some people) to make it into C# 8 : “default interface methods”. Because at the time of writing C# 8 is not out, nor has this new language feature “definitely” made it into that release, this will be more about the proposal and my two cents on it than any specific tutorial on using it.

What Are Default Interface Methods?

Before I start, it’s probably worth reading the summary of the proposal on Github here : https://github.com/dotnet/csharplang/issues/288. It’s important to remember that this is just a proposal, so not everything described will actually be implemented (Or implemented in the way it’s initially being described). Take everything with a grain of salt.

The general idea is that an interface can provide a method body. So :
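For example (the IGreeter name and method here are my own invention, purely to illustrate) :

```csharp
interface IGreeter
{
    // A method body defined directly on the interface.
    void SayHello()
    {
        Console.WriteLine("Hello!");
    }
}
```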

A class that implements this interface does not have to implement methods where a body has been provided. So for example :
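Sticking with an invented IGreeter example (an interface with a default SayHello() body), the implementing class can be completely empty :

```csharp
class Greeter : IGreeter
{
    // No SayHello() here - the default implementation from IGreeter is used.
}
```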

The interesting thing is that the class itself does not expose methods that have only been defined and implemented on the interface; they can only be called when the instance is cast back to the interface. So :
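To make that concrete with an invented example : suppose an interface IGreeter has a default SayHello() body, and a class Greeter : IGreeter provides no body of its own. Then :

```csharp
var greeter = new Greeter();
// greeter.SayHello();          // Compile error - the method isn't on the class.
((IGreeter)greeter).SayHello(); // Works - the method is available via the interface.
```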

The general idea is that this will now allow you to do some sort of multiple inheritance of behaviour/methods (Which was previously unavailable in C#).

There are a few other things that then are required (Or need to become viable) when opening this up. For example allowing private level methods with a body inside an interface (To share code between default implementations).

Abstract Classes vs Default Interface Methods

While it does start to blur the lines a bit, there are still some pretty solid differences. The best quote I heard about it was :

Interfaces define behaviour, classes define state.

And that does make some sense. Interfaces still can’t define a constructor, so if you want to share constructor logic, you will need to use an abstract/base class. An interface also cannot define class level variables/fields.

Classes also have the ability to define the accessibility of their members (For example making a method protected), whereas with an interface everything is public. Although part of the proposal is extending interfaces with things like the static, protected and internal keywords (I really don’t agree with this).

Because the methods themselves are only available when you cast back to the interface, I can’t really see it being a drop in replacement for abstract classes (yet), but it does blur the lines just enough to ask the question.

My Two Cents

This feels like one of those things that just “feels” wrong. And that’s always a hard place to start because it’s never going to be black and white. I feel like interfaces are a very “simple” concept in C# and this complicates things in ways that I don’t see a huge benefit to. It reminds me a bit of the “Primary Constructors” proposal that was going to make it into C# 6 (See more here : http://www.alteridem.net/2014/09/08/c-6-0-primary-constructors/). Thankfully that got dumped, but it was bolting on a feature that I’m not sure anyone was really clamoring for.

But then again, there are some merits to the conversation. One “problem” with interfaces is that you have to implement every single member. This can at times lock down any ability to extend the interface, because you will immediately blow up any class that has already implemented that interface (Or it means your class becomes littered with throw new NotImplementedException() ).

There are even times when you implement an interface for the first time and have to pump out 20 or so method implementations that are almost boilerplate in content. A good example given in the proposal is that of IEnumerable. Each time you implement this you are required to get RSI in your fingers implementing every detail. Whereas if there were default implementations, there might be only a couple of methods that you truly want to make your own, and the default implementations would do the job just fine for everything else.

All in all, I feel like the proposal should be broken down and simplified. It’s almost like an American political bill in that there seems to be a lot in there that’s causing a bit of noise (allowing static, protected members on interfaces etc). It needs to be simplified down to just the conversation on method bodies in interfaces, to see how that works out, because it’s probably a conversation worth having.

What do you think? Drop a comment below.

While helping a new developer get started with ASP.net Core, they ran into an interesting exception :

InvalidOperationException: Cannot consume scoped service from singleton.

I found it interesting because it’s actually the DI service container of ASP.net Core trying to make sure you don’t trip yourself up. Although it’s not foolproof (They still give you enough rope to hang yourself), it’s actually trying to stop you making a classic DI scoping mistake. I thought I would build up an example with code to first show them, then show you here! Of course, if you don’t care about any of that, there is a nice TL;DR right at the end of this post!

The Setup

So there is a little bit of code setup before we start explaining everything. The first thing we need is a “Child” service :
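Roughly like the following (a static counter is the simplest way to track creations across instances; the original code may have differed slightly) :

```csharp
public class ChildService
{
    // Incremented every time the container constructs a new instance.
    public static int CreationCount { get; private set; }

    public ChildService()
    {
        CreationCount++;
    }
}
```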

The reason we have a property here called “CreationCount” is because later on we are going to test whether a service is being created on each page refresh, or re-using existing instances (Don’t worry, it’ll make sense!)

Next we are going to create *two* parent classes. I went with a Mother and Father class, for no other reason than the names seemed to fit (They are both parent classes). They also have a creation count, and reference the child service.
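Something along these lines (assuming a ChildService class with the same static CreationCount pattern) :

```csharp
public class MotherService
{
    public static int CreationCount { get; private set; }
    private readonly ChildService _childService;

    public MotherService(ChildService childService)
    {
        _childService = childService;
        CreationCount++;
    }
}

public class FatherService
{
    public static int CreationCount { get; private set; }
    private readonly ChildService _childService;

    public FatherService(ChildService childService)
    {
        _childService = childService;
        CreationCount++;
    }
}
```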

Finally, we should create a controller that simply outputs how many times the services have been created. This will help us later in understanding how our service collection is created and, in some circumstances, shared.
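A sketch of that controller (the exact output formatting is unimportant; taking the services as constructor dependencies is what forces the container to resolve them) :

```csharp
public class HomeController : Controller
{
    // Requesting the services here makes the container construct them per request.
    public HomeController(MotherService motherService, FatherService fatherService)
    {
    }

    public IActionResult Index()
    {
        return Content($"Mother : {MotherService.CreationCount}, " +
                       $"Father : {FatherService.CreationCount}, " +
                       $"Child : {ChildService.CreationCount}");
    }
}
```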

The Scoped Service Problem

The first thing we want to do, is add a few lines to the ConfigureServices method of our startup.cs. This first time around, all services will be singletons.
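All three registered as singletons :

```csharp
services.AddSingleton<ChildService>();
services.AddSingleton<MotherService>();
services.AddSingleton<FatherService>();
```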

Now, let’s load our page and refresh it a few times and see what the output is. The results of 3 page refreshes look like so :

So this makes sense. A singleton is one instance per application, so no matter how many times we refresh the page, it’s never going to create a new instance.

Let’s change things up and make everything transient.
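The same three registrations, now transient :

```csharp
services.AddTransient<ChildService>();
services.AddTransient<MotherService>();
services.AddTransient<FatherService>();
```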

And then we load it up 3 times.

It’s what we would expect. The “parent” classes go up by 1 each time. The child actually goes up by 2 because it’s being “requested” twice per page load. One for the “mother” and one for the “father”.

Let’s instead make everything scoped. Scoped means that a new instance will be created essentially per page load (At least, that’s the “scope” within ASP.net Core).
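So the registrations become :

```csharp
services.AddScoped<ChildService>();
services.AddScoped<MotherService>();
services.AddScoped<FatherService>();
```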

And now let’s refresh our page 3 times.

OK this makes sense. Each page load, it creates a new instance for each. But unlike the transient example, our child creation only gets created once per page load even though two different services require it as a dependency.

Right, so we have everything as scoped. Let’s try something. Let’s say we decide that our parent layer should be singleton. There could be a few reasons for this : there could be some sort of “cache” we want to use, or it might have to hold state. The one reason we shouldn’t use to justify a singleton is performance. Very, very rarely do I actually see any benefit from changing services to singletons, and the eventual screw-ups from things being unintuitively singleton are probably worse in the long run.

We change our ConfigureServices method to :
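Scoped child, singleton parents :

```csharp
services.AddScoped<ChildService>();
services.AddSingleton<MotherService>();
services.AddSingleton<FatherService>();
```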

We hit run and….

So because our ChildService is scoped, but the FatherService is singleton, it’s not going to allow us to run. So why?

If we think back to how our creation counts worked : when we have a scoped instance, each time we load the page, a new instance of our ChildService is created and injected into the parent service. Whereas when we use a singleton, it keeps the exact same instance (Including the same child services). When we make the parent service a singleton, the child service can no longer be created per page load. ASP.net Core is essentially stopping us from falling into the trap of thinking that a child service would be created per page request when, if the parent is a singleton, that’s impossible. This is why the exception is thrown.

What’s the fix? Either the parent needs to be scoped or transient, or the child needs to be a singleton itself. The other option is to use a service locator pattern, but this is not really recommended. Typically when you make a class/interface singleton, it’s done for a very good reason. If you don’t have a reason, make it transient.

The Singleton Transient Trap

The interesting thing about ASP.net Core stopping you from using a scoped instance within a singleton is that the same “problem” arises from a transient within a singleton too.

So to refresh your memory. When we have all our services as transient :
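That is, everything registered as transient :

```csharp
services.AddTransient<ChildService>();
services.AddTransient<MotherService>();
services.AddTransient<FatherService>();
```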

We load the page 3 times, and we see that we create a new instance every single time the class is requested.

So let’s try something else. Let’s make the parent classes singleton, but leave the child as a transient.
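Transient child, singleton parents :

```csharp
services.AddTransient<ChildService>();
services.AddSingleton<MotherService>();
services.AddSingleton<FatherService>();
```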

So we load the page 3 times. And no exception but…

This is pretty close to having the child as scoped with the parents as singletons. It’s still going to give us some unintended behaviour, because the transient child is not going to be created “every time” as we might first think. Sure, it will be created every time it’s “requested”, but that will only be twice (Once for each of the parents).

I think the logic behind this not throwing an exception, while scoped does, is that transient means “every time this service is requested, create a new instance”, so technically this is correct behaviour (Even though it’s likely to cause issues). Whereas a “scoped” instance in ASP.net Core means “a new instance per page request”, which cannot be fulfilled when the parent is a singleton.

Still, you should avoid this situation as much as possible. It’s highly unlikely that having a parent as a singleton and the child as transient is the behaviour you are looking for.


TL;DR : A Singleton cannot reference a Scoped instance.


Astute observers will note that .NET Core has been out for quite some time now, with no “GUI” option for building applications. This seems rather deliberate, with Microsoft intending to focus on web applications instead of desktop apps. This is coupled with the fact that creating a cross platform GUI library is easier said than done. Creating something that looks good on Windows, Mac and Linux is no easy feat.

And that’s where Eto.Forms jumps in. It’s a GUI library that uses the native toolkit of each OS/platform through a simple C# API. Depending on where it is run, it will use WPF, MonoMac or GTK#. It isn’t necessarily using .NET Core (In fact it definitely isn’t; it uses Mono on non-Windows platforms), but because one of the main reasons to switch to Core is its ability to be cross platform and work on non-Windows OSes, I thought I would have a little play with it.

In this first post, we will just be working out how to do a simple test application. In future posts, we will work out how to run platform specific code, and how better to deploy our application so it doesn’t involve a tonne of dependencies being installed on the machine (For example bundling mono with your application). I’ve spent quite a bit of time tinkering with this and so far, definitely so good.

Setup Within Visual Studio

When using Visual Studio, we want to take advantage of code templates so that our “base” solution is created for us.

Inside Visual Studio, go to Tools -> Extensions and Updates. Hit “Online” on the left hand side, then type in “Eto.Forms” in your search box.

Go ahead and install the Eto.Forms Visual Studio Addin. It’s highly likely after install you will also need to restart Visual Studio to complete the installation.

When you are back in Visual Studio, go ahead and create a new project. You should now have a new option to create an “ETO Forms Project”.

After selecting this, you are given a few options for the project. The first is whether you want all 3 platforms within a single application, or a separate application for each. For the purpose of this demo we are going to do 3 separate projects. The only reason we do this is that building becomes easier to “see” because you have a different bin folder for each platform. As a single project, everything gets built into the same bin folder so it just looks a bit overwhelming.

Next you are given options on how you want to define your layout. And here’s where you might tune out. If you are expecting some sort of WYSIWYG designer, it ain’t happening. You can define everything with C# code (Sort of like how Winforms would generate a tonne of code in the background for you), you can use XAML, or you can use JSON. Me personally, I’m a WinForms kinda guy so I prefer defining the forms in C# code, and that’s what I’m going to use in this example. I can definitely see someone in the future building a tool on top that simply generates the underlying code, but for now, you’ll have to live with designing by hand.

Now we should be left with 4 projects. The first being our “base” project. This defines the forms and shared code no matter which platform we actually end up running on. The other 3 projects are for Linux, Mac and Windows respectively. As I said earlier, if we select the option to have a “single” project, we actually end up with two projects. One that contains shared code, and one that actually launches the thing (But builds for all 3 platforms).

In each of these .csproj files, you will find a line like :
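In my generated projects this was a package reference along these lines (the exact package name and version will depend on the template version, so treat this as illustrative) :

```xml
<PackageReference Include="Eto.Platform.Wpf" Version="2.*" />
```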

Which basically tells it which platform to build for. And again, you can combine them within the same project to build for two different platforms within the single project.

Something that I was super surprised about is that there is no Mac requirement to do a full build of the Mac application. I wasn’t really up to date with how MacOS apps are typically built, but I thought if they were anything like iOS you would need a Mac just to build the thing. No such requirement here (Although I suspect you will have one anyway if you actually intend to develop Mac applications).

One big thing : when I actually did all of this, there was a whole host of issues getting the project to run. Weirdly enough, updating Visual Studio itself was enough to get everything up and running (You need to do this *before* you create the project). So ensure you are on the latest version of VS.

So how does it actually look when run? Let’s put Windows and Linux side by side.

Definitely some interesting things right off the bat. Mostly that on Linux, we don’t really use Menu Bars, instead they are similar to a Mac where along the top of the window you have your “File” option etc. Also that the sizing is vastly different between the two, both in height and width! The main point is, we managed to build a GUI application sharing the exact same code, and have it run on Windows and Linux! Cool!

One final thing, you can use Visual Studio to add new forms/components instead of typing them up manually. In reality, it doesn’t really do that much boiler plate for you, but it’s there if you need it. Just right click your shared code project, and select Add New Item as you typically would. You should have a new option of adding an “Eto.Forms View”. From here you are given options on what you are trying to create (A panel, dialog or form), and how you want it to be designed (Code, XAML, Json etc).

Setup From A Terminal/Command Prompt

Alright, so you are one of “those” people that want to use a terminal/command prompt everywhere. Well that’s ok, Eto has you covered! In reality because there is no WYSIWYG designer, there isn’t any great need for Visual Studio if you prefer to use VS Code. Realistically the only difference is how you create the initial project template (Since Visual Studio extensions are going to be off the cards).

The first thing we are going to do is install the Eto.Forms templates. To do that, we run a specific dotnet command in our terminal/command prompt.
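At the time of writing, the template package was named Eto.Forms.Templates (check the Eto.Forms docs if this has since changed) :

```
dotnet new -i Eto.Forms.Templates
```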

To see all available options to us, we can run :
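Assuming the application template short name is etoapp (as it was when I ran this) :

```
dotnet new etoapp --help
```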

But we are pros and know exactly what we want. So we run :
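The flags match what’s described below (again assuming the etoapp template short name) :

```
dotnet new etoapp -sln -m code -s
```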

What this does is create a solution in the current directory (Named after the folder name). It creates a solution for us because of the -sln flag. The -m flag tells it which “mode” we want to use for designing our forms (In our case, code). And finally -s says to create separate projects for each platform. And that’s it! We have a fully templated project that is essentially identical to the one we created in Visual Studio.

One more thing, you should also be able to create “files” in a similar way to creating them in Visual Studio. So it does a bit of boiler plate legwork for you. To see all available options to you, you can run :
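Assuming the file template short name is etofile (the name as I remember it; check the template list if it differs) :

```
dotnet new etofile --help
```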

Again, it’s basically creating a class with the right inheritance and that’s about it. But it can be handy.


Ever since I posted a quick guide to sending email via Mailkit in .NET Core 2, I have been inundated with comments and emails asking about specific exception messages thrown by either Mailkit or the underlying .NET Core networking code. Rather than replying to each individual email, I thought I would collate all the email-sending errors here. It won’t be pretty to read, but hopefully if you’ve landed here from a search engine, this post will be able to resolve your issue.

I should also note that the solution to our “errors” is usually a change in the way we use the MailKit library. If you are using your own mail library (Or worse, trying to open the TCP ports yourself and manage the connections), you will need to “translate” the fix to be applicable to your code.

Let’s go!

Default Ports

Before we go any further, it’s worth noting the “default” ports that SMTP is likely to use, and the settings that go along with them.

Port 25
25 is typically used as a clear text SMTP port. Usually when you have SMTP running on port 25, there is no SSL and no TLS. So set your email client accordingly.

Port 465
465 is usually used for SMTP over SSL, so you will need to have settings that reflect that. However it does not use TLS.

Port 587
587 is usually used for SMTP when using TLS. TLS is not the same as SSL. Typically anything where you need to set SSL to be on or off should be set to off. This includes any setting that goes along the lines of “SSLOnConnect”. It doesn’t mean TLS is less secure, it’s just a different way of creating the connection. Instead of SSL being used straight away, the secure connection is “negotiated” after connection.

Settings that are relevant to port 587 usually go along the lines of “StartTLS” or simply “TLS”. Something like Mailkit usually handles TLS for you so you shouldn’t need to do anything extra.

Incorrect SMTP Host

System.Net.Sockets.SocketException: No such host is known

This one is an easy fix. It means that you have used an incorrect SMTP hostname (Usually just a typo). So for example using smp.gmail.com instead of smtp.gmail.com. It should be noted that it isn’t indicative that the port is wrong, only the host. See the next error for port issues!

Incorrect SMTP Port

System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: A socket operation was attempted to an unreachable network [IpAddress]:[Port]

Another rather simple one to fix. This message almost always indicates you are connecting to a port that isn’t open. It’s important to distinguish that it’s not that you are connecting with SSL to a non SSL port or anything along those lines, it’s just an out and out completely wrong port.

It’s important that when you log the exception message of this error, you check what port it actually says in the log. I can’t tell you how many times I “thought” I was using a particular port, but as it turned out, somewhere in the code it was hardcoded to use a different port.

Another thing to note is that this error will manifest itself as a “timeout”. So if you try and send an email with an incorrect port, it might take about 30 seconds for the exception to actually surface. This is important to note if your system slows to a crawl, as it gives you a hint of what the issue could be even before checking logs.

Forcing Non-SSL On An SSL Port

MailKit.Net.Smtp.SmtpProtocolException: The SMTP server has unexpectedly disconnected.

I want to point out this is a MailKit exception, not one from .NET. In my experience this exception usually pops up when you set Mailkit to not use SSL, but the port itself requires SSL.

As an example, when connecting to Gmail, the port 465 is used for SSL connections. If I try and connect to Gmail with the following Mailkit code :
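Something like this (needs `using MailKit.Net.Smtp;`) :

```csharp
using (var client = new SmtpClient())
{
    // The final "false" tells MailKit not to use SSL - but 465 is an SSL
    // port, so the server disconnects and SmtpProtocolException is thrown.
    client.Connect("smtp.gmail.com", 465, false);
}
```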

That last parameter that’s set to false is telling Mailkit to not use SSL. And what do you know, we get the above exception. The easy fix is obviously to change this to “true”.

Alternatively, you might be getting confused about TLS vs SSL. If the port says to connect using SSL, then you should not “force” any TLS connection.

Connecting Using SSL On A TLS Port (Sometimes)

MailKit.Security.SslHandshakeException: An error occurred while attempting to establish an SSL or TLS connection.

Another Mailkit specific exception. This one can be a bit confusing, especially because the full error log from MailKit talks about certificate issues (Which it totally could be!), but typically when I see people getting this one, it’s because they are trying to use SSL when the port they are connecting to is for TLS. SSL != TLS.

So instead of trying to connect using MailKit by forcing SSL, just connect without passing in an SSL parameter.
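That is, with no SSL parameter at all (needs `using MailKit.Net.Smtp;`) :

```csharp
using (var client = new SmtpClient())
{
    // No SSL parameter - MailKit works out the right thing to do itself
    // (on port 587 it negotiates StartTLS after connecting).
    client.Connect("smtp.gmail.com", 587);
}
```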

I should note that you will also get this exception in Mailkit if you try and force the SSLOnConnect option on a TLS Port. So for example, the following will not work :
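For example (needs `using MailKit.Net.Smtp;` and `using MailKit.Security;`) :

```csharp
using (var client = new SmtpClient())
{
    // Forcing SSL-on-connect against a StartTLS port - SslHandshakeException.
    client.Connect("smtp.gmail.com", 587, SecureSocketOptions.SslOnConnect);
}
```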

Again, sometimes this is because the SSL connection truly is bogus (Either because the certificate is bad, expired etc), or because you are connecting to a port with no security whatsoever. But because of the rise of TLS, I’m going to say the most likely scenario is that you are trying to force SSL on a TLS port.

Another error that you can sometimes get with a similar setup is :

Handshake failed due to an unexpected packet format

This is an exception that has a whole range of causes, but the most common is forcing an SSL connection on a TLS port. If your SMTP port supports “TLS”, then do not set SSL to true.

Forcing No TLS On A TLS Port

System.NotSupportedException: The SMTP server does not support authentication.

This one is a little weird because you basically have to go right out of your way to get this exception. The only way I have seen this exception come up is when you tell Mailkit to ignore any TLS negotiation it’s offered and try to authenticate straight away.

So for example, the following code would cause an issue :
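Something like this (needs `using MailKit.Net.Smtp;` and `using MailKit.Security;`; the credentials are placeholders) :

```csharp
using (var client = new SmtpClient())
{
    // SecureSocketOptions.None tells MailKit to skip TLS entirely. The server
    // then never offers AUTH, so Authenticate throws NotSupportedException.
    client.Connect("smtp.gmail.com", 587, SecureSocketOptions.None);
    client.Authenticate("username", "password");
}
```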

Incorrect Username And/Or Password

MailKit.Security.AuthenticationException: AuthenticationInvalidCredentials: 5.7.8 Username and Password not accepted.

Goes without saying, you have the wrong username and password. This is not going to be any connection issues. You definitely have the right host and port, and the right SSL/TLS settings, but your username/password combination is wrong.

Other Errors?

Have something else going haywire? Drop a comment below!

Many years back, I actually started programming so that I could cheat at an online web browser based game (I know, I know). I started in Pascal/Delphi, but quickly moved onto C#. Back then, I was using things like HttpWebRequest/Response to load the webpage, and using raw regex to parse it. Looking back, some of my earlier apps I wrote were just a complete mess. Things have changed, and it’s now easier than ever to automate web requests using C#/.NET Core. Let’s get started.

Loading The Webpage

Let’s start by creating a simple console application. Immediately, let’s go ahead and change this to an async entry point. So from this :
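The standard entry point :

```csharp
static void Main(string[] args)
{
}
```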

We just make it async and return Task :
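Like so (needs `using System.Threading.Tasks;`) :

```csharp
static async Task Main(string[] args)
{
}
```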

However if you’re using an older version of C#, you will get an error message that there is no valid entry point. Something like :

In which case you need to do something a little more complex. You need to turn your main method to look like the following :
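Along these lines (needs `using System.Threading.Tasks;`) :

```csharp
static void Main(string[] args)
{
    // Block on the async method so the sync entry point can run async code.
    MainAsync(args).GetAwaiter().GetResult();
}

static async Task MainAsync(string[] args)
{
    // The rest of this tutorial's code goes in here.
}
```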

What this essentially does is allow you to use async methods in a “sync” application. I’ll be using a proper async entry point, but if you want to go along with this second method, then you need to put the rest of this tutorial into your “MainAsync”.

For this guide I’m actually going to be loading my local news site, and pulling out the “Headline” article title and showing it on a console screen. The site itself is http://www.nzherald.co.nz/.

Loading the webpage is actually rather easy. I jam the following into my main method :
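Something like the following (needs `using System.Net.Http;` and `using System.Threading.Tasks;`; normally you’d reuse a single HttpClient, but for a quick console demo this is fine) :

```csharp
static async Task Main(string[] args)
{
    var client = new HttpClient();
    var html = await client.GetStringAsync("http://www.nzherald.co.nz/");

    // Dump the raw HTML to the console so we can see what we got back.
    Console.WriteLine(html);
    Console.ReadLine();
}
```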

And run it to check the results :

Great! So we loaded the page via code, and we just spurted it all out into the window. Not that interesting so far, we probably need a way to extract some content from the page.

Extracting HTML Content

As I said earlier, in the early days I just used raw regex and with a bit of trial and error got things working. But there are much, much easier ways to read content these days. One of these is to use HtmlAgilityPack (HAP). HAP has been around for years, which means that while it’s battle tested and pretty bug free, its usage can be a bit “archaic”. Let’s give it a go.

To use HAP, we first need to install the nuget package. Run the following from your Package Manager Console :
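The package name is simply HtmlAgilityPack :

```
Install-Package HtmlAgilityPack
```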

The first thing we need to do is feed our HTML into an HtmlDocument. We do that like so :
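Assuming the `html` string we downloaded earlier (needs `using HtmlAgilityPack;`) :

```csharp
// Parse the raw HTML string into a traversable document.
var htmlDocument = new HtmlDocument();
htmlDocument.LoadHtml(html);
```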

Now given I wanted to try and read the title of the “headline” article on NZHerald, how do we go about finding that piece of text?

HAP (And most other parsers) works using a thing called “XPath“. It’s a sort of query language that tells us how to traverse through an XML document. Because HTML is pretty close to XML, it’s also pretty handy here. In fact, a big part of what HAP does is try and clean up the HTML so it’s able to be parsed more or less as an XML document.

Now the thing with XPath within an HTML document is that it can be a bit trial and error. Your starting point should be pulling up your browsers dev tools, and just inspecting the divs/headers/containers around the data you want to extract.

In my case, the first thing I notice is that the headline article is contained within a div that has a class of pb-f-homepage-hero . As I scroll down the page, this class is actually used on a few other containers, but the first one in the HTML document is the one I want. On top of that, our text is inside a nice h3  tag within this container.

So given these facts, (And that I know a bit of XPath off the top of my head), the XPath we are going to use is :
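Putting those pieces together :

```
(//div[contains(@class,'pb-f-homepage-hero')]//h3)[1]
```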

Let’s break it down a bit.

//  says that we can start anywhere within the document (Not just the first line).
div  says that we are looking for a div that…
[contains(@class,'pb-f-homepage-hero')]  contains a “class” attribute that has the text ‘pb-f-homepage-hero’ somewhere within it.
//h3  says that within this div element, somewhere in there (Doesn’t have to be the first child, could be deeper), there will be an h3
()[1]  says just get me the first one.

We can actually test this by going here : http://videlibri.sourceforge.net/cgi-bin/xidelcgi. We can paste our HTML into the first window, our xpath into the second, and the results of our expression will come up below. There are a few XPath testers out there, but I like this one in particular because it allows for some pretty busted up HTML (Others try and validate that it’s valid XML).

So we have our XPath, how about we actually use it?
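A sketch of plugging it in (assuming the htmlDocument we loaded earlier):

```csharp
// SelectSingleNode takes our XPath; InnerText strips the markup away
// and gives us just the text inside the node.
var headline = htmlDocument.DocumentNode
    .SelectSingleNode("(//div[contains(@class,'pb-f-homepage-hero')]//h3)[1]")
    .InnerText;

Console.WriteLine(headline);
```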

So really we can paste our entire XPath into a “SelectSingleNode” statement, and we use the “InnerText” property to get just the textual content within that node (Rather than the HTML).

If we let this run :

Done! We have successfully grabbed the headline news title! (Which is currently that a political party within NZ has nominated Simon Bridges as its new leader).

In a previous post we talked about the new in  keyword in C# 7.2, which got me going through the new features in C# and how they all fit together. One of these new features is the private protected  access modifier. Now you’ve heard of marking properties/variables with private , and you’ve heard of them being marked protected , but have you heard of them being marked private protected ?! Next level! Let’s take a look.

How Protected Currently Works

First, let’s look at a basic example of how the protected keyword can be used. A member that is marked with only protected, can only be accessed within derived classes, but the derived class can be in any assembly. So for example, in “assembly1” we have the following code :
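Reconstructing the idea (the class and member names follow the discussion below):

```csharp
// assembly1
public class BaseClass
{
    protected string BaseMember = "Hello";
}

public class DerivedClass : BaseClass
{
    // Fine - derived classes can access protected members.
    public string GetMember() => BaseMember;
}

public class ServiceClass
{
    public string GetMember()
    {
        var baseClass = new BaseClass();
        // Compile error - protected members can't be accessed
        // through an instance from a non-derived class.
        return baseClass.BaseMember;
    }
}
```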

In this example, our DerivedClass has no issues accessing the BaseMember because it has inherited the base class. However our ServiceClass throws a compile time error that it cannot access the member from an instantiated class. Makes sense.

In a second assembly, for argument's sake let's call it "assembly2", we create the following class :
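Something along these lines (the "2" suffixed names are just for illustration):

```csharp
// assembly2 (references assembly1)
public class DerivedClass2 : BaseClass
{
    // Still fine - protected doesn't care which assembly
    // the derived class lives in.
    public string GetMember() => BaseMember;
}

public class ServiceClass2
{
    public string GetMember()
    {
        var baseClass = new BaseClass();
        return baseClass.BaseMember; // Still a compile error.
    }
}
```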

This essentially has the same effect as what we did in assembly1. Our derived class still works even though it's in another assembly, because protected doesn't care about where the derived class is actually located, only that it inherits from the base class. And our service is still broken, as we would expect.

So that’s protected done.

How Internal Protected Works

So a little known access modifier that has been in the C# language for a while is internal protected . Now at first glance you might think that it’s an “and” type deal. Where the member is internal and protected e.g. It can only be accessed within the same assembly. But actually it’s an “or” situation. You can access the member if you are inside a derived class or you are within the same assembly.

Going back to our example. Inside assembly1 if we go ahead and change the BaseMember to be internal protected :
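The change itself is a one-liner:

```csharp
// assembly1
public class BaseClass
{
    internal protected string BaseMember = "Hello";
}
```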

We can now see that inside our ServiceClass, when we instantiate the instance of the BaseClass, we can now access the BaseMember. And our DerivedClass is still trucking along as per normal. This is where the “or” comes in.

If we head to our second assembly (assembly2). And recheck our code there :

Nothing has changed. Our DerivedClass can still access the BaseMember, but our ServiceClass that is instantiating up an instance cannot.

How Private Protected Works

Now we are onto the new stuff! Before we get into explanations, let’s keep running with our example and change our BaseClass to use private protected  and see how we go.

First assembly1 :
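Again a one-line change on the base class:

```csharp
// assembly1
public class BaseClass
{
    private protected string BaseMember = "Hello";
}
```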

So we have changed our BaseMember to being private protected. Our DerivedClass still doesn’t have an issue, but our ServiceClass is throwing up an error at compile time again. So far, it’s working just like how protected has always worked.

Onto assembly2 :
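Sketching out the second assembly again (names are illustrative):

```csharp
// assembly2 (references assembly1)
public class DerivedClass2 : BaseClass
{
    // Compile error - private protected members are only visible
    // to derived classes within the *same* assembly.
    public string GetMember() => BaseMember;
}
```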

What have we here!? So now in our second assembly, even our DerivedClass is throwing up an error, even though it inherits the BaseClass. Our ServiceClass is still in strife, so nothing has changed there.

From this we can see that using private protected  means basically what I originally thought internal protected  meant. It’s protected so that only derived classes can use it, but it’s also only for use within the same assembly.

If you haven't already gathered, I'm not a big fan of the naming. The internal  keyword already sparks off an "only within this assembly" lightbulb in a developer's head, so using the private  keyword to limit a member across assemblies seems like an odd choice. But internal protected  was already taken, so it's a bit of a rock and a hard place.

What do you think?

In Chart Form

In case you are still a little confused (And because things are always better in chart form), here’s a handy chart to make things just that little bit easier.

| | Derived Class / Same Assembly | Instance / Same Assembly | Derived Class / Different Assembly | Instance / Different Assembly |
|---|---|---|---|---|
| protected | Yes | No | Yes | No |
| internal protected | Yes | Yes | Yes | No |
| private protected | Yes | No | No | No |


I've recently been playing around with all of the new features packaged into C# 7.2. One such feature that piqued my interest because of its simplicity was the "in" keyword. It's one of those things that you can get away with never using in your day to day work, but makes complete sense when looking at language design from a high level.

In simple terms, the in  keyword specifies that you are passing a parameter by reference, but also that you will not modify the value inside a method. For example :
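A minimal sketch (the method and parameter names are just for illustration):

```csharp
// Both parameters are passed by reference, but are readonly
// inside the method body.
static int Add(in int value1, in int value2)
{
    return value1 + value2;
}
```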

In this example we pass in two parameters essentially by reference, but we also specify that they will not be modified within the method. If we try and do something like the following :
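For example, trying to assign to one of the parameters:

```csharp
static int Add(in int value1, in int value2)
{
    value1 = 5; // Not allowed - 'in' parameters are readonly.
    return value1 + value2;
}
```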

We get a compile time error:
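The message is along these lines (hedging slightly on the exact wording):

```
error CS8331: Cannot assign to variable 'in int' because it is a readonly variable
```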

Accessing C# 7.2 Features

I think I should just quickly note how to actually access C# 7.2 features as they may not be immediately available on your machine. The first step is to ensure that Visual Studio is up to date on your machine. If you are able to compile C# 7.2 code, but intellisense is acting up and not recognizing new features, 99% of the time you just need to update Visual Studio.

Once updated, inside the csproj of your project, add in the following :
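The property in question is LangVersion; a minimal example:

```xml
<PropertyGroup>
  <LangVersion>7.2</LangVersion>
</PropertyGroup>
```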

And you are done!


When passing value types into a method, normally this would be copied to a new memory location and you would have a clone of the value passed into a method. When using the in  keyword, you will be passing the same reference into a method instead of having to create a copy. While the performance benefit may be small in simple business applications, in a tight loop this could easily add up.

But just how much performance gain are we going to see? I could take some really smart people’s word for it, or I could do a little bit of benchmarking. I’m going to use BenchmarkDotNet (Guide here) to compare performance when passing a value type into a method normally, or as an in  parameter.

The benchmarking code is :
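In the spirit of the original, a sketch of what such a benchmark might look like (class and method names are illustrative):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class InKeywordBenchmark
{
    private int _value = 1;

    [Benchmark(Baseline = true)]
    public int ByValue() => DoNothing(_value);

    [Benchmark]
    public int ByIn() => DoNothingIn(in _value);

    // Pass by value - the int is copied.
    private int DoNothing(int value) => value;

    // Pass by readonly reference - no copy is made.
    private int DoNothingIn(in int value) => value;
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<InKeywordBenchmark>();
}
```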

And the results :

We can definitely see the speed difference here. This makes sense because really all we are doing is passing in a variable and doing nothing else. It has to be said that I can’t see the in  keyword being used to optimize code in everyday business applications, but there is definitely something there for time critical applications such as large scale number crunchers.

Explicit In Design

While the performance benefits are OK, something else that comes to mind is that when you use in , you are being explicit in your design. By that I mean that you are laying out exactly how you intend the application to function: "If I pass a variable into this method, I don't expect the variable to change". I can see this being a bigger benefit to large business applications than any small performance gain.

A way to look at it is how we use things like private  and readonly . Our code will generally work if we just make everything public  and move on, but it's not seen as a "good" programming habit. We use things like readonly  to explicitly say how we expect things to run (We don't expect this variable to be modified outside of the constructor etc). And I can definitely see in  being used in a similar sort of way.

Comparisons To “ref” (and “out”)

A comparison could be made to the ref keyword in C# (And possibly, to a lesser extent, the out keyword). The main differences are :

in  – Passes a variable in to a method by reference. Cannot be set inside the method.
ref  – Passes a variable into a method by reference. Can be set/changed inside the method.
out  – Only used for output from a method. Can (and must) be set inside the method.

So it certainly looks like the ref keyword is almost the same as in, except that it allows a variable to change its value. But to check that, let's run our performance test from earlier, but this time add in a ref scenario.
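A sketch of the extra scenario, in the same BenchmarkDotNet style as the earlier comparison (names are illustrative):

```csharp
using BenchmarkDotNet.Attributes;

public class RefBenchmark
{
    private int _value = 1;

    [Benchmark]
    public int ByRef() => DoNothingRef(ref _value);

    // Pass by reference - the method *could* change the value,
    // even though this one doesn't.
    private int DoNothingRef(ref int value) => value;
}
```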

So it’s close when it comes to performance benchmarking. However again, in  is still more explicit than ref because it tells the developer that while it’s allowing a variable to be passed in by reference, it’s not going to change the value at all.

Important Performance Notes For In

While writing the performance tests for this post, I kept running into instances where using in  gave absolutely no performance benefit whatsoever compared to passing by value. I was pulling my hair out trying to understand exactly what was going on.

It wasn't until I took a step back and thought about how in  could work under the hood, coupled with a stackoverflow question or two, that I had the nut cracked. Consider the following code :
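Reconstructing the example from the description below (struct and method names are assumptions):

```csharp
using System;

struct MyStruct
{
    public int Value;

    public void UpdateValue(int value)
    {
        Value = value;
    }
}

class Program
{
    static void SetValueTo5(in MyStruct myStruct)
    {
        // Because myStruct is an 'in' parameter, the compiler runs this
        // method call on a defensive copy of the struct.
        myStruct.UpdateValue(5);
    }

    static void Main()
    {
        var myStruct = new MyStruct();
        myStruct.UpdateValue(1);
        SetValueTo5(in myStruct);
        Console.WriteLine(myStruct.Value); // Prints 1, not 5.
    }
}
```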

What will the output be? If you guessed 1, you would be correct! So what's going on here? We definitely told our struct to set its value to 1, then we passed it by reference via the in keyword to another method, and that told the struct to update its value to 5. And everything compiled and ran happily, so we should be good, right? We should see the output as 5, surely?

The problem is, C# has no way of knowing when it calls a method (or a getter) on a struct whether that will also modify the values/state of it. What it instead does is create what’s called a “defensive copy”. When you run a method/getter on a struct, it creates a clone of the struct that was passed in and runs the method instead on the clone. This means that the original copy stays exactly the same as it was passed in, and the caller can still count on the value it passed in not being modified.

Now where this creates a bit of a jam is that if we are cloning the struct (Even as a defensive copy), then the performance gain is lost. We still get the design-time niceness of knowing that what we pass in won't be modified, but at the end of the day we may as well have passed by value when it comes to performance. You'll see in my tests, I used plain old variables to avoid this issue. If you are not using structs at all and instead using plain value types, you avoid this issue altogether.

To me I think this could crop up in the future as a bit of a problem. A method may inadvertently run a method that modifies the struct's state, and now it's running off "different" data than what the caller is expecting. I also question how this will work in multi-threaded scenarios. What if the caller goes away and modifies the struct, expecting the method to see the updated value, but the method is working off a defensive clone? Plenty to ponder (And code out to test in the future).


So will there be a huge awakening of using the in keyword in C#? I'm not sure. When Expression-bodied Members (EBD) came along I didn't think too much of them, but they are used in almost every piece of C# code these days. But unlike EBD, the in  keyword doesn't save typing or having to type cruft, so it could just be resigned to one of those "best practices" lists. Most of all I'm interested in how in  transforms over future versions of C# (Because I really think it has to). What do you think?

This article is part of a series on setting up a private nuget server. See below for links to other articles in the series.

Part 1 : Intro & Server Setup
Part 2 : Developing Nuget Packages
Part 3 : Building & Pushing Packages To Your Nuget Feed

In our previous article in this series, we looked at how we might go about setting up a nuget server. With that all done and dusted, it’s on to actually creating the libraries ourselves. Through trial and error and a bit of playing around, this article will dive into how best to go about developing the packages themselves. It will include how we version packages, how we go about limiting dependencies, and in general a few best practices to play with.

Use .NET Standard

An obvious place to begin is what framework to target. In nearly all circumstances, you are going to want to target .NET Standard (See here to know what .NET Standard actually is). This will allow you to target .NET Core, Full Framework and other smaller runtimes like UWP or Mono.

The only exception to the rule comes when you are building something especially specific for a particular runtime, for example a Windows Forms library that will have to target the full framework.

Limit Dependencies

Your libraries should be built in such a way that someone consuming the library can use as little or as much as they want. Often this shows up when you add dependencies to your library that only a select few need to use. Remember that regardless of whether someone uses that feature or not, they will need to drag that dependency along with them (And it may also cause versioning issues on top of that).

Let’s look at a real world example. Let’s say I’m building a logging library. Somewhere along the way I think that it would be handy to tie everything up using Autofac dependency injection as that’s what I mostly use in my projects. The question becomes, do I add Autofac to the main library? If I do this, anyone who wants to use my library needs to either also use Autofac, or just have a dependency against it even though they don’t use the feature. Annoying!

The solution is to create a second project within the solution that holds all our Autofac code and generate a secondary package from this project. It would end up looking something like this :
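Something like this (the project names are just for illustration):

```
MyLogger.sln
│
├── MyLogger          -> "MyLogger" nuget package (no Autofac dependency)
└── MyLogger.Autofac  -> "MyLogger.Autofac" nuget package
                         (depends on MyLogger and Autofac)
```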

Doing this means that anyone who wants to also use Autofac can reference the secondary package (Which also has a dependency on the core package), and anyone who wants to roll their own dependency injection (Or none at all) doesn’t need to reference it at all.

I used to think it was annoying when you searched for a nuget package and found all these tiny granular packages. It did look confusing. But any trip to DLL Hell will make you thankful that you can pick and choose what you reference.

Semantic Versioning

It's worth noting that while many Microsoft products use a 4 point versioning system (<major>.<minor>.<build>.<revision>), Nuget packages work off what's called "Semantic Versioning", or SemVer for short. It's a 3 point versioning system that looks like <major>.<minor>.<patch>. And it's actually really simple and makes sense when you look at how version points are incremented.

<major> is updated when a new feature is released that breaks backwards compatibility. For example if someone upgrades from version 1.2.0 to 2.0.0, they will know that their code very likely will have to change to accommodate the upgrade.

<minor> is updated when a new feature is released that is backwards compatible. A user upgrading a minor version should not have to touch their existing code for things to continue working as per normal.

<patch> is updated for bug fixes. Bug fixes should always be backwards compatible. If they aren’t able to be, then you will be required to update the major version number instead.

We’ll take a look at how we set this in the next section, but it’s also important to point out that there will essentially be 3 version numbers to set. I found it’s easier to just set them all to exactly the same value unless there is a really specific reason to not do so. This means that for things like the “Assembly Version” where it uses the Microsoft 4 point system, the last point is always just zero.

Package Information

In Visual Studio, if you right click on a project, select properties, then select "Package" on the left hand menu, you will be given a list of properties that your package will take on. These include versions, the company name, authors and descriptions.

You can set them here, but they actually then get written to the csproj. I personally found it easier to edit the project file directly. It ends up looking a bit like this :
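A sketch of the relevant csproj section (the values are placeholders):

```xml
<PropertyGroup>
  <TargetFramework>netstandard2.0</TargetFramework>
  <Version>1.0.0</Version>
  <AssemblyVersion>1.0.0.0</AssemblyVersion>
  <FileVersion>1.0.0.0</FileVersion>
  <Company>My Company</Company>
  <Authors>My Name</Authors>
  <Description>A description of my package.</Description>
</PropertyGroup>
```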

If you edit the csproj directly, then head back to Visual Studio properties pane to take a look, you’ll see that they have now been updated.

The most important thing here is going to be all the different versions seen here. Again, it’s best if these all match with “Version” being a SemVer with 3 points of versioning.

For a full list of what can be set here, check the official MS Documentation for csproj here.

Shared Package Information/Version Bumping

If we use the example solution above in the section for "Limit Dependencies", where we have multiple projects that we want to publish nuget packages for, then we will likely want to share various metadata like Company, Copyright etc among all projects from a single source file, rather than having to update each one by one. This might extend to version bumping too, where for simplicity's sake, when we release a new version of any package in our solution, we want to bump the version of everything.

My initial thought was to use a linked “AssemblyInfo.cs” file. It’s where you see things like this :
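That is, the classic assembly attributes (values here are placeholders):

```csharp
// AssemblyInfo.cs
using System.Reflection;

[assembly: AssemblyCompany("My Company")]
[assembly: AssemblyCopyright("Copyright © 2018")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
```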

Look familiar? The only issue was that there seemed to be quite a few properties that couldn't be set using this, for example the "Author" metadata. And secondly, when publishing a .NET Standard nuget package, it seemed to completely ignore all "version" attributes in the AssemblyInfo.cs, meaning that every package was labelled as version 1.0.0.

By chance I had heard about a feature of “importing” shared csproj fields from an external file. And as it turned out, there was a way to place a file in the “root” directory of your solution, and have all csproj files automatically read from this.

The first step is to create a file called "Directory.Build.props" in the root of your solution. Note that the name actually is "Directory"; do not replace this with the actual directory name (For some stupid reason I thought this was the case).

Inside this file, add in the following :
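A minimal sketch of the props file (values are placeholders):

```xml
<Project>
  <PropertyGroup>
    <Company>My Company</Company>
    <Authors>My Name</Authors>
    <Copyright>Copyright © 2018</Copyright>
    <Version>1.0.0</Version>
    <AssemblyVersion>1.0.0.0</AssemblyVersion>
    <FileVersion>1.0.0.0</FileVersion>
  </PropertyGroup>
</Project>
```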

And by magic, all projects underneath this directory will overwrite their metadata with the data from this file. Using this, we can bump versions or change the copyright notice of all child projects without having to go one by one. And on top of that, it still allows us to put a description in each individual csproj too!


The hardest part about all of this was probably the shared assembly info and package information. It took me quite a bit of trial and error with the AssemblyInfo.cs file to work out it wasn’t going to happen, and surprisingly there was very little documentation about using a .props file to solve the issue.

The final article in the series will look into how we package and publish to our nuget feed. Check it out here!

This article is part of a series on the OWASP Top 10 for ASP.net Core. See below for links to other articles in the series.

A1 – SQL Injection
A2 – Broken Authentication and Session Management
A3 – Cross-Site Scripting (XSS)
A4 – Broken Access Control
A5 – Security Misconfiguration (Coming Soon)
A6 – Sensitive Data Exposure (Coming Soon)
A7 – Insufficient Attack Protection (Coming Soon)
A8 – Cross-Site Request Forgery (Coming Soon)
A9 – Using Components with Known Vulnerabilities (Coming Soon)
A10 – Underprotected APIs (Coming Soon)

Broken Access Control is a new entry into the OWASP Top 10. In previous years there were concepts called "Insecure Direct Object References" and "Missing Function Level Access Controls" which have sort of been bundled all together with a couple more additions. This entry reminds me quite a bit of Broken Authentication because it's very broad and isn't that specific. It's more like a mindset, with examples given of things you should look out for, rather than a very particular attack vector like SQL Injection. Nonetheless, we'll go through a couple of examples given to us by OWASP and see how they work inside .NET Core.

What Is Broken Access Control

Broken Access Control refers to the ability for an end user, whether through tampering with a URL, cookie, token, or the contents of a page, to essentially access data that they shouldn't have access to. This may be a user being able to access somebody else's data, or worse, the ability for a regular user to elevate their permissions to an admin or super user.

The most common vulnerability that I've seen is code that verifies that a user is logged in, but not that they are allowed to access a particular piece of data. For example, given a URL like www.mysite.com/orders/orderid=900 , what would happen if we changed that orderid by 1? What if a user tried to access  www.mysite.com/orders/orderid=901 ? There will likely be code to check that a user is logged in (Probably a simple Authorize attribute in .NET Core). But how about specific code to make sure that the order actually belongs to that particular customer? It's issues like this that make up the bulk of Broken Access Control.

Where I start to veer away from the official definition from OWASP is that the scope starts exploding after this. I think when you try and jam too much under a single security headline, it starts losing its meaning. SQL Injection, for example, is very focused and has very specific attack vectors. Access Control is a much broader subject. For example, the following are all defined as "Broken Access Control" by OWASP :

  • Misconfigured or too broad CORS configuration
  • Web server directory listing/browsing
  • Backups/Source control (.git/.svn) files present in web roots
  • Missing rate limiting on APIs
  • JWT Tokens not being invalidated on logout

And the list goes on. Essentially if you ask yourself “Should a web user be able to access this data in this way”, and the answer is “no”, then that’s Broken Access Control in a nutshell.

Insecure Direct Object References

As we've already seen, this was probably the grandfather of Broken Access Control in the OWASP Top 10. Direct object references are ids or reference variables that are able to be changed by an end user, who can then retrieve records that they should not be privy to.

As an example, let’s say we have the following action in a controller :
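A sketch of the vulnerable endpoint (the repository name is an assumption):

```csharp
[Authorize]
[HttpGet("{orderId}")]
public Order Get(int orderId)
{
    // Returns whatever order matches the id - no ownership check at all.
    return _orderRepository.GetById(orderId);
}
```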

Now this is somewhat authorized, as seen by the Authorize attribute. This means a completely anonymous user can't hit this endpoint, but it doesn't stop one logged-in user accessing a different user's orders. Any id passed into this endpoint will return an order object, no questions asked.

Let’s modify it a bit.
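A sketch of the safer version (the claim name and repository are assumptions):

```csharp
[Authorize]
[HttpGet("{orderId}")]
public IActionResult Get(int orderId)
{
    // Assumes the customer's id was stored as a claim at login time.
    var customerId = int.Parse(User.FindFirst("CustomerId").Value);

    var order = _orderRepository.GetById(orderId);
    if (order == null || order.CustomerId != customerId)
    {
        // Not found, or not yours - reject without leaking which.
        return NotFound();
    }

    return Ok(order);
}
```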

This will require a bit of imagination, but hopefully not too much!

Imagine that when we log in our user, we also store their customer id in a claim. That means we forever have access to exactly who is logged in when they try and make requests, not just that they are logged in in general. Next, on each Order we store the CustomerId of exactly who the order belongs to. Finally, when someone tries to load an order, we get the id of the customer logged in and compare that to the CustomerId of the actual order. If they don't match, we reject the request. Perfect!

An important thing to note is that URLs aren't the only place this can happen. I've seen hidden fields, cookies, and javascript variables all be susceptible to this kind of abuse. The rule should always be that if it's in the browser, a user can modify it. Server side validation and authorization is king.

One final thing to mention when it comes to direct object references is that "security through obscurity" is not a solution. Security through obscurity refers to a system being secure because of some "secret design" or hidden implementation being somewhat "unguessable" by an end user. To illustrate, let's change our initial get order method to :
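That is, the same endpoint keyed by a Guid instead of an int:

```csharp
[Authorize]
[HttpGet("{orderId}")]
public Order Get(Guid orderId)
{
    // Harder to guess, but still no ownership check.
    return _orderRepository.GetById(orderId);
}
```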

So now our OrderId is a Guid. I've met many developers that think this is now secure. How can someone guess a Guid? It is not feasible to enumerate all guids against a web endpoint (If it was feasible, then we would have many more clashes), and this much is true. But what if someone did get hold of that guid from a customer? I've seen things like a customer sending a screenshot of their browser to someone else, with their Order Guid in plain sight. You can't predict how this Guid might leak in the future, whether maliciously or through carelessness. That's why, again, server side authorization is always king.

CORS Misconfiguration

CORS refers to limiting which websites can place javascript on their page to then call your API. We have a great tutorial on how to use CORS in ASP.net Core here. In terms of OWASP, the issue with CORS is that it's all too easy to just open up your website to all requests and call it a day. For example, your configuration may look like the following :
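A sketch of the dangerous wide-open setup, using the standard ASP.net Core CORS middleware:

```csharp
// In Startup.Configure - every origin, method and header allowed.
app.UseCors(builder => builder
    .AllowAnyOrigin()
    .AllowAnyMethod()
    .AllowAnyHeader());
```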

Here you are just saying let anything through, I don’t really care. The security problem becomes extremely apparent if your API uses cookies as an authentication mechanism. A malicious website can make a users browser make Ajax calls to the API, the cookies will be sent along with the request, and sensitive data will be leaked. It’s that easy.

In simple terms, unless your API is meant to be accessed by the wider public (e.g. the data it is exposing is completely public), you should never just allow all origins.

In ASP.net Core, this means specifying which websites are allowed to make Ajax calls.
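For example (the origin is a placeholder):

```csharp
// Only origins we explicitly trust can make cross-origin calls.
app.UseCors(builder => builder
    .WithOrigins("https://www.mysite.com")
    .AllowAnyMethod()
    .AllowAnyHeader());
```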

As with all security configurations, the minimum amount of access available is always the best setting.

Directory Traversal and Dangerous Files

I lump these together because they should be no-brainers, but should always be on your checklist when deploying a site for the first time. The official line in the OWASP handbook when it comes to these is :

Seems pretty simple right? And that’s because it is.

The first is that I’ve actually never had a reason to allow directory traversal on a website. That is, if I have a URL like www.mysite.com/images/image1.jpg , a user can’t simply go to www.mysite.com/images/ and view all images in that directory. It may seem harmless at first, like who cares if a user can see all the images on my website. But then again, why give a user the ability to enumerate through all files on your system? It makes no sense when you think about it.

The second part is that metadata type files, or development only files should not be available on a web server. Some are obviously downright dangerous (Site backups could leak passwords for databases or third party integrations), and other development files could be harmless. Again it makes zero sense to have these on a web server at all as a user will never have a need to access them, so don’t do it!


While I've given some particular examples in this post, Broken Access Control really boils down to one thing: if a user can access something they shouldn't, then that's broken access control. While OWASP hasn't put out too much info on this subject yet, a good place to start is the official OWASP document for 2017 (Available here). Within .NET Core, there isn't any "inherent" protection against this sort of thing; it really comes down to always enforcing access control at a server level.

Next post we’ll be looking at a completely new entry to the OWASP Top 10 for 2017, XML External Entities (XXE).