While working on another post about implementing your own custom ILogger implementation, I came across a method on the ILogger interface that I had never actually used before. It looks something like  _logger.BeginScope("Message Here");  . Or rather, it doesn’t just take a string, it can take any type and use it as a “scope”. It intrigued me, so I got playing.

The Basics Of Scopes In ILogger

I’m not really sure on the use of “scope” as Microsoft has used it here. Scope, when talking about logging, seems to imply either the logging level, or more likely which classes use a particular logger. But scope as Microsoft has defined it in ILogger is actually to do with adding extra messaging onto a log entry. It’s somewhat metadata-ish, but it tends to lend itself more to being like a stacktrace from within a single method.

To turn on scopes, first you need to actually be using a logger that implements them – not all do. And secondly, you need to know whether that particular logger has a setting to turn them on if they are not already on by default. I know popular loggers like NLog and Serilog support scopes, but for now we are just going to use the Console logger provided by Microsoft. To turn it on, when configuring our logger we just use the “IncludeScopes” flag.

First I’ll give a bit of code to look at as it’s probably easier than me rambling on trying to explain.
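Something along these lines (a bare-bones sketch using the console logger; the exact API for switching on IncludeScopes varies a little between versions) :

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

class Program
{
    static void Main(string[] args)
    {
        var services = new ServiceCollection();
        services.AddLogging(builder =>
        {
            // IncludeScopes is what actually turns scopes on for the console logger
            builder.AddConsole(options => options.IncludeScopes = true);
        });

        var logger = services.BuildServiceProvider()
                             .GetRequiredService<ILogger<Program>>();

        using (logger.BeginScope("Scope Level 1"))
        using (logger.BeginScope("Scope Level 2"))
        {
            logger.LogInformation("Hello scoped world!");
        }
    }
}
```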

Now when we run this, we end up with a log like so :

So for this particular logger, what it does is append the scope before the log message. But it’s important to note that the scope messages are stored separately; it just so happens that this logger’s only way of reporting is to push it all into text. You can see the actual code for the Console logger by Microsoft here : https://github.com/aspnet/Logging/blob/dev/src/Microsoft.Extensions.Logging.Console/ConsoleLogScope.cs. So we can see it basically appends scopes in a hierarchical fashion and doesn’t actually turn them into a string message until we actually log a message.

If for example you were using a Logger that wrote to a database, you could have a separate column for scope data rather than just appending it to the log message. Same thing if you were using a logging provider that allowed some sort of metadata field etc.

Note :  I know that the scope message is doubled up. This is because of the way the ConsoleLogger holds these messages. It actually holds them as a key value pair of string and object. The way it gets the key is by calling “ToString()” on the message… which basically calls ToString() on a string. It then writes out the Key followed by the Value, hence the doubling up. 

So It’s Basically Additional Metadata?

Yes and no. The most important thing about scopes is their hierarchy. Every library I’ve looked at that implements scopes does so in a way that there is a very clear hierarchy to the messages. Plenty of libraries allow you to add additional information with an exception/error message, but through scopes we can determine the general area our code got up to, and the data around it, without having to define private variables to hold this information.

Using Non String Values

The interesting thing is that BeginScope actually accepts any object as a scope, but it’s up to the actual Logger to decide how these logs are written. So for example, if I pass a Dictionary of values to a BeginScope statement, some libraries may just ToString() anything that’s given to them, others may check the type and decide that it’s better represented in a different format.
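For example, something like this (the dictionary keys here are just made up for illustration) :

```csharp
// A structured scope rather than a plain string
var orderDetails = new Dictionary<string, object>
{
    ["CustomerId"] = 123,
    ["OrderId"] = "ABC-001"
};

using (_logger.BeginScope(orderDetails))
{
    _logger.LogInformation("Processing order");
}
```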

If we take a look at something like NLog, we can see how they handle scopes here : https://github.com/NLog/NLog.Extensions.Logging/blob/master/src/NLog.Extensions.Logging/Logging/NLogLogger.cs#L269. The rough shape of that code is :
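(This is a simplified sketch rather than NLog’s exact code; the helper method names are placeholders for NLog’s internals.)

```csharp
public IDisposable BeginScope<TState>(TState state)
{
    if (state == null)
        throw new ArgumentNullException(nameof(state));

    // If the scope is a list of key/value pairs, split it open and push each
    // property individually so it can be reported in a structured way
    if (state is IEnumerable<KeyValuePair<string, object>> scopeProperties)
        return PushScopeProperties(scopeProperties); // placeholder for NLog's internal method

    // Anything else (most likely a string) gets pushed as-is
    return PushScopeState(state); // placeholder for NLog's internal method
}
```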

We can see it checks if the state message is a type of IEnumerable<KeyValuePair<string, object>> and then throws it off to another method (Which basically splits open the list and creates a nicer looking message). If it’s anything else (Likely a string), we just push the message as is. There isn’t really any defined way provided by Microsoft on how different types should be handled, it’s purely up to the library author.

How About XYZ Logging Library

Because logging scopes are pretty loosely defined within the ASP.net Core framework, it pays to always test how scopes are handled within the particular logging library you are using. There is no way to say for certain that scopes are even supported, or that they will be output in a way that makes any sense. It’s also important to look at the library code or documentation, and see if there is any special handling for scope types that you can take advantage of.

There is a current proposal that’s getting traction (by some) to make it into C# 8: “default interface methods”. Because at the time of writing, C# 8 is not out, nor has this new language feature even “definitely” made it into that release, this will be more about the proposal and my two cents on it rather than any specific tutorial on using them.

What Are Default Interface Methods?

Before I start, it’s probably worth reading the summary of the proposal on Github here : https://github.com/dotnet/csharplang/issues/288. It’s important to remember that this is just a proposal, so not everything described will actually be implemented (or implemented in the way it’s initially being described). Take everything with a grain of salt.

The general idea is that an interface can provide a method body. So :
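Sketched against the proposal (the syntax could still change), it looks something like :

```csharp
interface IMyInterface
{
    // A normal interface member - implementers must provide this
    void MyNormalMethod();

    // A "default" method - the body lives right on the interface
    void MyDefaultMethod()
    {
        Console.WriteLine("Hello from the interface!");
    }
}
```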

A class that implements this interface does not have to implement methods where a body has been provided. So for example :
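(Continuing the same sketch.)

```csharp
class MyClass : IMyInterface
{
    // We only have to implement the member that has no body.
    // MyDefaultMethod comes along for free from the interface.
    public void MyNormalMethod()
    {
        Console.WriteLine("Hello from the class!");
    }
}
```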

The interesting thing is that the class itself does not have the ability to run methods that have been defined and implemented only on the interface; you have to be working with the interface type to call them. So :
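(Still the same hypothetical MyClass / IMyInterface names.)

```csharp
var myClass = new MyClass();
myClass.MyNormalMethod();        // Fine
// myClass.MyDefaultMethod();    // Compile error - not visible on the class itself

IMyInterface myInterface = myClass;
myInterface.MyDefaultMethod();   // Works once we are talking to the interface
```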

The general idea is that this will now allow you to do some sort of multiple inheritance of behaviour/methods (Which was previously unavailable in C#).

There are a few other things that then are required (Or need to become viable) when opening this up. For example allowing private level methods with a body inside an interface (To share code between default implementations).

Abstract Classes vs Default Interface Methods

While it does start to blur the lines a bit, there are still some pretty solid differences. The best quote I heard about it was :

Interfaces define behaviour, classes define state.

And that does make some sense. Interfaces still can’t define a constructor, so if you want to share constructor logic, you will need to use an abstract/base class. An interface also cannot define class level variables/fields.

Classes also have the ability to define the accessibility of their members (for example making a method protected), whereas with an interface everything is public. Although part of the proposal is extending interfaces with things like the static keyword and protected, internal etc (I really don’t agree with this).

Because the methods themselves are only available when you cast back to the interface, I can’t really see it being a drop in replacement for abstract classes (yet), but it does blur the lines just enough to ask the question.

My Two Cents

This feels like one of those things that just “feels” wrong. And that’s always a hard place to start because it’s never going to be black and white. I feel like interfaces are a very “simple” concept in C# and this complicates things in ways where I don’t see a huge benefit. It reminds me a bit of the proposal of “Primary Constructors” that was going to make it into C# 6 (See more here : http://www.alteridem.net/2014/09/08/c-6-0-primary-constructors/). Thankfully that got dumped, but it was bolting on a feature that I’m not sure anyone was really clamoring for.

But then again, there are some merits to the conversation. One “problem” with interfaces is that you have to implement every single member. This can at times lock down any ability to extend the interface because you will immediately blow up any class that has already inherited that interface (Or it means your class becomes littered with throw new NotImplementedException() ).

There are even times when you implement an interface for the first time, and you have to pump out 20 or so method implementations that are almost boilerplate in content. A good example given in the proposal is that of IEnumerable. Each time you implement this you are required to give yourself RSI by typing out every detail. Whereas if there were default implementations, there might be only a couple of methods that you truly want to make your own, and the default implementations do the job just fine for everything else.

All in all, I feel like the proposal should be broken down and simplified. It’s almost like an American political bill in that there seems to be a lot in there that’s causing a bit of noise (allowing static, protected members on interfaces etc). It needs to be simplified down and just keep the conversation on method bodies in interfaces and see how it works out because it’s probably a conversation worth having.

What do you think? Drop a comment below.

In a previous post I talked about building cross platform GUI applications using Eto.Forms. One thing that came up was that to run the applications on Linux, you needed to have Mono installed and to run mono with the executable as an input argument. That’s probably fine if you are building a web application and it’s going to be on limited servers, but if you are distributing a desktop application you may not want to have to guide a user into downloading and using mono themselves.

Luckily, there is actually a way of bundling mono with your application that removes the need for a user to have to download, install, and then run mono. This means you can distribute your application to a Linux machine without any worry about prerequisites. Now I know this isn’t really .NET Core, but one of the most important things about Core is the cross platform-ness. And since Core doesn’t have desktop application support, I may as well write a little bit on how to make that happen using Mono.

Setup and Install

The craziest thing about looking up how to use Mkbundle on Windows is that there is very little information about it from the past 5 years. Everything I found was from 2013 or earlier, and it was very outdated. It often involved downloading Cygwin and fiddling with paths and environment variables until things just sort of fell into place. To be fair, there comes a time in every Windows developer’s life when they resign themselves to the fact they will have to install Cygwin for whatever flaky piece of software they need to use, but today is not that day!

Head over to the Mono website and download the latest version for your PC : https://www.mono-project.com/download/stable/

I personally went with the 32 bit, just because everything I read out there was using that version and I didn’t want to run into an issue that “needed” the 32 bit version after all.


This part is important so I’ve put it in bold and put it between horizontal rules just to make sure you read this part! 

After install comes an extremely important step. This might sound like a little side bar but please, if this is your first time trying mkbundle on Windows, you will thank me later. If you try and run MKBundle right away, you will get an error that likely looks like this :

ERROR: The SDK location does not contain a C:\Program Files (x86)\Mono/bin/mono runtime

I smacked my head against the wall for an age with this. Eventually I actually tracked down the source code for mkbundle on Github. And found this line here : https://github.com/mono/mono/blob/master/mcs/tools/mkbundle/mkbundle.cs#L536

What it’s trying to do is check that you have a *file* called “C:\Program Files (x86)\mono\bin\mono”. Now I bold that part about it being a file, because it’s not looking for a directory. On non Windows systems, files don’t need to have extensions (like .exe), but on Windows they typically do. So what we actually need to do is make sure that this “test” passes. And it’s simple. Go to your mono/bin folder. In there, you should find a mono.exe. Make a *copy* of this file, and remove the extension so it is simply called “mono”. And it should sit side by side with your existing exe.

Now this should satisfy the file check and mono should run fine. I have logged a ticket on Github with Mono around this issue here : https://github.com/mono/mono/issues/7731 . So if you have the same problem or you’re coming from Google after smacking your head against the desk repeatedly with this issue, jump on and add your 2 cents!


Fetching The Correct Mono Runtime

OK with that out of the way, after installing everything you should now have a “Mono Command Prompt” available to you on your machine. Just type “Mono” in your start menu and it should pop up!

This works essentially like a regular command prompt, with the Mono commands already built in.

Now the next step is a little tricky. It’s not like you can run mkbundle and suddenly you have an executable for every OS in existence. Instead you need to fetch the runtime for the particular OS you want your application to run on, and bundle it for that particular runtime. If we package an exe with mkbundle right now then the only people that can run that application are people with the same OS (In our case Windows). This is not as dumb as you might first think. If we did do this, it would mean we could distribute an application that could run on Windows without Mono (Obviously), but more importantly without .NET Framework. There is definitely times where this could come in handy, but for now, we want to build for Linux, so let’s do that.

There are supposed to be commands to fetch and download various runtimes to your machine right from the command prompt. Of course, Mono being a flaky POS at times (Sorry, it’s getting irritating working through these issues), the command doesn’t work at all. Instead, if we run the command that “should” fetch available runtimes we get :

System.Net.WebException: Error: TrustFailure (The authentication or decryption has failed.)

And if instead we try the command that supposedly will download the runtime we want (For example we know the runtime signature so just slam that in), we will get :

Failure to download the specified runtime from https://download.mono-project.com/runtimes/raw/

So, we have to do everything manually.

First go to this URL : https://download.mono-project.com/runtimes/raw/. You need to download the runtime for the particular OS you want to compile to. Once downloaded, you need to extract this to a particular directory in your documents folder. If I downloaded mono 5.10.0 for Ubuntu, then the directory that I extract to should be :  C:\Users\myuser\Documents\.mono\targets\mono-5.10.0-ubuntu-16.04-x64/ .

Once this has been done, we should then be able to run a command within the mono command prompt :  mkbundle --local-targets . The output of this should be all “targets” we have available to us.

mkbundle Command

Navigate to your application’s directory that you want to “bundle”. In my case I’ve created a simple “HelloWorldConsole” application that does nothing but print out “Hello Mono World”. Inside this directory I run the following command  mkbundle HelloWorldConsole.exe --simple -o HelloWorldBundleUbuntu --cross mono-5.10.0-ubuntu-16.04-x64  where HelloWorldConsole.exe is my built console app, the -o flag is what I want the output filename to be, and the --cross flag tells us which runtime we want to compile for.

Looks good to me! But let’s test it. Just for comparison sake, I also run the following command :  mkbundle HelloWorldConsole.exe --simple -o HelloWorldBundleDefault . This will give us a bundled application but it will be using our default mono runtime (Which in this case is Windows). This will be important later for showing the differences.

Just quickly, I also want to show the size difference between our original application, and our bundled app.

 

So in terms of bundling, using Windows mono we add about 4mb. For the Ubuntu bundle, we added 8mb. Your guess is as good as mine when it comes to why the huge size difference, but the main thing is that adding 4 – 8mb actually isn’t that bad when you consider a user now doesn’t have to worry about downloading mono themselves. That’s actually a relatively small bundle when you think about other assets that may be going along with the app like sound, sprites, images etc.

Let’s go ahead and copy these two bundles to our Ubuntu machine for testing.

First we will take a look at our default bundle. Now this shouldn’t work at all because it’s been built for Windows. We even have a command that we can use to check the file type. So let’s try that.

So straight away it’s telling us this is for Windows. And when we run it…

Yep, so it ain’t happening. Let’s instead try our Ubuntu bundle. First the file command to see if it recognizes that it’s a different sort of application :

So even though under the hood it’s actually the exact same C# executable, it’s been compiled correctly for Ubuntu. And when we run it :

Yes! We did it! It only took me a day of headaches to work out every little issue with mkbundle to get a HelloWorld bundle working on Ubuntu! But we did it!

Final Notes

One final thing I want to say is Mono/mkbundle is pretty crap to work with on Windows. I know there is definitely going to be someone out there that hates me for saying that, but it truly is. Some of these issues that I ran into (Notably the fact it tries to check for an extension-less mono exe when running mkbundle) I can see stackoverflow questions from over a year ago having the exact same problem. That means that these problems are nothing new.

Even the documentation was rife with issues on Windows. Simply put, the documented steps didn’t work. In some ways I can understand that Mono is built with non-Windows machines in mind, but it is shockingly bad for a product that Microsoft has now taken under its wing.

While helping a new developer get started with ASP.net Core, they ran into an interesting exception :

InvalidOperationException: Cannot consume scoped service from singleton.

I found it interesting because it’s actually the Service DI of ASP.net Core trying to make sure you don’t trip yourself up. Although it’s not foolproof (They still give you enough rope to hang yourself), it’s actually trying to stop you making a classic DI scope mistake. I thought I would try and build up an example with code to first show them, then show you here! Of course, if you don’t care about any of that, there is a nice TL;DR right at the end of this post!

The Setup

So there is a little bit of code setup before we start explaining everything. The first thing we need is a “Child” service :
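Something like this (a static counter is one simple way to watch how many instances get created) :

```csharp
public class ChildService
{
    public static int CreationCount { get; private set; }

    public ChildService()
    {
        CreationCount++;
    }
}
```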

The reason we have a property here called “CreationCount” is because later on we are going to test if a service is being created on a page refresh, or it’s re-using existing instances (Don’t worry, it’ll make sense!)

Next we are going to create *two* parent classes. I went with a Mother and Father class, for no other reason than the names seemed to fit (They are both parent classes). They also will have a creation count, and reference the child service.
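Again, something along these lines :

```csharp
public class MotherService
{
    public static int CreationCount { get; private set; }

    private readonly ChildService _childService;

    public MotherService(ChildService childService)
    {
        _childService = childService;
        CreationCount++;
    }
}

public class FatherService
{
    public static int CreationCount { get; private set; }

    private readonly ChildService _childService;

    public FatherService(ChildService childService)
    {
        _childService = childService;
        CreationCount++;
    }
}
```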

Finally, we should create a controller that simply outputs how many times the services have been created. This will help us later in understanding how our service collection is created and, in some circumstances, shared.
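A controller along these lines does the trick (the output format here is just for illustration) :

```csharp
public class HomeController : Controller
{
    // Taking the services as dependencies is enough to force them to be created
    public HomeController(MotherService mother, FatherService father)
    {
    }

    public IActionResult Index()
    {
        return Content(
            $"Mother created {MotherService.CreationCount} time(s). " +
            $"Father created {FatherService.CreationCount} time(s). " +
            $"Child created {ChildService.CreationCount} time(s).");
    }
}
```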

The Scoped Service Problem

The first thing we want to do, is add a few lines to the ConfigureServices method of our startup.cs. This first time around, all services will be singletons.
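Something like the following (the later examples in this post are identical except that AddSingleton gets swapped out for AddTransient or AddScoped) :

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddSingleton<ChildService>();
    services.AddSingleton<MotherService>();
    services.AddSingleton<FatherService>();
}
```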

Now, let’s load our page and refresh if a few times and see what the output is. The results of 3 page refreshes look like so :

So this makes sense. A singleton is one instance per application, so no matter how many times we refresh the page, it’s never going to create a new instance.

Let’s change things up, let’s make everything transient.

And then we load it up 3 times.

It’s what we would expect. The “parent” classes go up by 1 each time. The child actually goes up by 2 because it’s being “requested” twice per page load. One for the “mother” and one for the “father”.

Let’s instead make everything scoped. Scoped means that a new instance will be created essentially per page load (At least, that’s the “scope” within ASP.net Core).

And now let’s refresh our page 3 times.

OK this makes sense. Each page load, it creates a new instance for each. But unlike the transient example, our child creation only gets created once per page load even though two different services require it as a dependency.

Right, so we have everything as scoped. Let’s try something. Let’s say that we decide that our parent layer should be singleton. There could be a few reasons we decide to do this. There could be some sort of “cache” that we want to use. It might have to hold state etc. The one reason we shouldn’t be making it a singleton is performance. Very, very rarely do I actually see any benefit from changing services to singletons, and the eventual screw ups because you are making things un-intuitively singletons are probably worse in the long run.

We change our ConfigureServices method to :
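So, roughly :

```csharp
services.AddScoped<ChildService>();

// The "parents" are now singletons, but they depend on a scoped service
services.AddSingleton<MotherService>();
services.AddSingleton<FatherService>();
```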

We hit run and….

So because our ChildService is scoped, but the FatherService is singleton, it’s not going to allow us to run. So why?

If we think back to how our creation counts worked. When we have a scoped instance, each time we load the page, a new instance of our ChildService is created and inserted in the parent service. Whereas when we do a singleton, it keeps the exact same instance (Including the same child services). When we make the parent service a singleton, that means that the child service is unable to be created per page load. ASP.net Core is essentially stopping us from falling in this trap of thinking that a child service would be created per page request, when in reality if the parent is a singleton it’s unable to be done. This is why the exception is thrown.

What’s the fix? Well, either the parent needs to be scoped or transient, or the child needs to be a singleton itself. The other option is to use a service locator pattern, but this is not really recommended. Typically when you make a class/interface singleton, it’s done for a very good reason. If you don’t have a reason, make it transient.

The Singleton Transient Trap

The interesting thing about ASP.net Core catching you from making a mistake when a scoped instance is within a singleton, is that the same “problem” will arise from a transient within a singleton too.

So to refresh your memory, when we have all our services as transient :

We load the page 3 times, and we see that we create a new instance every single time the class is requested.

So let’s try something else. Let’s make the parent classes singleton, but leave the child as a transient.
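Something like :

```csharp
services.AddTransient<ChildService>();

services.AddSingleton<MotherService>();
services.AddSingleton<FatherService>();
```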

So we load the page 3 times. And no exception but…

This is pretty much the same as having the child as scoped with the parents as singletons. It’s still going to give us some unintended behaviour because the transient child is not going to be created “every time” as we might first think. Sure, it will be created every time it’s “requested”, but that will only be twice (Once for each of the parents).

I think the logic behind this not throwing an exception, but scoped will, is that transient is “every time this service is requested, create a new instance”, so technically this is correct behaviour (Even though it’s likely to cause issues). Whereas a “scoped” instance in ASP.net Core is “a new instance per page request” which cannot be fulfilled when the parent is singleton.

Still, you should avoid this situation as much as possible. It’s highly unlikely that having a parent as a singleton and the child as transient is the behaviour you are looking for.

TL;DR;

A Singleton cannot reference a Scoped instance.

 

Astute observers will note that .NET Core has been out for quite some time now, with no “GUI” option for building applications. This seems rather deliberate, with Microsoft intending to focus on web applications rather than desktop apps. This is also coupled with the fact that creating a cross platform GUI library is easier said than done. Creating something that looks good on Windows, Mac and Linux is no easy feat.

And that’s where Eto.Forms jumps in. It’s a GUI library that uses the native toolkit of each OS/Platform through a simple C# API. Depending on where it is run, it will use WPF, MonoMac or GTK#. It isn’t necessarily using .NET Core (in fact it definitely isn’t, it uses Mono on non-Windows platforms), but because one of the main reasons to switch to Core is its ability to be cross platform and work on non-Windows OS, I thought I would have a little play with it.

In this first post, we will just be working out how to do a simple test application. In future posts, we will work out how to run platform specific code and how better to deploy our application in a way that doesn’t involve a tonne of dependencies being installed on the machine (For example bundling mono with your application). I’ve spent quite a bit of time tinkering with this and so far, so good.

Setup Within Visual Studio

When using Visual Studio, we want to take advantage of code templates so that our “base” solution is created for us.

Inside Visual Studio, go to Tools -> Extensions and Updates. Hit “Online” on the left hand side, then type in “Eto.Forms” in your search box.

Go ahead and install the Eto.Forms Visual Studio Addin. It’s highly likely after install you will also need to restart Visual Studio to complete the installation.

When you are back in Visual Studio, go ahead and create a new project. You should now have a new option to create an “ETO Forms Project”.

After selecting this, you are going to be given a few options on the project. The first is whether you want all 3 platforms to be within a single application, or a separate application for each. For the purpose of this demo we are going to go ahead and do 3 separate projects. The only reason we do this is so that building becomes easier to “see” because you have a different bin folder for each platform. As a single project, everything gets built into the same bin folder so it just looks a bit overwhelming.

Next you are given options on how you want to define your layout. And here’s where you might tune out. If you are expecting some sort of WYSIWYG designer, it ain’t happening. You can define everything with C# code (Sort of like how Winforms would generate a tonne of code in the background for you), you can use XAML, or you can use JSON. Me personally, I’m a WinForm kinda guy so I prefer defining the forms in C# code. So I’m going to go ahead and use that in this example. I can definitely see in the future someone building a tool on top that simply generates the underlying code, but for now, you’ll have to live with designing by hand.

Now we should be left with 4 projects. The first being our “base” project. This defines the forms and shared code no matter which platform we actually end up running on. The other 3 projects are for Linux, Mac and Windows respectively. As I said earlier, if we select the option to have a “single” project, we actually end up with two projects. One that contains shared code, and one that actually launches the thing (But builds for all 3 platforms).

In each of these .csproj files, you will find a line like :

Which basically tells it which platform to build for. And again, you can combine them within the same project to build for two different platforms within the single project.

Something that I was super surprised about is that there is no mac requirement to do a full build of the mac application. I wasn’t really up to date with how apps within MacOS are typically built, but I thought if they were anything like iOS you would need a mac just to deploy the thing. No such requirement here (Although I suspect you will have  one anyway if you actually intend to develop mac applications).

One big thing. When I actually did all of this, there was a whole host of issues actually getting the project to run. Weirdly enough, updating Visual Studio itself was enough to get everything up and running (You need to do this *before* you create the project). So ensure you are on the latest version of VS. 

So how does it actually look when run? Let’s put Windows and Linux side by side.

Definitely some interesting things right off the bat. Mostly that on Linux, we don’t really use Menu Bars, instead they are similar to a Mac where along the top of the window you have your “File” option etc. Also that the sizing is vastly different between the two, both in height and width! The main point is, we managed to build a GUI application sharing the exact same code, and have it run on Windows and Linux! Cool!

One final thing, you can use Visual Studio to add new forms/components instead of typing them up manually. In reality, it doesn’t really do that much boiler plate for you, but it’s there if you need it. Just right click your shared code project, and select Add New Item as you typically would. You should have a new option of adding an “Eto.Forms View”. From here you are given options on what you are trying to create (A panel, dialog or form), and how you want it to be designed (Code, XAML, Json etc).

Setup From A Terminal/Command Prompt

Alright, so you are one of “those” people that want to use a terminal/command prompt everywhere. Well that’s ok, Eto has you covered! In reality because there is no WYSIWYG designer, there isn’t any great need for Visual Studio if you prefer to use VS Code. Realistically the only difference is how you create the initial project template (Since Visual Studio extensions are going to be off the cards).

The first thing we are going to do is install the Eto.Forms templates. To do that, we run a specific dotnet command in our terminal/command prompt.

To see all available options to us, we can run :

But we are pros and know exactly what we want. So we run :

What this does is create a solution in the current directory (Named after the folder name). It creates a solution for us because of the -sln flag. The -m flag tells us which “mode” we want to use for designing our forms (In our case we want to use code). And finally -s says to create separate projects for each platform. And that’s it! We have a fully templated out project that is essentially identical to the one we created in Visual Studio.

One more thing, you should also be able to create “files” in a similar way to creating them in Visual Studio. So it does a bit of boiler plate legwork for you. To see all available options to you, you can run :

Again, it’s basically creating a class with the right inheritance and that’s about it. But it can be handy.

 

Ever since I posted a quick guide to sending email via Mailkit in .NET Core 2, I have been inundated with comments and emails asking about specific exception messages thrown by either Mailkit or the underlying .NET Core networking code. Rather than replying to each individual email, I thought I would try and collate all the errors around sending email here. It won’t be pretty to read, but hopefully if you’ve landed here from a search engine, this post will be able to resolve your issue.

I should also note that the solution to our “errors” is usually going to be a change in the way we use the library MailKit. If you are using your own mail library (Or worse, trying to open the TCP ports yourself and manage the connections), you will need to sort of “translate” the fix to be applicable within your code.

Let’s go!

Default Ports

Before we go any further, it’s worth noting the “default” ports that SMTP is likely to use, and the settings that go along with them.

Port 25
25 is typically used as a clear text SMTP port. Usually when you have SMTP running on port 25, there is no SSL and no TLS. So set your email client accordingly.

Port 465
465 is usually used for SMTP over SSL, so you will need to have settings that reflect that. However it does not use TLS.

Port 587
587 is usually used for SMTP when using TLS. TLS is not the same as SSL. Typically anything where you need to set SSL to be on or off should be set to off. This includes any setting that goes along the lines of “SSLOnConnect”. It doesn’t mean TLS is less secure, it’s just a different way of creating the connection. Instead of SSL being used straight away, the secure connection is “negotiated” after connection.

Settings that are relevant to port 587 usually go along the lines of “StartTLS” or simply “TLS”. Something like Mailkit usually handles TLS for you so you shouldn’t need to do anything extra.

Incorrect SMTP Host

System.Net.Sockets.SocketException: No such host is known

This one is an easy fix. It means that you have used an incorrect SMTP hostname (Usually just a typo). So for example using smp.gmail.com instead of smtp.gmail.com. It should be noted that it isn’t indicative that the port is wrong, only the host. See the next error for port issues!

Incorrect SMTP Port

System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: A socket operation was attempted to an unreachable network [IpAddress]:[Port]

Another rather simple one to fix. This message almost always indicates you are connecting to a port that isn’t open. It’s important to distinguish that it’s not that you are connecting with SSL to a non SSL port or anything along those lines, it’s just an out and out completely wrong port.

It’s important that when you log the exception message of this error, you check what port it actually says in the log. I can’t tell you how many times I “thought” I was using a particular port, but as it turned out, somewhere in the code it was hardcoded to use a different port.

Another thing to note is that this error will manifest itself as a “timeout”. So if you try and send an email with an incorrect port, it might take about 30 seconds for the exception to actually surface. This is important to note if your system slows to a crawl as it gives you a hint of what the issue could be even before checking logs.

Forcing Non-SSL On An SSL Port

MailKit.Net.Smtp.SmtpProtocolException: The SMTP server has unexpectedly disconnected.

I want to point out this is a MailKit exception, not one from .NET. In my experience this exception usually pops up when you set Mailkit to not use SSL, but the port itself requires SSL.

As an example, when connecting to Gmail, the port 465 is used for SSL connections. If I try and connect to Gmail with the following Mailkit code :
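Something along these lines (a cut-down sketch; Gmail’s hostname is just the example here) :

```csharp
using (var client = new SmtpClient())
{
    // 465 is an SSL port, but the last parameter is telling MailKit not to use SSL
    client.Connect("smtp.gmail.com", 465, false);
    // ... authenticate and send ...
}
```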

That last parameter that’s set to false is telling Mailkit to not use SSL. And what do you know, we get the above exception. The easy fix is obviously to change this to “true”.

Alternatively, you might be getting confused about TLS vs SSL. If the port says to connect using SSL, then you should not “force” any TLS connection.

Connecting Using SSL On A TLS Port (Sometimes)

MailKit.Security.SslHandshakeException: An error occurred while attempting to establish an SSL or TLS connection.

Another Mailkit specific exception. This one can be a bit confusing, especially because the full error log from MailKit talks about certificate issues (Which it totally could be!), but typically when I see people getting this one, it’s because they are trying to use SSL when the port they are connecting to is for TLS. SSL != TLS.

So instead of trying to connect using MailKit by forcing SSL, just connect without passing in an SSL parameter.
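In other words, something like :

```csharp
// Let MailKit negotiate TLS itself rather than forcing SSL on connect
client.Connect("smtp.gmail.com", 587);
```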

I should note that you will also get this exception in Mailkit if you try and force the SSLOnConnect option on a TLS Port. So for example, the following will not work :
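That is, something like this :

```csharp
using (var client = new SmtpClient())
{
    // 587 is a TLS (STARTTLS) port, so forcing an immediate SSL handshake fails
    client.Connect("smtp.gmail.com", 587, SecureSocketOptions.SslOnConnect);
}
```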

Again, sometimes this is because the SSL connection truly is bogus (Either because the certificate is bad, expired etc), or because you are connecting to a port with no security whatsoever. But because of the rise of TLS, I’m going to say the most likely scenario is that you are trying to force SSL on a TLS port.

Another error that you can sometimes get with a similar setup is :

Handshake failed due to an unexpected packet format

This is an exception that has a whole range of causes, but the most common is forcing an SSL connection on a TLS port. If your SMTP port supports “TLS”, then do not set SSL to true.

Forcing No TLS On A TLS Port

System.NotSupportedException: The SMTP server does not support authentication.

This one is a little weird because you basically have to go right out of your way to get this exception. The only way I have seen this exception come up is when you are telling Mailkit to go out of its way to not respect any TLS negotiation it’s offered and to try and ram authentication through straight away.

So for example, the following code would cause an issue :
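Roughly speaking, something like this :

```csharp
using (var client = new SmtpClient())
{
    // SecureSocketOptions.None skips TLS negotiation entirely, so the server
    // never advertises authentication and Authenticate() throws
    client.Connect("smtp.gmail.com", 587, SecureSocketOptions.None);
    client.Authenticate("username", "password");
}
```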

Incorrect Username And/Or Password

MailKit.Security.AuthenticationException: AuthenticationInvalidCredentials: 5.7.8 Username and Password not accepted.

Goes without saying, you have the wrong username and password. This is not going to be a connection issue. You definitely have the right host and port, and the right SSL/TLS settings, but your username/password combination is wrong.

Other Errors?

Have something else going haywire? Drop a comment below!

Many years back, I actually started programming so that I could cheat at an online web browser based game (I know, I know). I started in Pascal/Delphi, but quickly moved onto C#. Back then, I was using things like HttpWebRequest/Response to load the webpage, and using raw regex to parse it. Looking back, some of my earlier apps I wrote were just a complete mess. Things have changed, and it’s now easier than ever to automate web requests using C#/.NET Core. Let’s get started.

Loading The Webpage

Let’s start by creating a simple console application. Immediately, let’s go ahead and change this to an async entry point. So from this :
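The default console template gives us the usual :

```csharp
static void Main(string[] args)
{
}
```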

We just make it async and return Task :
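So it ends up looking like :

```csharp
static async Task Main(string[] args)
{
}
```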

However if you’re using an older version of C#, you will get an error message that there is no valid entry point. Something like :

In which case you need to do something a little more complex. You need to turn your main method to look like the following :
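Something along these lines :

```csharp
static void Main(string[] args)
{
    MainAsync(args).GetAwaiter().GetResult();
}

static async Task MainAsync(string[] args)
{
    // The rest of the tutorial code goes in here
}
```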

What this essentially does is allow you to use async methods in a “sync” application. I’ll be using a proper async entry point, but if you want to go along with this second method, then you need to put the rest of this tutorial into your “MainAsync”.

For this guide I’m actually going to be loading my local news site, and pulling out the “Headline” article title and showing it on a console screen. The site itself is http://www.nzherald.co.nz/.

Loading the webpage is actually rather easy. I jam the following into my main method :
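Something like the following quick and dirty sketch using HttpClient :

```csharp
using (var httpClient = new HttpClient())
{
    var html = await httpClient.GetStringAsync("http://www.nzherald.co.nz/");
    Console.WriteLine(html);
    Console.ReadLine();
}
```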

And run it to check the results :

Great! So we loaded the page via code, and we just spurted it all out into the window. Not that interesting so far, we probably need a way to extract some content from the page.

Extracting HTML Content

As I said earlier, in the early days I just used raw regex and with a bit of trial and error got things working. But there are much much easier ways to read content these days. One of these is to use HtmlAgilityPack (HAP). HAP has been around for years, which means that while it’s battle tested and pretty bug free, its usage can be a bit “archaic”. Let’s give it a go.

To use HAP, we first need to install the nuget package. Run the following from your Package Manager Console :

The first thing we need to do is feed our HTML into an HtmlDocument. We do that like so :
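Assuming our page HTML is sitting in a string variable called html, it’s something like :

```csharp
var htmlDocument = new HtmlDocument();
htmlDocument.LoadHtml(html);
```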

Now given I wanted to try and read the title of the “headline” article on NZHerald, how do we go about finding that piece of text?

HAP (And most other parsers) work using a thing called “XPath”. It’s this sort of query language that tells us how we should traverse through an XML document. Because HTML is pretty close to XML, it’s also pretty handy here. In fact, a big part of what HAP does is try and clean up the HTML so it’s able to be parsed more or less as an XML document.

Now the thing with XPath within an HTML document is that it can be a bit trial and error. Your starting point should be pulling up your browsers dev tools, and just inspecting the divs/headers/containers around the data you want to extract.

In my case, the first thing I notice is that the headline article is contained within a div that has a class of pb-f-homepage-hero . As I scroll down the page, this class is actually used on a few other containers, but the first one in the HTML document is what I wanted. On top of that, our text is then inside a nice h3  tag within this container.

So given these facts, (And that I know a bit of XPath off the top of my head), the XPath we are going to use is :
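Reassembling it from the parts described below, it comes out to :

```
(//div[contains(@class,'pb-f-homepage-hero')]//h3)[1]
```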

Let’s break it down a bit.

//  says that we can start anywhere within the document (Not just the first line).
div  says that we are looking for a div that…
[contains(@class,'pb-f-homepage-hero')]  contains a “class” attribute that has the text ‘pb-f-homepage-hero’ somewhere within it.
//h3  says that within this div element, somewhere in there (Doesn’t have to be the first child, could be deeper), there will be an h3
()[1]  says just get me the first one.

We can actually test this by going here : http://videlibri.sourceforge.net/cgi-bin/xidelcgi. We can paste our HTML into the first window, our xpath into the second, and the results of our expression will come up below. There are a few XPath testers out there, but I like this one in particular because it allows for some pretty busted up HTML (Others try and validate that it’s valid XML).

So we have our XPath, how about we actually use it?
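Something like this :

```csharp
var headline = htmlDocument.DocumentNode
    .SelectSingleNode("(//div[contains(@class,'pb-f-homepage-hero')]//h3)[1]")
    .InnerText;

Console.WriteLine(headline);
Console.ReadLine();
```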

So really we can paste our entire XPath into a “SelectSingleNode” statement, and we use the “InnerText” property to get just the textual content within that node (Rather than the HTML).

If we let this run :

Done! We have successfully grabbed the headline news title! (Which is currently that a political party within NZ has nominated Simon Bridges as its new leader).

In a previous post we talked about the new in  keyword in C# 7.2, which got me going through the new features in C# and how they all fit together. One of these new features is the private protected  access modifier. Now you’ve heard of marking properties/variables with private , and you’ve heard of them being marked protected , but have you heard of them being marked private protected ?! Next level! Let’s take a look.

How Protected Currently Works

First, let’s look at a basic example of how the protected keyword can be used. A member that is marked with only protected can only be accessed within derived classes, but the derived class can be in any assembly. So for example, in “assembly1” we have the following code :
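Something along these lines :

```csharp
// assembly1
public class BaseClass
{
    protected string BaseMember = "Hello from the base class!";
}

public class DerivedClass : BaseClass
{
    // Fine - we inherit from BaseClass, so we can see BaseMember
    public string GetMember() => BaseMember;
}

public class ServiceClass
{
    // Compile error - BaseMember cannot be accessed on an instantiated class
    public string GetMember() => new BaseClass().BaseMember;
}
```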

In this example, our DerivedClass has no issues accessing the BaseMember because it has inherited the base class. However our ServiceClass throws a compile time error that it cannot access the member from an instantiated class. Makes sense.

In a second assembly, for argument’s sake let’s call it “assembly2”, we create the following class :
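Something like :

```csharp
// assembly2
public class DerivedClassInAssembly2 : BaseClass
{
    // Still fine - protected doesn't care which assembly the derived class is in
    public string GetMember() => BaseMember;
}

public class ServiceClassInAssembly2
{
    // Still a compile error
    public string GetMember() => new BaseClass().BaseMember;
}
```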

This essentially has the same effect as what we did in assembly1. Our derived class still works even though it’s in another assembly, because protected doesn’t care about where the derived class is actually located, only that it’s inheriting from the base class. And our service is still broken as we would expect.

So that’s protected done.

How Internal Protected Works

So a little known access modifier that has been in the C# language for a while is internal protected . Now at first glance you might think that it’s an “and” type deal, where the member is both internal and protected, i.e. it can only be accessed by derived classes within the same assembly. But actually it’s an “or” situation. You can access the member if you are inside a derived class or you are within the same assembly.

Going back to our example. Inside assembly1 if we go ahead and change the BaseMember to be internal protected :
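So the base class now looks like :

```csharp
public class BaseClass
{
    internal protected string BaseMember = "Hello from the base class!";
}
```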

We can now see that inside our ServiceClass, when we instantiate the instance of the BaseClass, we can now access the BaseMember. And our DerivedClass is still trucking along as per normal. This is where the “or” comes in.

If we head to our second assembly (assembly2). And recheck our code there :

Nothing has changed. Our DerivedClass can still access the BaseMember, but our ServiceClass that is instantiating up an instance cannot.

How Private Protected Works

Now we are onto the new stuff! Before we get into explanations, let’s keep running with our example and change our BaseClass to use private protected  and see how we go.

First assembly1 :
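(Same sketch as before, just with the modifier changed.)

```csharp
public class BaseClass
{
    private protected string BaseMember = "Hello from the base class!";
}
```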

So we have changed our BaseMember to being private protected. Our DerivedClass still doesn’t have an issue, but our ServiceClass is throwing up an error at compile time again. So far, it’s working just like how protected has always worked.

Onto assembly2 :

What have we here!? So now in our second assembly, even our DerivedClass is now throwing up an error even though it is inheriting the BaseClass. Our ServiceClass is still in strife, so nothing has changed there.

From this we can see that using private protected  means basically what I originally thought internal protected  meant. It’s protected so that only derived classes can use it, but it’s also only for use within the same assembly.

If you haven’t already gathered, I’m not a big fan of the naming. The internal  keyword already sparks off an “only within this assembly” lightbulb in a developer’s head, so to then use the private  keyword to limit a member across assemblies seems like an odd choice. But internal protected  was already taken, so it’s a bit of a rock and a hard place.

What do you think?

In Chart Form

In case you are still a little confused (And because things are always better in chart form), here’s a handy chart to make things just that little bit easier.

                     Derived Class      Instance           Derived Class          Instance
                     (Same Assembly)    (Same Assembly)    (Different Assembly)   (Different Assembly)
protected            Yes                No                 Yes                    No
internal protected   Yes                Yes                Yes                    No
private protected    Yes                No                 No                     No

 

I’ve recently been playing around with all of the new features packaged into C# 7.2. One such feature that piqued my interest because of its simplicity was the “in” keyword. It’s one of those things that you can get away with never using in your day to day work, but makes complete sense when looking at language design from a high level.

In simple terms, the in  keyword specifies that you are passing a parameter by reference, but also that you will not modify the value inside a method. For example :
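Something like the following sketch :

```csharp
static int Add(in int number1, in int number2)
{
    // number1 and number2 are passed by reference,
    // but cannot be reassigned inside the method
    return number1 + number2;
}
```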

In this example we pass in two parameters essentially by reference, but we also specify that they will not be modified within the method. If we try and do something like the following :
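For example, trying to reassign one of the parameters :

```csharp
static int Add(in int number1, in int number2)
{
    number1 = 5; // Not allowed - "in" parameters are readonly inside the method
    return number1 + number2;
}
```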

We get a compile time error:

Accessing C# 7.2 Features

I think I should just quickly note how to actually access C# 7.2 features as they may not be immediately available on your machine. The first step is to ensure that Visual Studio is up to date on your machine. If you are able to compile C# 7.2 code, but intellisense is acting up and not recognizing new features, 99% of the time you just need to update Visual Studio.

Once updated, inside the csproj of your project, add in the following :
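Namely a LangVersion property inside a PropertyGroup :

```xml
<PropertyGroup>
  <LangVersion>7.2</LangVersion>
</PropertyGroup>
```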

And you are done!

Performance

When passing value types into a method, normally this would be copied to a new memory location and you would have a clone of the value passed into a method. When using the in  keyword, you will be passing the same reference into a method instead of having to create a copy. While the performance benefit may be small in simple business applications, in a tight loop this could easily add up.

But just how much performance gain are we going to see? I could take some really smart people’s word for it, or I could do a little bit of benchmarking. I’m going to use BenchmarkDotNet (Guide here) to compare performance when passing a value type into a method normally, or as an in  parameter.

The benchmarking code is :
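A simplified sketch of the benchmark (the method names here are mine; the idea is just comparing a plain value parameter against an in parameter) :

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class InKeywordBenchmarks
{
    private int number1 = 1;
    private int number2 = 2;

    [Benchmark(Baseline = true)]
    public int PassByValue() => AddByValue(number1, number2);

    [Benchmark]
    public int PassByIn() => AddByIn(in number1, in number2);

    private static int AddByValue(int a, int b) => a + b;

    private static int AddByIn(in int a, in int b) => a + b;
}

public class Program
{
    public static void Main(string[] args) => BenchmarkRunner.Run<InKeywordBenchmarks>();
}
```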

And the results :

We can definitely see the speed difference here. This makes sense because really all we are doing is passing in a variable and doing nothing else. It has to be said that I can’t see the in  keyword being used to optimize code in everyday business applications, but there is definitely something there for time critical applications such as large scale number crunchers.

Explicit In Design

While the performance benefits are OK, something else that comes to mind when you use in  is that you are being explicit in your design. By that I mean that you are laying out exactly how you intend the application to function: “If I pass a variable into this method, I don’t expect the variable to change”. I can see this being a bigger benefit to large business applications than any small performance gain.

A way to look at it is how we use things like private  and readonly . Our code will generally work if we just make everything public  and move on, but it’s not seen as “good” programming habits. We use things like readonly  to explicitly say how we expect things to run (We don’t expect this variable to be modified outside of the constructor etc). And I can definitely see in  being used in a similar sort of way.

Comparisons To “ref” (and “out”)

A comparison could be made to the ref keyword in C# (and possibly, to a lesser extent, the out keyword). The main differences are :

in  – Passes a variable in to a method by reference. Cannot be set inside the method.
ref  – Passes a variable into a method by reference. Can be set/changed inside the method.
out  – Only used for output from a method. Can (and must) be set inside the method.

So it certainly looks like the ref keyword is almost the same as in, except that it allows a variable to change its value. But to check that, let’s run our performance test from earlier but instead add in a ref scenario.

So it’s close when it comes to performance benchmarking. However again, in  is still more explicit than ref because it tells the developer that while it’s allowing a variable to be passed in by reference, it’s not going to change the value at all.

Important Performance Notes For In

While writing the performance tests for this post, I kept running into instances where using in  gave absolutely no performance benefit whatsoever compared to passing by value. I was pulling my hair out trying to understand exactly what was going on.

It wasn’t until I took a step back and thought about how in  could work under the hood, coupled with a stackoverflow question or two later that I had the nut cracked. Consider the following code :
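Something along these lines (a cut-down version of the idea) :

```csharp
using System;

public struct MyStruct
{
    public int Value;

    public void UpdateValue(int value) => Value = value;
}

public class Program
{
    public static void Main(string[] args)
    {
        var myStruct = new MyStruct();
        myStruct.UpdateValue(1);

        CheckValue(in myStruct);

        Console.WriteLine(myStruct.Value);
    }

    private static void CheckValue(in MyStruct myStruct)
    {
        // Because myStruct is an "in" parameter, calling a method on it runs
        // against a hidden defensive copy, so the caller's struct never changes
        myStruct.UpdateValue(5);
    }
}
```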

What will the output be? If you guessed 1, you would be correct! So what’s going on here? We definitely told our struct to set its value to 1, then we passed it by reference via the in keyword to another method, and that told the struct to update its value to 5. And everything compiled and ran happily so we should be good right? We should see the output as 5 surely?

The problem is, C# has no way of knowing when it calls a method (or a getter) on a struct whether that will also modify the values/state of it. What it instead does is create what’s called a “defensive copy”. When you run a method/getter on a struct, it creates a clone of the struct that was passed in and runs the method instead on the clone. This means that the original copy stays exactly the same as it was passed in, and the caller can still count on the value it passed in not being modified.

Now where this creates a bit of a jam is that if we are cloning the struct (even as a defensive copy), then the performance gain is lost. We still get the design time niceness of knowing that what we pass in won’t be modified, but at the end of the day we may as well have passed it in by value when it comes to performance. You’ll see in my tests, I used plain old variables to avoid this issue. If you are not using structs at all and instead using plain value types, you avoid this issue altogether.

To me I think this could crop up in the future as a bit of a problem. A method may inadvertently run a method that modifies the struct’s state, and now it’s running off “different” data than what the caller is expecting. I also question how this will work in multi-threaded scenarios. What if the caller goes away and modifies the struct, expecting the method to get the updated value, but it’s created a defensive clone? Plenty to ponder (and code out to test in the future).

Summary

So will there be a huge awakening of using the in keyword in C#? I’m not sure. When Expression-bodied Members (EBD) came along I didn’t think too much of them but they are used in almost every piece of C# code these days. But unlike EBD, the in  keyword doesn’t save typing or having to type cruft, so it could just be resigned to one of those “best practices” lists. Most of all I’m interested in how in  transforms over future versions of C# (Because I really think it has to).   What do you think?

This article is part of a series on setting up a private nuget server. See below for links to other articles in the series.

Part 1 : Intro & Server Setup
Part 2 : Developing Nuget Packages
Part 3 : Building & Pushing Packages To Your Nuget Feed


Packaging up your nuget package and pushing it to your feed is surprisingly simple. The actual push may vary depending on whether you are using a hosted service such as MyGet/VSTS or using your own server. If you’ve gone with a third party service then it may pay to read into documentation because there may be additional security requirements when adding to your feed. Let’s get started!

Packaging

If you have gone with building your library in .NET Standard (Which you should), then the entire packaging process comes down to a single command. Inside your project folder, run  dotnet pack and that’s it! The pack command will restore package dependencies, build your project, and package it into a .nupkg.

You should see something along the lines of the following :

While in the previous article in this series, we talked about managing the version of your nuget package through a props file, you can actually override this version number on the command line. It looks a bit like this  :

I’m not a huge fan of this as it means the version will be managed by your build system which can sometimes be a bit hard to wrangle, but it’s definitely an option if you prefer it.

An issue I ran into early was that many tutorials talk about using nuget.exe from the command line to package things up. The dotnet pack command is actually supposed to run the same nuget command under the hood, but I continually got the following exception :

After a quick google it seemed like I wasn’t the only one with this issue. Switching to the dotnet pack command seemed to resolve it.

Publishing

Publishing might depend on whether you ended up going with a third party hosted nuget service or the official nuget server. A third party service might have its own way of publishing packages to a feed (all the way to manually uploading them), but many will still allow the ability to do a “nuget push” command.

The push command looks like the following :

Again this command is supposed to map directly to the underlying nuget.exe command (And it actually worked for me), but it’s better to just use the dotnet command instead.

If you are using a build server like VSTS, Team City, or Jenkins, they likely have a step for publishing a nuget package that means you don’t have to worry about the format of the push command.

Working With VSTS

For my actual nuget server, I ended up going with Microsoft VSTS as the company I am working with already had their builds/releases and source control using this service. At first I was kinda apprehensive because other VSTS services I’ve used in the past have been pretty half assed. So color me completely surprised when their package hosting service was super easy to use and had some really nifty features.

Once a feed has been created in VSTS, publishing to it is as easy as selecting it from a drop down list in your build.

After publishing packages, you can then set them to be prerelease or general release too. In a small dev shop this may seem overkill, but in large distributed teams being able to manage this release process in a more granular way is pretty nifty.

You can even unlist packages (So that they can’t be seen in the feed, but they aren’t completely deleted) see a complete history of publishes to the feed, and how many times the package has been downloaded. Without sounding like a complete shill, Microsoft definitely got this addon right.

Summary

So that’s it. From setting up the server (Or hosted service as it turned out), developing the packages (Including wrangling version numbers), and putting everything together with our build server, it’s been surprisingly easy to get a nuget server up and running. If you’ve followed along and setup your own server, or you’ve already got one up and running (Possibly on a third party service), drop a comment and let me know how it’s working out!