This post is part of a series on .NET 6 and C# 10 features. Use the following links to navigate to other articles in the series and build up your .NET 6/C# 10 knowledge! While the articles are separated into .NET 6 and C# 10 changes, these days the lines are very blurred so don’t read too much into it.

.NET 6

Minimal API Framework
DateOnly and TimeOnly Types
LINQ OrDefault Enhancements
Implicit Using Statements
IEnumerable Chunk
SOCKS Proxy Support
Priority Queue
MaxBy/MinBy

C# 10

Global Using Statements
File Scoped Namespaces


A fairly common interview question I have for any C#/.NET developer revolves around the difference between the First and FirstOrDefault LINQ statements. There’s a general flow to the questions that goes something like this:

What’s the difference between First and FirstOrDefault

When answered, I follow it up with :

So when FirstOrDefault can’t find a value, what does it return?

Commonly, people actually say things like “it always returns null”. This is incorrect; it returns the “default” value for that type, essentially the result of default(T). But if they get this part right, then I follow it up with things like:

So what is the default value then? What would be the default value for a type of integer?

The correct answer to the above is 0. I’m not sure if it’s a difficult set of questions or not (Certainly many get it wrong), but it’s definitely something you will run into a lot in your career if you develop in C# for any length of time.

One of the main reasons I ask this question is that I often see code, across all experience levels, that works something like this :

var hayStack = new List<int> { 1, 2, 2 };
var needle = 3;

var foundValue = hayStack.FirstOrDefault(x => x == needle);
if(foundValue == 0)
{
    Console.WriteLine("We couldn't find the value");
}

This works of course, but what if needle actually is the number 0? You have to do a bit of a dance to work out whether it was truly not found, or whether the value you are looking for just happens to be the default value anyway. You have a couple of options (sketched out below the list) :

  • Run a LINQ Any statement beforehand to ensure that your list does indeed contain the item
  • Cast the list to a nullable variant of the type if not already, so you can ensure you get null back if not found
  • Use First instead of FirstOrDefault and catch the exception
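As a rough sketch of the first and last options (reusing the hayStack and needle variables from the example above):

// Option 1: Check the item exists before fetching it
if (!hayStack.Any(x => x == needle))
{
    Console.WriteLine("We couldn't find the value");
}

// Option 3: Use First and let it throw when nothing matches
try
{
    var foundValue = hayStack.First(x => x == needle);
}
catch (InvalidOperationException)
{
    Console.WriteLine("We couldn't find the value");
}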

You might think this is only really an issue in edge cases where you are using primitives that typically aren’t nullable, but with the introduction of Nullable Reference types in C# 8, this is actually going to become a very common scenario.

.NET 6 introduces the concept of being able to pass in what the default value should be, should the item not be found. So for example, this code is now valid in .NET 6.

var hayStack = new List<int> { 1, 2, 2 };
var needle = 3;

var foundValue = hayStack.FirstOrDefault(x => x == needle, -1);
if(foundValue == -1)
{
    Console.WriteLine("We couldn't find the value");
}

But… Hold on. All we are doing here is saying instead of returning 0, return -1. Oof!

As it turns out, the method still must return a valid integer. And because our element type is not nullable, even with this new overload, we can’t force it to return null. The same goes for FirstOrDefault, SingleOrDefault and LastOrDefault.
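If you genuinely need a null back, one workaround (just a sketch, reusing the variables from above) is to project to a nullable type first, at which point the new overload can happily hand you a null default:

var foundValue = hayStack
    .Select(x => (int?)x)
    .FirstOrDefault(x => x == needle, null);

if (foundValue == null)
{
    Console.WriteLine("We couldn't find the value");
}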

The reason for this article, I guess, is first and foremost to introduce the new overload, but also to point out that it doesn’t quite solve the problem I thought it would at first look. You still must return a valid value of the element type. That makes it less useful than I first thought, but at the very least, it may help in some very edge scenarios, for example upserting values.

In a previous post, we talked about the coming ability to use global using statements in C# 10. The main benefit being that you were now able to avoid the clutter of declaring namespaces over and over (Things like using System etc) in every single file. I personally think it’s a great feature!

So it only makes sense that when you create a new .NET 6 project, global usings are implemented right off the bat. After all, if you create a new web project, there are many, many files auto generated as part of the template that call upon things like System or System.IO, so it makes sense to just use global usings straight away from the start, right?

Well… .NET 6 has solved the problem in a different way. With implicit using statements, your code will have almost invisible using statements declared globally! Let’s take a look at this new feature, and how it works.

Getting Setup With .NET 6 Preview

At the time of writing, .NET 6 is in preview, and is not currently available in general release. That doesn’t mean it’s hard to set up, it just means that generally you’re not going to have it already installed on your machine if you haven’t already been playing with some of the latest fandangle features.

To get set up using .NET 6, you can go and read our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

Remember, this feature is *only* available in .NET 6. Not .NET 5, not .NET Core 3.1, or any other version you can think of. 6.

Additionally, there was a change to this feature between .NET 6 Preview and .NET 6 RC. This article has been updated as of 2021-10-07 to reflect what is currently in the very latest SDK version. If you are unsure what you have, go grab the latest SDK just to be sure.

Implicit Global Usings

Implicit global usings are an opt in feature (kinda) that is new to .NET 6/C# 10. For existing projects that you are upgrading to .NET 6, you will need to add the following to your csproj file :

<ImplicitUsings>enable</ImplicitUsings>
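For context, that flag sits inside a PropertyGroup, so a minimal csproj might end up looking something like this (a sketch; your output type, target framework and other properties will vary):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
</Project>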

However if you create a new project inside Visual Studio 2022 or using the latest SDK from the command line, this flag has already been enabled for you! So again, it’s somewhat opt in, it’s just that you will be opted in by default when creating a new project.

When enabled, implicit usings are actually a hidden, auto generated file inside your obj folder that declares global using statements behind the scenes. In my case, if I go to my project folder and then to obj/Debug/net6.0, I will find a file titled “ProjectName.GlobalUsings.g.cs”.

Opening this file, I can see it contains the following :

global using global::System;
global using global::System.Collections.Generic;
global using global::System.IO;
global using global::System.Linq;
global using global::System.Net.Http;
global using global::System.Threading;
global using global::System.Threading.Tasks;

Note that this is an auto generated file, so we can’t actually edit it here. But we can see that it declares a whole heap of global using statements for us.

The project I am demoing this from is actually a console application, but each main project SDK type has its own set of global imports.

Console/Library

System
System.Collections.Generic
System.IO
System.Linq
System.Net.Http
System.Threading
System.Threading.Tasks

Web

In addition to the console/library namespaces :

System.Net.Http.Json
Microsoft.AspNetCore.Builder
Microsoft.AspNetCore.Hosting
Microsoft.AspNetCore.Http
Microsoft.AspNetCore.Routing
Microsoft.Extensions.Configuration
Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.Hosting
Microsoft.Extensions.Logging

Worker

In addition to the console/library namespaces :

Microsoft.Extensions.Configuration
Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.Hosting
Microsoft.Extensions.Logging

Of course, if you are unsure, you can always quickly create a project of a certain type and check the obj folder for what’s inside.

If we try to import a namespace that already appears in our implicit global usings, we will get the usual warning.

Opting Out

As previously mentioned, if you are creating a brand new .NET 6 / C# 10 project (which, in a year’s time, the majority will be), then this feature is turned on by default. I have my own thoughts on that, but what if you want to turn this off? This might be especially common when an automatically imported namespace has a type that conflicts with a type you yourself want to declare. Or what if you just don’t like the hidden magic, full stop?

Well, of course we can either delete the flag altogether, or set it to disabled if we want to be a bit more explicit :

<ImplicitUsings>disable</ImplicitUsings>

Note that, clearly, if we switch this flag to disabled on a live project that’s been going for some time, you’re likely to get hundreds of errors due to the project no longer importing the correct namespaces.

There is also another option to selectively remove (and add) implicit namespaces like so :

<ItemGroup>
  <Using Remove="System.Threading" />
  <Using Include="Microsoft.Extensions.Logging" />
</ItemGroup>

Here we are removing System.Threading and adding Microsoft.Extensions.Logging to the global implicit using imports.

This can also be used as an alternative to using something like a GlobalUsings.cs file in your project of course, but it is a somewhat “hidden” feature.

Is This A Good Feature? / My Thoughts

In the original version of this article, I was mostly down on this feature. And that’s saying something because I rarely comment on new features being good or bad. Mostly it’s because I presume that people much smarter than I know what they are doing, and I’ll get used to it. I’m sure when things like generics, lambdas, async/await got introduced, I would have been saying “I don’t get this”.

On the surface, I like the idea of project types having some sort of implicit namespace. Even imports that I thought were kinda dumb to be global such as “System.Threading.Tasks”, I soon realized are needed in every single file if you are using async/await since your methods must return a type of Task.

Originally, Implicit Usings were enabled by default *without* a flag in the csproj, and instead you had to add something like so to disable them :

<DisableImplicitNamespaceImports>true</DisableImplicitNamespaceImports>

The new version is better, in that we are opted in explicitly, but still the “hiddenness” of it all isn’t so great. I don’t like the idea of files being auto generated in the obj folder because for the most part, people do not go and look there. If you are doing a Ctrl + Shift + F inside your project to find a pesky namespace clash, you’re not going to find your auto generated implicit usings file.

I feel like a simpler idea would have been to edit the Visual Studio/Command Line templates that when you create a new web project, it automatically creates a GlobalUsings.cs file in the root of the project with the covered implicit namespaces already in there. To me, that would be a heck of a lot more visible, and wouldn’t lead to so much confusion over where these hidden imports were coming from.

That being said, maybe in a year’s time we’ll just get used to it and it will be “part of .NET” like so many other things. What are your thoughts?

For a long time now, I’ve been searching for a middle ground when it comes to Optical Character Recognition/OCR in .NET/C#. You might have heard of a little C++ library called “Tesseract” which many have tried to write wrappers around or interop with in their C# code. I myself have followed tutorials and guides on how to do this and it’s always ended in pain. Most notably, when you are working with C++ libraries from C#, you have to be extremely careful about how memory is allocated, otherwise there is a sure fire chance you’re going to end up with a memory leak somewhere along the way. Almost without fail when I’ve tried to use Tesseract from C#, I’ve ended up leaking memory all over the place and having my screen turn into a jigsaw requiring a PC restart.

The alternative has always been “enterprise” type OCR libraries with their hidden pricing that they then foist on you at the last minute (I really have a distaste for these sorts of tactics if I’m being honest), and even then, they usually have some sort of limited feature set, but you pay for it anyway just so you don’t end up losing sleep over memory issues.

Well, then in comes IronOCR. An OCR library that takes the headaches out of C++ interoperability, but with upfront (and very, very reasonable) pricing. I’ve been playing around with this little OCR library for some time now and I’ve got to say, the ease with which this thing gets up and running is really a dream. Let’s get started!

What Are We Looking For In An OCR Library?

Before we jump too deep into the code, let me map out my thought process of what I wanted to get out of any OCR or computer vision library.

  • I know that there are API services out there that do this (For example, in Azure there is Cognitive Services, which is essentially a computer vision API), but I wanted to make sure that I could run this without making API calls, and without having to pay more as I scale up. IronOCR is a one time fee and that’s it.
  • Support for multiple languages; many libraries I looked at supported English only.
  • Is there some level of “cleanup” smarts there? If the scanned document or image is a bit scratchy, does the library come with a way to clean things up?
  • Does this work for both printed and handwritten text? This is more a nice to have, but it’s also a huge feature to have!
  • Can I use this in the cloud (Specifically, will this work in Azure)? Usually this means no “installs” that need to be made because I may not be using a VM at all and instead be entirely serverless.

IronOCR ticks all of these boxes, but let’s take a dive into how it might look in code.

Simple OCR Example

Let’s start off with something really easy. I took this screenshot from the Google Books page on Frankenstein by Mary Shelley.

I then took my C#/.NET Console Application, and ran the following in the nuget package manager to install IronOCR

Install-Package IronOcr

And then onto the code. I literally OCR’d this image to extract text, including line breaks and everything, using 4 lines of code.

var ocr = new IronTesseract();
using (var input = new OcrInput("Frankenstein.PNG"))
{
    var result = ocr.Read(input);
    Console.WriteLine(result.Text);
}

And the output?

Frankenstein
Annotated for Scientists, Engineers, and Creators of All Kinds

By Mary Wollstonecraft Shelley - 2017

Literally perfect character recognition in just a few lines of code! When I said that using Iron was like computer vision on “easy mode”, I wasn’t lying.

Running IronOCR In An Azure Function

I know this might seem like an obvious thing to say, but there have been countless times where I’ve used libraries that require some sort of dedicated VM, either through an installation on the machine or because of licensing “per machine”. In this day and age, you should not be using any library that can’t work in a serverless environment.

Since in most of my recent projects, we are using Azure Functions in a microservices architecture, let’s create a really simple function that can take a parameter of an image, OCR it, and return the text.

using System.Net.Http;
using System.Threading.Tasks;
using IronOcr;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class OCRFunction
{
    public static HttpClient _httpClient = new HttpClient();

    [FunctionName("OCRFunction")]
    public static async Task<IActionResult> Run([HttpTrigger] HttpRequest req, ExecutionContext context)
    {
        var imageUrl = req.Query["image"];
        var imageStream = await _httpClient.GetStreamAsync(imageUrl);

        var ocr = new IronTesseract();
        using (var input = new OcrInput(imageStream))
        {
            var result = ocr.Read(input);
            return new OkObjectResult(result.Text);
        }
    }
}

Nice and simple! We take the query parameter of image, download it and OCR it immediately. The great thing about doing this directly inside an Azure Function is that immediately it can service different parts of our application in a microservice architecture without us having to copy and paste the code everywhere.

If we run the above code on our Frankenstein image above :

Super easy!

Another thing I want to point out about this approach: if you’re currently paying for a service that charges a per-OCR fee, things can appear cheap, but at scale the monthly bill can quickly spiral out of control. Compare this to a one time fee with IronOCR, and you’re getting what is essentially a callable API, all hosted in the Azure cloud, without the ongoing costs.

Non-English Support

One thing I noticed with even the “Enterprise” level OCR libraries is that they often supported English only. It would come with some caveat like “But you can train it yourself on any language you want”. But that’s not really good enough when you are paying through the nose already.

However, IronOCR supports 125 languages currently, and you can add as many or as few as you like by simply installing the applicable Nuget language pack.
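As a rough sketch of how that looks (the package name and Language property below follow IronOCR’s documented pattern, but double check the exact names against the IronOCR docs for the language you need), adding say Arabic support is just a package install and a one liner:

Install-Package IronOcr.Languages.Arabic

var ocr = new IronTesseract();
ocr.Language = OcrLanguage.Arabic; // Assumed naming based on the IronOCR language packs - verify against the docs
using (var input = new OcrInput("ArabicScan.png")) // Hypothetical file name for illustration
{
    var result = ocr.Read(input);
    Console.WriteLine(result.Text);
}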


I was going to write more on the language availability in IronOCR, but it just works, and it’s all right there in a nifty package!

Cleaning Up Skewed Scans For OCR

The thing is, most of the time when you need OCR, it’s because of scanned documents. It’s very rare that you’re going to be using OCR on some pixel perfect screenshot from a website. Some OCR libraries shy away from this and sort of “avoid” the topic. IronOCR jumps right into the deep end and gives us some out of the box options for fixing up poor scans.

Let’s use this as an example. A scanned page from the book Harry Potter.

There’s a bit of noise here, but more importantly the text is heavily skewed. Two issues that are very, very common when scanning paper documents. If I run this through the OCR with no “fixes” in play, the only things I get back are :

Chapter Eight

The Deathday Party

That’s because the page is just too skewed and noisy to make out smaller characters correctly. All we have to do is add the ability to correct the skew of the scan. We can do that with a single line of code :

var ocr = new IronTesseract();
using (var input = new OcrInput("HarryPotter.png"))
{
    input.Deskew(); //Deskew the image
    var result = ocr.Read(input);
    Console.WriteLine(result.Text);
}

And instantly, with no other changes, we actually get things working 100% :

Chapter Eight

The Deathday Party

October arrived, spreading a damp chill over the grounds and into the castie. Madam Pomfrey, the nurse,
was kept busy by a sudden spate of colds among the staff and students. Her Pepperup potion worked
instantly, though it left the drinker smoking at the ears for several hours afterward. Ginny Weasley, who had
been looking pale, was bullied into taking some by Percy. The steam pouring from under her vivid hair gave
the impression that her whole head was on fire.

There are actually a tonne of other options for cleaning up images/documents too, including (see the quick sketch after this list) :

  • DeNoise
  • Rotating images a set amount of degrees
  • Manually controlling contrast, greyscale, or simply turning the image black and white
  • Enhancing the resolution/image sharpening
  • Erode and Dilate images
  • And even more like color inversion and deep cleaning of background noise.
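As a quick sketch of how these combine (Deskew is shown earlier in this post; DeNoise is taken from the option list above, so check the IronOCR docs for the exact method names and any parameters):

var ocr = new IronTesseract();
using (var input = new OcrInput("HarryPotter.png"))
{
    input.Deskew();  // Straighten the scanned page
    input.DeNoise(); // Assumed from the options above - clean up background noise
    var result = ocr.Read(input);
    Console.WriteLine(result.Text);
}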

The thing is, if I’m being honest, I did play around with these, but I just never really needed to. De-skewing my documents was generally enough to get everything coming out literally character perfect, but it’s great that IronOCR gives you even more knobs to play with to really fine tune your OCR requirements.

Advanced Text Results

It might surprise you that other OCR libraries I tested simply output text and that was it. There was no structure to it, and you essentially had to work out how each paragraph fitted together by counting line breaks or whitespace. IronOCR, however, can not only read text from your documents, but can work out the structure too!

For example, let’s use our Harry Potter image and instead use the following code :

var ocr = new IronTesseract();
using (var input = new OcrInput("HarryPotter.png"))
{
    input.Deskew();
    var result = ocr.Read(input);
    foreach(var paragraph in result.Paragraphs)
    {
        Console.WriteLine($"Paragraph : {paragraph.ParagraphNumber}");
        Console.WriteLine(paragraph.Text);
    }
}

Notice how instead of simply spitting out the text, I want to go paragraph by paragraph, to really understand the blocks of text I’m working with. And the result?

Paragraph : 1
Chapter Eight
Paragraph : 2
The Deathday Party
Paragraph : 3
October arrived, spreading a damp chill over the grounds and into the castie. Madam Pomfrey, the nurse,
was kept busy by a sudden spate of colds among the staff and students. Her Pepperup potion worked
instantly, though it left the drinker smoking at the ears for several hours afterward. Ginny Weasley, who had
been looking pale, was bullied into taking some by Percy. The steam pouring from under her vivid hair gave
the impression that her whole head was on fire.

Again, character perfect recognition split into the correct blocks. There’s a tonne of options around this too, including reading line by line, or even reading only certain sections of the text at a time by drawing a rectangle over the document. The latter is extremely helpful when you only need to use computer vision on a particular section of the document, and don’t need to worry about the rest.

Who Is This Library For?

As always, when I look at these sorts of libraries I try and think about who it is actually aimed at. Is it a hobbyist library, or is it for enterprises only? And honestly, I struggle to place this one. Computer vision and optical character recognition are on the rise, and in the past couple of years, I’ve been asked about libraries to extract text from images more than in all previous years combined. Azure obviously has its own offering, but it’s on a per call basis and over time, that all adds up. Add to that the fact that you really don’t have control over how it’s trained, and it’s not an easy sell.

However, going with IronOCR you have all of the control, with a single one time price tag. Add to that the fact that you can download this library today and test to your heart’s content before buying, and it really makes it a no brainer if you are looking for any sort of text extraction/OCR features.


This is a sponsored post, however all opinions are mine and mine alone.

I recently came across a feature that was introduced in C# 8. It actually threw me for a loop, because I first saw it used in some code I was reviewing. In a moment of being “confidently incorrect”, I told the developer that they definitely had to change the code because clearly it wouldn’t work (if it would compile at all).

Of course the egg on my face ensued with a quick link to the documentation and a comment along the lines of “Shouldn’t you have written this on that blog?”.

Oof.

The feature I am talking about is “Using Declarations”. I think every time I’ve seen this feature mentioned, I’ve thought it referred to your standard usings at the top of your .cs file. Something like this :

using System;
using System.IO;

But in fact, it revolves around the use of using to automatically call dispose on objects. The code in question that I was reviewing looked something like this (And note, this is a really dumb example so don’t read into the code too much).

static int CountLines()
{
    using var fileStream = File.OpenRead("myfile.txt");
    using var fileReader = new StreamReader(fileStream);
    var lineCount = 0;
    var line = string.Empty;
    while((line = fileReader.ReadLine()) != null)
    {
        lineCount++;
    }
    return lineCount;
}

I mentioned to the developer… “Well, how does that using statement work?” I had just assumed it functioned much like an if statement without braces, e.g. it only affects the next line, like so :

if(condition)
    DoThis();
ButThisIsNotAffectedByCondition();

But in fact this is a new way to do using statements without braces.

Now, placing a using statement like so :

using var fileStream = File.OpenRead("myfile.txt");

Actually means that the object will be disposed when control leaves the scope. The scope could be a method, a loop, a conditional block etc. In general, when control passes the enclosing end brace }, the object will be disposed.
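A quick sketch of what that scoping means in practice (the file name is purely for illustration):

if (File.Exists("myfile.txt"))
{
    using var reader = new StreamReader("myfile.txt");
    Console.WriteLine(reader.ReadLine());
} // reader is disposed here, when control leaves the if block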

Why is this handy? Well without it, you would have to have large amounts of indenting, and generally that indentation is for the entire method body anyway. For example :

static int CountLines()
{
    using (var fileStream = File.OpenRead("myfile.txt"))
    {
        using (var fileReader = new StreamReader(fileStream))
        {
            var lineCount = 0;
            var line = string.Empty;
            while ((line = fileReader.ReadLine()) != null)
            {
                lineCount++;
            }
            return lineCount;
        }
    }
}

Much less pretty, and it doesn’t give us anything extra (in this case) over just using the using declaration.

The one thing you may still want to use braces for is if you wish to really control when something is disposed, even without code leaving the current control scope. But otherwise, Using Declarations are a very nifty addition that I wish I knew about sooner.
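If you do want that tighter control, you can still introduce an artificial scope with braces, something like this sketch:

static void ProcessFile()
{
    {
        using var fileStream = File.OpenRead("myfile.txt");
        // Work with fileStream here...
    } // fileStream is disposed here, well before the method ends

    // Plenty more work down here that no longer holds the file open
}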

I don’t often get hate comments/emails on this blog, but when I do, it’s about using statements not being included in my code snippets. For me, they just clutter everything up and rarely add much value or description to the code. What do several imports from System really add? Especially when everyone these days is using a fully featured IDE that has a one click “Add Using” option when you hover over anything.

Well, with .NET 6 / C# 10, I now have an excuse, with the introduction of global using statements. Now you can place all of your common using statements (so mostly your System imports) into a single file that will automatically be available within that entire project. It’s a simple and nifty change!

Getting Setup With .NET 6 Preview

At the time of writing, .NET 6 is in preview, and is not currently available in general release. That doesn’t mean it’s hard to set up, it just means that generally you’re not going to have it already installed on your machine if you haven’t already been playing with some of the latest fandangle features.

To get set up using .NET 6, you can go and read our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

Remember, this feature is *only* available in .NET 6. Not .NET 5, not .NET Core 3.1, or any other version you can think of. 6.

Additionally, you may need to edit your .csproj file to allow for the preview LangVersion like so :

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <LangVersion>preview</LangVersion>
  </PropertyGroup>
</Project>

With all of that done, you should be ready to go!

Adding Global Usings

The syntax for adding global usings is actually fairly straightforward: simply add the keyword global before any using statement, anywhere, to make it global.

global using System;

Interestingly, there isn’t a well defined place to put these yet. You can put them in any file within your project and they are immediately global everywhere (as opposed to, say, the standard AssemblyInfo.cs we used to have).

That being said, I’ve seen many people using a file in the root of your project called GlobalUsings.cs, with something akin to the following in there :

global using System;
global using System.Collections.Generic;
global using System.IO;
global using System.Linq;

// Maybe even a global using for a namespace from another project, for instance.
// global using MyModelsProject.Models;

Again, there is no limit to how many usings you can place in here, nor where this file lives, but it’s generally a good idea to keep this list concise and limited to what you actually need. That being said, the only real downside to overloading these global usings is that your intellisense will be astronomical in every file; whether that’s actually a bad thing, I don’t know.

And that’s it!

I’ll note that one of the reasons I love C# and .NET right now is that every single change has a pretty lively discussion on Github. Global Usings is no different and the discussion is out in the public here : https://github.com/dotnet/csharplang/issues/3428. But what’s your thoughts? Drop a comment below!

A friend recently asked if I could help build a very simple image manipulation console app.

In short, he needed to take a folder full of RAW image files and generate smaller thumbnails in various formats (PNG, JPEG etc). Seemed straightforward, and (wrongly) I just assumed that .NET could handle RAW image files natively by now. As it turns out, I was very wrong.

What Image Types Can .NET Handle Natively?

From my research, .NET out of the box (including Framework, Core and .NET 5+) can handle the following image types :

  • BMP (Bitmap)
  • EMF (Enhanced Metadata File)
  • EXIF (Exchangeable Image File)
  • GIF (Needs no explanation!)
  • ICON (Windows Icon Image Format)
  • JPEG (Again, no explanation necessary)
  • PNG
  • TIFF
  • WMF (Windows Metadata File)

Notice no “RAW” there. Raw itself actually has a few extensions including .CR2 which is Canon’s proprietary format, along with .NEF, .ORF, .ERF, .SRW and .SR2. The majority of these are just raw formats from a given camera manufacturer. The “Raw” format itself is almost just an overarching term for the images coming directly out of a camera without any processing.

Using ImageMagick

The only sensible option I found out there was to use the Magick.NET wrapper around ImageMagick. ImageMagick itself is an image processing library that is available in many languages (it’s especially common in PHP), so it’s a pretty feature rich option.

Other options for .NET mostly involved using command line applications, but triggering them from .NET code using Process.Start. I wanted to avoid this sort of malarkey (even if, at times, that’s what Magick.NET is actually doing behind the scenes).

To install Magick.NET, we first need to install the appropriate nuget package. In reality, there are packages for each CPU type (X86/X64) and different quality types (Q8, Q16, Q16HDRI). I was thinking I would try and go middle of the road and as it happens, that was also the most popular package on Nuget.

So from our Package Manager Console :

Install-Package Magick.NET-Q16-AnyCPU

From there, our code is actually extremely simple :

using ImageMagick;

using (var image = new MagickImage("RAW_CANON_EOS_1DX.CR2"))
{
    var geometry = new MagickGeometry();
    geometry.IgnoreAspectRatio = false;
    geometry.Width = 500;
    image.Resize(geometry);
    image.Write("output.jpg");
}

This generates thumbnails (500px wide) from our images while keeping the same aspect ratio. Pretty nifty!
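And since the original ask was a folder full of RAW files output to multiple formats, a rough sketch of that might look like the following (the folder path, extension filter and output formats are just placeholders):

using System.IO;
using ImageMagick;

foreach (var rawFile in Directory.EnumerateFiles(@"C:\RawImages", "*.CR2"))
{
    using (var image = new MagickImage(rawFile))
    {
        var geometry = new MagickGeometry();
        geometry.IgnoreAspectRatio = false;
        geometry.Width = 500;
        image.Resize(geometry);

        // Magick.NET infers the output format from the file extension
        image.Write(Path.ChangeExtension(rawFile, ".jpg"));
        image.Write(Path.ChangeExtension(rawFile, ".png"));
    }
}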

A quick note on testing all of this too. While building this, I didn’t have a camera myself that could generate raw images, so instead I used https://rawsamples.ch to check that various formats would work through ImageMagick (and they all seemed fine!). But if you aren’t sure which “raw” format you will be dealing with, you can of course ask for the type of camera, and match it up to a couple of samples from the site. Worked a treat for me!

While looking through the changes mentioned in .NET 6, I noticed that there was a new “Chunk” method on IEnumerable. This method is actually kind of simple, and yet powerful. Immediately I knew of a bunch of places in my code where I would love to use this, especially where I do parallel processing.

So of course, why not a quick write up to talk a little bit more about this utility!

Getting Setup With .NET 6 Preview

At the time of writing, .NET 6 is in preview, and is not currently available in general release. That doesn’t mean it’s hard to set up, it just means that generally you’re not going to have it already installed on your machine if you haven’t already been playing with some of the latest fandangle features.

To get set up using .NET 6, you can go and read our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

Remember, this feature is *only* available in .NET 6. Not .NET 5, not .NET Core 3.1, or any other version you can think of. 6.

IEnumerable.Chunk In A Nutshell

At its simplest, we can write code like so :

var list = Enumerable.Range(1, 100);
var chunkSize = 10;
foreach(var chunk in list.Chunk(chunkSize)) //Returns a chunk with the correct size. 
{
    foreach(var item in chunk)
    {
        Console.WriteLine(item);
    }
}

Allowing us to “chunk” data back. I’ve often written similar code when I want to “batch” things, especially when running things in parallel like so :

var list = Enumerable.Range(1, 100);
var chunkSize = 10;
foreach(var chunk in list.Chunk(chunkSize)) //Returns a chunk with the correct size. 
{
    Parallel.ForEach(chunk, (item) =>
    {
        //Do something Parallel here. 
        Console.WriteLine(item);
    });
}

You’re probably thinking, well, why not just use Skip and Take? Which is true, but I think Chunk is a bit more concise and makes things just that little bit more readable.
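For comparison, here’s roughly what the same batching looks like with Skip and Take (a sketch using the same list and chunkSize as above):

var list = Enumerable.Range(1, 100);
var chunkSize = 10;
for (var i = 0; i < list.Count(); i += chunkSize)
{
    foreach (var item in list.Skip(i).Take(chunkSize))
    {
        Console.WriteLine(item);
    }
}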

I was thinking that this would be a guide full of examples, and I actually had more here, but there’s nothing really to it. Chunk just does what it says on the tin, and is a welcome addition to the LINQ family.

True randomness has always been something I struggled to get my head around. Even how we might get closer to being “true random” is also slightly befuddling to me. And that’s why when I was recently asked “What’s the best way to generate a random number in .NET”, a shiver went down my spine remembering the chat boards back in the day flaming people because “The Random class in .NET is not truly random!!”.

What does it mean to be truly random? Or at least close to it? In practical terms, it means that if we generated a billion numbers in a row, there would be no predictability or uniformity to them. They wouldn’t be biased in any particular direction overall, but the next number also wouldn’t be predictable (even within a range).

So how do we achieve this in .NET?

The Random Class

For a long time now, we’ve been able to use the Random class to generate “random” numbers. For example :

var randomGenerator = new Random();
randomGenerator.Next(1, 1000000);

This generates us a random number between 1 and 1 million (note that the upper bound of Next is exclusive).

However, Random in C# uses a “seed” value, and then uses an algorithm to generate numbers from that seed. Given the same seed value, you will end up with the same numbers. For example :

var randomGenerator = new Random(123);
var random1 = randomGenerator.Next(1, 1000000);

randomGenerator = new Random(123);
var random2 = randomGenerator.Next(1, 1000000);

Console.WriteLine(random1 + " - " + random2); //Will output the same number. 

And so you probably say, well then, just don’t give it the same seed? Easy enough, right? Well, actually it always has a seed whether you like it or not. When the Random constructor is not given a seed, .NET Framework uses Environment.TickCount (a time-based value) as the seed, and .NET Core/.NET 5+ uses a pseudo random number generator.

What this means in .NET Framework according to the documentation :

On most Windows systems, Random objects created within 15 milliseconds of one another are likely to have identical seed values

But this only occurs if you construct your random generator multiple times; if you instead use the same instance of Random each time, then this type of “collision” won’t occur :

var randomGenerator = new Random();
var random1 = randomGenerator.Next(1, 1000000);
var random2 = randomGenerator.Next(1, 1000000); //Different number because we are using the same Random instance

I may have been waffling here. But the point is that the Random class in .NET is “good” but not “truly random”. If you are using the same instance of Random in your code to generate multiple random numbers, chances are you will be fine, but it’s still not ideal.

So what can we use if you want more randomness?

Using The RandomNumberGenerator Class

Instead of using the Random class, you can use the RandomNumberGenerator class (found in System.Security.Cryptography)… which admittedly is a little annoying, having two. But let’s take a look!

This class has actually been around in all versions of .NET, but in .NET Core 3+,  it got some love with additional helper methods added. So instead of working with random bytes, you can get your random numbers handed right back to you.

var random = RandomNumberGenerator.GetInt32(1, 1000000);

This random number generator is built on top of the cryptography APIs to be as truly random as possible. I could go on about this (and probably get it all wrong), but if you need something secured by randomness (for example, a password salt), then you should be using RandomNumberGenerator and not the Random class.
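As a quick sketch of the sort of scenario where this matters, generating a random salt might look like this:

using System;
using System.Security.Cryptography;

// 16 cryptographically strong random bytes, e.g. for a password salt
var salt = new byte[16];
RandomNumberGenerator.Fill(salt);
Console.WriteLine(Convert.ToBase64String(salt));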

Performance

So that probably leaves you thinking, why even bother ever using the Random class? Surely if I can get *more* randomness from the RandomNumberGenerator, I should always just use that. Well, it does come at a cost.

Using the following benchmark setup :

using System;
using System.Security.Cryptography;
using BenchmarkDotNet.Attributes;

public class RandomBenchmark
{
    private static Random _random = new Random();

    [Benchmark]
    public int RandomRecreate() => new Random().Next(1, 1000000);

    [Benchmark]
    public int RandomReuse() => _random.Next(1, 1000000);

    [Benchmark]
    public int RandomGenerator() => RandomNumberGenerator.GetInt32(1, 1000000);
}

We get the following results :

Method            Mean           Error       StdDev
RandomRecreate    1,631.14 ns    8.217 ns    7.284 ns
RandomReuse          12.02 ns    0.001 ns    0.001 ns
RandomGenerator      74.81 ns    0.473 ns    0.395 ns

I’ve included the “Recreate” benchmark here just to show how expensive it is to new up a new Random class (Even though, as I’ve already explained, you do not want to do so). But what we are really looking at is RandomReuse vs RandomGenerator. Between the two, using RandomGenerator is 6x slower than using the Random class. We are still talking about nanoseconds between the two so for most use cases, it’s not going to make a difference, but it is something to be aware of.

Sorry for the absolute word soup of a title on this post, but I wasn’t sure how else to describe this. Let me explain! In .NET Core/.NET 5+, you have this concept of “Environments”. Environments are used to describe your different environments or stages on the way to production. For example, local development, test, UAT, staging, production etc. You can have an unlimited number of environments, and there are no rules about what you call these environments (Well… Kinda as we shall soon see). Among other things, you can use environments to swap configurations, or do something only in production etc.

Recently, I was helping a developer debug an issue that he claimed was only happening in one environment (our CD environment in Azure), and could not be replicated in local development or in any environment beyond CD (e.g. Test did not have the same issue). I was a little skeptical, but checked it out.

The error itself was the famed “Cannot Consume Scoped Service From Singleton“. It was actually a relatively easy fix in the end, but we were still puzzled as to why this issue only occurred in this one environment, and not others. It bugged me for some time, until I set up the code locally to try and replicate things and realized that the naming scheme of the various environments was… off.

Before we get to the solution, let’s take a step back!

“Known” .NET Environments

So earlier, and you can quote me on this, I said :

There are no rules about what you call these environments

Let’s walk that back a little. Theoretically you can call an environment anything, but there are actually only 3 “pre-built” environments in .NET. Those are Development, Staging and Production. We know this because we can use code like so :

env.IsDevelopment();
env.IsProduction();
env.IsEnvironment("Test1");//Checks the environment is Test1

Makes sense. And if we check the source code of .NET to see what these actually do :

public static bool IsDevelopment(this IHostingEnvironment hostingEnvironment)
{
	if (hostingEnvironment == null)
	{
		throw new ArgumentNullException(nameof(hostingEnvironment));
	}

	return hostingEnvironment.IsEnvironment(EnvironmentName.Development);
}

Where EnvironmentName.Development is just a static const string of “Development”. So under the hood, if we use env.IsDevelopment(), all we are really checking is that the environment matches the string “Development”.

Again, theoretically these are just helper methods so do they actually matter? At first I thought not but let’s keep going.

I noticed that for local development, the team was using “Local” as the environment, not “Development”. When I asked why? They mentioned that they normally name their first CD environment as “Development” instead.

At first, this seems like a non issue right? I mean a developer can name their environments however they like, and when we want to check our code for which environment we are in, we just need to say :

env.IsEnvironment("Local");

I thought it really doesn’t make much difference to us, right? But that’s just us. What about the underlying .NET code written by Microsoft?

Framework IsDevelopment Checks

By this point, I was pretty sure I had things figured out, but I had to prove it. My first thought was, I know that Microsoft expects your local development to be called “Development” exactly. Not “Local”, not “Dev” not “DevMachine”, but “Development”. I know this because when we create a new ASP.NET Core web application, we get boilerplate code that looks like so :

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

So I went on the hunt through the .NET source code (Again, I cannot tell you how good it is to have .NET open sourced!), and found this :

bool isDevelopment = context.HostingEnvironment.IsDevelopment();
options.ValidateScopes = isDevelopment;
options.ValidateOnBuild = isDevelopment;

Following this trail, the setting of “ValidateScopes” to true is actually what triggers the check for our “Cannot consume scoped service from singleton” exception. And because our local development was called “Local” and our CD environment called “Development”, we only saw this trigger in CD and not on our developer machines! Bingo!

Obviously calling our CD environment “Development” was probably wrong, but I also think that there are a tonne of developers calling their local environment all sorts of things, without realizing that deep in the framework, there are indeed checks to see if the environment is explicitly called “Development”, with behaviour changing accordingly.

In short, for local development on your machine, you should always use the exact environment name of “Development”.
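Locally, that name typically comes from the ASPNETCORE_ENVIRONMENT (or DOTNET_ENVIRONMENT) variable, which the default templates set via launchSettings.json, something like this (the profile name here is just an example):

{
  "profiles": {
    "MyApp": {
      "commandName": "Project",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}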

Around 6-ish years ago, NodeJS really hit its peak in popularity. In part, it was because people had no choice but to learn Javascript due to the popularity of front end JS frameworks at the time, so why not learn a backend that uses Javascript too? But I also think it was because of the simplicity of building APIs in NodeJS.

Remember at the time, you were dealing with .NET Framework’s bloated web template that generated things like an OWIN pipeline, or a global.asax file. You had things like MVC filters, middleware, and usually we were building huge multi tier monolithic applications.

I remember my first exposure to NodeJS was when a company I worked for was trying to build a microservice that could do currency conversions. The overhead of setting up a new .NET Framework API was overwhelming compared to the simplicity of a one file NodeJS application with a single endpoint. It was really a no brainer.

If you’ve followed David Fowler on Twitter at any point in the past couple of years, you’ve probably seen him mention several times that .NET developers have a tendency to not be able to create minimal API’s at all. It always has to be a dependency injected, 3 tier, SQL Server backed monolith. And in some ways, I actually agree with him. And that’s why, in .NET 6, we are getting the “minimal API” framework to allow developers to create micro APIs without the overhead of the entire .NET ecosystem weighing you down.

Getting Setup With .NET 6 Preview

At the time of writing, .NET 6 is in preview, and is not currently available in general release. That doesn’t mean it’s hard to set up, it just means that generally you’re not going to have it already installed on your machine if you haven’t already been playing with some of the latest fandangle features.

To get set up using .NET 6, you can go and read our guide here : https://dotnetcoretutorials.com/2021/03/13/getting-setup-with-net-6-preview/

Remember, this feature is *only* available in .NET 6. Not .NET 5, not .NET Core 3.1, or any other version you can think of. 6.

Introducing The .NET 6 Minimal API Framework

In .NET 5, top level programs were introduced which essentially meant you could open a .cs file, write some code, and have it run without namespaces, classes, and all the cruft holding you back. .NET 6 minimal API’s just take that to another level.

With the .NET 6 preview SDK installed, open a command prompt in a folder and type :

dotnet new web -o MinApi

Alternatively, you can open an existing console application, delete everything in the program.cs, and edit your .csproj to look like the following :

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>
</Project>

If you used the command to create your project (And if not, just copy and paste the below), you should end up with a new minimal API that looks similar to the following :

using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/hello", () => "Hello, World!");

app.Run();

This is a fully fledged .NET API, with no DI, no configuration objects, and all in a single file. Does it mean that it has to stay that way? No! But it provides a much lighter weight starting point for any API that needs to just do one single thing.

Of course, you can add additional endpoints, add logic, return complex types (That will be converted to JSON as is standard). There really isn’t much more to say because the idea is that everything is simple and just works out of the box.
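For instance, here’s a quick sketch of an extra endpoint returning a complex type (the Person record is made up purely for illustration):

using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/hello", () => "Hello, World!");

// Complex types are serialized to JSON, e.g. {"firstName":"Jane","lastName":"Doe"}
app.MapGet("/person", () => new Person("Jane", "Doe"));

app.Run();

// In a top level program, type declarations go after the top level statements
public record Person(string FirstName, string LastName);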

Adding Dependency Injection

Let’s add a small addition to our API. Let’s say that we want to offload some logic to a service, just to keep our API’s nice and clean. Even though this is a minimal API, we can create other files if we want to right?!

Let’s create a file called HelloService.cs and add the following :

public class HelloService
{
    public string SayHello(string name)
    {
        return $"Hello {name}";
    }
}

Next, we actually want to add a nuget package so we can have the nice DI helpers (Like AddSingleton, AddTransient) that we are used to. To do so, add the following package but ensure that the prerelease box is ticked as we need the .NET 6 version of the package, not the .NET 5 version.

Next, let’s head back to our minimal API file and make some changes so it ends up looking like so :

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<HelloService>(new HelloService());

var app = builder.Build();

app.MapGet("/hello", (HttpContext context, HelloService helloService) => helloService.SayHello(context.Request.Query["name"].ToString()));

app.Run();

Here’s what we’ve done :

  • We added our HelloService as a dependency to our service collection (Much like we would with a full .NET API)
  • We modified our API endpoint to inject in our HttpContext and our HelloService
  • We used these to generate a response out, which should say “Hello {name}”. Nice!

We can obviously do similar things if we wish to load configuration. Again, you’re not limited by using the minimal API template, it’s simply just a way to give you an easier boilerplate for micro APIs that don’t come with a bunch of stuff that you don’t need.

Taking Things Further

It’s very early days yet, and the actual layout and code required to build minimal APIs in .NET 6 is still changing between preview releases. As such, be careful reading other tutorials out on the web on the subject, because they either become outdated very quickly *or*, more often than not, they guess what the end API will look like rather than what is actually in the latest release. I saw this a lot when records were introduced in C# 9, where people kinda “guessed” how records would work, and not how they actually did upon release.

So with that in mind, keep an eye on the preview release notes from Microsoft. The latest version is here : https://devblogs.microsoft.com/aspnet/asp-net-core-updates-in-net-6-preview-6/ and it includes how to add Swagger to your minimal API. Taking things further, don’t get frustrated if some blog you read shares code, and it doesn’t work, just keep an eye on the release notes and try things as they come out and are available.

Early Adopter Bonus

Depending on when you try this out, you may run into the following error :

Delegate 'RequestDelegate' does not take 0 arguments

This is because in earlier versions of the minimal framework, you had to cast your delegate to a function like so :

app.MapGet("/", (Func)(() => "Hello World!"));

In the most recent preview version, you no longer have to do this, *but* the tooling has not caught up yet. So building and running via Visual Studio isn’t quite working. To run your application and get past this error, simply use a dotnet build/dotnet run command from a terminal and you should be up and running. This is actually a pretty common scenario where Visual Studio is slightly behind an SDK version, and is just what I like to call an “early adopter bonus”. If you want to play with the latest shiny new things, sometimes there’s a couple of hurdles getting there.
