I recently made an appearance on the Adventures In .NET podcast. The podcast is run by devchat.tv, which you may also know from the Javascript Jabber podcast (as well as a tonne of other really great programming podcasts!).

I was a bit of a last minute guest, so a whole range of topics is discussed, all the way from how I got into programming to how this blog started, and the things I’m most excited about in the .NET Core 3.0 release. You can check out the podcast from :

Or just use the player below!

This article is part of a series on creating Windows Services in .NET Core.

Part 1 – The “Microsoft” Way
Part 2 – The “Topshelf” Way


In our previous piece on creating Windows Services in .NET Core, we talked about how to do things the “Microsoft” way. What we found was that while it was simple to get up and running, it was much harder to debug our service. In fact, I would say borderline impossible.

And that’s where Topshelf comes in. Topshelf is a .NET Standard library that takes a tonne of the hassle out of creating Windows Services in both .NET Framework and .NET Core. But rather than recite the sales pitch to you, let’s jump right in!

Setup

Similar to our “Microsoft” method, there is no Windows Service or Topshelf “Visual Studio Template”. We instead just create a regular old .NET Core console application.

Then from our Package Manager Console, we run the following to install the Topshelf libraries.
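    Install-Package Topshelf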

The Code

We essentially want to recreate our previous code sample to work with Topshelf. Here’s how we do that :
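    using System;
    using System.IO;
    using Topshelf;

    // A minimal sketch - "LoggingService" and the log path are just example stand-ins
    // for whatever your service actually does.
    public class LoggingService : ServiceControl
    {
        private const string LogPath = @"C:\temp\servicelog.txt";

        public bool Start(HostControl hostControl)
        {
            File.AppendAllText(LogPath, $"{DateTime.Now} Service Started{Environment.NewLine}");
            return true;
        }

        public bool Stop(HostControl hostControl)
        {
            File.AppendAllText(LogPath, $"{DateTime.Now} Service Stopped{Environment.NewLine}");
            return true;
        }
    }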

All rather simple.

We implement the “ServiceControl” interface (which isn’t strictly required, but it provides a good base for us to work off). We have to implement the two methods to start and stop, and in those methods we just log as we did before.

In our Main method of program.cs, it’s actually really easy. We can just use the HostFactory.Run method to kick off our service with minimal effort :
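    using Topshelf;

    class Program
    {
        static void Main(string[] args)
        {
            HostFactory.Run(x =>
            {
                // Service<T>() works for classes implementing ServiceControl
                // that have a parameterless constructor.
                x.Service<LoggingService>();
            });
        }
    }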

Crazy simple. But that’s not the only thing HostFactory can do. For example, I may want to say that when my service crashes, restart it after 10 seconds, give the service a nicer name, and set it to start automatically.
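A sketch of that configuration might look like this (the names are just examples, and note that older Topshelf versions take the restart delay in whole minutes rather than a TimeSpan) :

    HostFactory.Run(x =>
    {
        x.Service<LoggingService>();

        // Restart the service 10 seconds after a crash.
        x.EnableServiceRecovery(r => r.RestartService(TimeSpan.FromSeconds(10)));

        // Nicer naming in the Windows services list - example names only.
        x.SetServiceName("WindowsServiceExample");
        x.SetDisplayName("Windows Service Example");
        x.SetDescription("An example Topshelf service.");

        x.StartAutomatically();
    });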

I could go on and on, but just take a look through the Topshelf documentation for the full set of configuration options. Essentially anything you would normally have to painfully do through the Windows command line, you can set in code : https://topshelf.readthedocs.io/en/latest/configuration/config_api.html

Deploying Our Service

Same as before, we need to publish our app specifically for a Windows environment. From a command prompt, in your project directory, run the following :
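    dotnet publish -c Release -r win-x64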

Now we can check out the output directory at bin\Release\netcoreappX.X\win-x64\publish and we should find that we have a nice little exe waiting to be installed.

Previously we would use typical SC Windows commands to install our service, but Topshelf utilizes its own command line parameters for installing as a service. Almost all configuration that you can do in code you can also do from the command line (like setting recovery options, service name/description etc). You can check out the full documentation here : http://docs.topshelf-project.com/en/latest/overview/commandline.html

For us, we are just going to do a bog standard simple install. So in our output directory, I’m going to run the following from the command line :
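    WindowsServiceExample.exe install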

Where WindowsServiceExample.exe is my project output. All going well, my service should be installed! I often find that even when setting the service to start automatically, it doesn’t always happen. We can actually start the service from the command line after installation by running :
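    WindowsServiceExample.exe start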

In deployment scenarios I often find I have to install the service, wait 10 seconds, then attempt to start it using the above and we are away laughing.

Debugging Our Service

So when doing things the “Microsoft” way, we ran into issues around debugging. Mostly that we had to either use command line flags, #if DEBUG directives, or config values to first work out if we were even running inside a service or not, and then find hacky ways to try and emulate the service from a console app.

Well, that’s what Topshelf is for!

If we have our code open in Visual Studio and we just start debugging (e.g. Press F5), it actually emulates starting the service in a console window. We should be met with a message saying :
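The exact wording varies a little between Topshelf versions, but it’s along the lines of :

    The WindowsServiceExample service is now running, press Control+C to exit.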

This does exactly what it says on the tin. It’s started our “service” and runs it in the background as if it was running as a Windows Service. We can set breakpoints as per normal and it will basically follow the same flow as if it was installed normally.

We can hit Ctrl+C and it will close our application, but not before running the “Stop” method of our service, allowing us to also debug our shutdown procedure if required. Compared to debug directives and config flags, this is a hell of a lot easier!

There is just one single gotcha to look out for. If you get a message similar to the following :
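    The WindowsServiceExample service is running and must be stopped before running via the console.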

This means that the service you are trying to debug in Visual Studio is actually installed and running as a Windows Service on the same PC. If you stop the Windows Service from running (You don’t have to uninstall, just stop it), then you can debug as normal.

What’s Next

A helpful reader pointed out in the previous article that .NET Core actually has a completely different way of running Windows Services. It essentially utilizes the “Hosted Services” model that has been introduced into ASP.NET Core and allows them to run as Windows Services which is pretty nifty!

While helping a friend get used to EF Core, I noticed that he wasn’t using “Projections” in his LINQ code. That is, he was essentially doing a “SELECT *” and grabbing the entire model instead of just getting the few fields he needed.

“You’ll create a lot of Key Lookups doing that though…”
“What’s a Key Lookup?”

I guess I had forgotten that people starting software development these days often think that they don’t need to learn SQL because they have a fan-dangle ORM that does all the heavy lifting for them! So this is for them. Here’s what a Key Lookup is from a developer’s perspective!

This is a pretty light explanation because it reads more like a brain dump trying to help a friend understand what I’m saying rather than a tutorial, but it should be a great kickstart.

An Index Tuning Crash Course

The easiest way to understand a key lookup is to understand how indexes get used. We’ll start that by creating a table that looks like so :
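    -- A rough sketch; the exact column types don't matter for this walkthrough.
    CREATE TABLE Person
    (
        Id INT IDENTITY(1,1) PRIMARY KEY,
        FirstName NVARCHAR(50) NOT NULL,
        LastName NVARCHAR(50) NOT NULL,
        DateOfBirth DATE NOT NULL
    )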

You don’t need to actually create this table as we’ll be just looking at the theory more than anything!

Now let’s say that we want to find all lastnames where someone has a firstname of “John”. We know that our SQL would look like :
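    SELECT LastName FROM Person WHERE FirstName = 'John'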

Now we probably already know that if we have a WHERE clause on a non-PK column, we are going to want to create an index so we can find the data easier. A beginner developer might think that the following index is suitable :
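    -- The index name is just my own convention.
    CREATE NONCLUSTERED INDEX IX_Person_FirstName ON Person (FirstName)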

After all, we’ve now created an index on the Person table with “FirstName”. But with a large amount of data in this table, if we run that query again and check the execution plan, we see the following :

So we get our seek on our index, but for some reason we then do what’s called a “Key Lookup” back on our Clustered Index. Why?

If we mouse over the Key Lookup and look at the “Output List”, it tells us what it was trying to find. In this case, it’s the “LastName”.

What has happened is that our query asks for the LastName of a person, where the Firstname is John. It can query on our index to find people named John, but once it finds them, because our index only contains Firstname (And by default the Id), it has to go back to the Clustered Index to actually find the data we want.

What we can instead do is change our index to “Include” the LastName column. The Include keyword tells SQL that we want the “data” of the column included in the index, but it doesn’t actually need to be searchable. Essentially exactly what our query is doing. It’s “searching” on FirstName, but actually needs the “data” LastName.
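Something like the following should do it (again, the index name is just my own convention) :

    CREATE NONCLUSTERED INDEX IX_Person_FirstName_Incl_LastName
    ON Person (FirstName)
    INCLUDE (LastName)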

Notice that we also renamed the index to reflect the columns we are using in the “Include”. This is something I like to do, but it can get unwieldy as you add columns, so your mileage may vary.

If we re-run this query and check the execution plan :

We are now going to the index, getting the exact data we need and returning. We don’t need to go anywhere else to fetch our data because it’s right there in the index.

Over Selecting And SELECT *

So that brings us to what I call Over Selecting and the notorious SELECT * issue. If we think about the above, what if instead of just getting the LastName, let’s say I ran the following :
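    SELECT * FROM Person WHERE FirstName = 'John'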

Well now we are back to doing Key Lookups even if we include LastName in the index because we are missing DateOfBirth in the Index. Since it will be included in the SELECT * result set even if we don’t use it in our C# code, we would need to add it in.

But then we add another column. So we have to add that to the Index. And repeat. But are we even using those extra fields? And our index is getting pretty big now, and BLAH.

This is the key reason (for me at least) why I only select the exact columns I need and never use SELECT *.

Memory Footprint And Data Transfer

While I won’t dive into it too much, SELECT * also has performance implications just because of the amount of wasted data you are both fetching and transferring. Even if it’s in the index, you are working with data you don’t actually need.

Here’s a great article on the amazingly named blog “Use The Index Luke” : https://use-the-index-luke.com/blog/2013-08/its-not-about-the-star-stupid. It should hopefully explain it 10x better than I ever could.

Projecting Entities

Here is where I would normally write about the zillion ways to “Project Entities” in EF Core so that you don’t run into Key Lookups, but while having a quick google around for what’s out there already, I found this incredible tutorial on Projection Magic For EF Core : https://benjii.me/2018/01/expression-projection-magic-entity-framework-core/

Again, written a heck of a lot better than I could and it’s a great reference any time you are struggling with complex projections.

An Actual SQL Tutorial

Where did I learn all of this? From the SQL master himself, “Pinal Dave”, on Pluralsight over here : https://www.pluralsight.com/courses/query-tuning-introduction. If the name Pinal Dave doesn’t ring a bell, his website (https://blog.sqlauthority.com/) probably does. You can’t google a SQL error message without his blog showing up on the first page. The course on Pluralsight does a great job of explaining complex SQL tuning topics in bite size chunks (often less than 5 minutes at a time), and I highly recommend it.

This article is part of a series on creating Windows Services in .NET Core.

Part 1 – The “Microsoft” Way
Part 2 – The “Topshelf” Way


Creating Windows Services to do batch jobs or general background work used to be a pretty common pattern, but you don’t often come across them anymore due to the proliferation of cloud services such as AWS Lambda, Azure WebJobs and Azure Functions taking their place. Personally, I’m a big fan of using Azure WebJobs these days, as it basically means I can write a console application without any thought about running it in the cloud, and then with a single batch file, turn it into an automated job that can run 24/7.

But maybe you’re still on bare metal. Or maybe you have a bunch of legacy apps that are running as Windows Services that you want to convert to .NET Core, but not go the whole way to converting them to “serverless”. This is the tutorial for you.

In many respects, a Windows Service in .NET Core is exactly the same as one in .NET Framework. But there are a few little tricks that you might stumble across along the way. Furthermore, there is the “Microsoft” way of building a Windows Service, which we will look at in this article, and 3rd party libraries such as Topshelf that make the process much easier (we will look at Topshelf in Part 2).

Setup

There is no Visual Studio “template” for creating a Windows Service, so instead create a regular .NET Core console application project.

Once created, we need to install a nuget package that adds a bunch of Windows specific APIs to .NET Core. These are APIs that are already available in the full framework, but many of them are very specific to Windows, e.g. Windows Services. For that reason they aren’t in the .NET Core base, but can be added via a single handy dandy package. (You can read more about the package here : https://devblogs.microsoft.com/dotnet/announcing-the-windows-compatibility-pack-for-net-core/)

Let’s go ahead and run the following from our Package Manager Console :
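    Install-Package Microsoft.Windows.Compatibility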

The Code

What we are most interested in from the above Nuget is the ServiceBase class. This is a base class for writing Windows Services, and provides “hooks” for events involved in the service e.g. Start, Stop, Pause etc.

We are going to create a class in our code that does a simple logging job to a temp file, just so we can see how it works. Our code looks like :
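    using System;
    using System.IO;
    using System.ServiceProcess;

    // A sketch - the class name and log path are just example stand-ins.
    public class LoggingService : ServiceBase
    {
        private const string LogPath = @"C:\temp\servicelog.txt";

        protected override void OnStart(string[] args)
        {
            File.AppendAllText(LogPath, $"{DateTime.Now} Service Started{Environment.NewLine}");
        }

        protected override void OnStop()
        {
            File.AppendAllText(LogPath, $"{DateTime.Now} Service Stopped{Environment.NewLine}");
        }
    }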

So you’ll notice we are inheriting from “ServiceBase”, and we are just overriding a couple of events to log. A typical pattern would involve kicking off a background thread in OnStart, and then stopping that thread in OnStop. Heavy lifting should not be happening inside the OnStart event at all!

From our Main entry method, we want to kick off this service. It’s actually pretty easy :
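    static void Main(string[] args)
    {
        ServiceBase.Run(new LoggingService());
    }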

That’s it!

Deploying Our Service

When publishing our service, we can’t just rely on a bog standard Visual Studio build to get what we want, we need to be building specifically for the Windows Runtime. To do that, we can run the following from a command prompt in the root of our project. Notice that we pass in the -r flag to tell it which platform to build for.
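    dotnet publish -c Release -r win-x64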

If we check our bin\Release\netcoreappX.X\win-x64\publish directory, we will have the output from our publish, but most importantly we will have an exe of our application in here. If we didn’t specify the runtime, we would instead get a .NET Core .dll which we are unable to use as a Windows Service.

We can move this publish directory anywhere to install, but let’s just work directly from the publish folder for now.

For this next part, you will need to open a command prompt as an administrator. Then use the following command :
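    sc create TestService binPath= "C:\full\path\to\publish\WindowsServiceExample.exe"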

The SC command is a bog standard Windows command (it has nothing to do with .NET Core) that installs a Windows Service. We say that we are creating a Windows Service, and we want to call it “TestService”. Importantly, we pass in the full path to our Windows Service exe.

We should be greeted with :
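    [SC] CreateService SUCCESS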

Then all we need to do is start our service :
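    sc start TestService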

We can now check our log file and see our service running!

To stop and delete our service, all we need to do is :
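    sc stop TestService
    sc delete TestService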

Debugging Our Service

Here is where I really think doing things the Microsoft way falls down. Debugging our service becomes a real chore.

For starters our overridden methods from the ServiceBase are set to be protected. This means that we can’t access them outside our class, which makes debugging them even harder. The best way I found was to have a public method for each event that the protected method also calls. Bit confusing, but works like this :
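    public class LoggingService : ServiceBase
    {
        private const string LogPath = @"C:\temp\servicelog.txt";

        // Public methods (example names) that we can call from anywhere,
        // including our Main method while debugging.
        public void StartService(string[] args)
        {
            File.AppendAllText(LogPath, $"{DateTime.Now} Service Started{Environment.NewLine}");
        }

        public void StopService()
        {
            File.AppendAllText(LogPath, $"{DateTime.Now} Service Stopped{Environment.NewLine}");
        }

        // The protected overrides simply delegate to the public methods.
        protected override void OnStart(string[] args) => StartService(args);
        protected override void OnStop() => StopService();
    }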

This at least allows us to do things like :
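    using System.Diagnostics;

    static void Main(string[] args)
    {
        var service = new LoggingService();

        if (Debugger.IsAttached)
        {
            // Run as a plain console app while debugging. You could equally
            // use an #if DEBUG directive or a command line flag here.
            service.StartService(args);
            Console.WriteLine("Service started. Press enter to stop.");
            Console.ReadLine();
            service.StopService();
        }
        else
        {
            ServiceBase.Run(service);
        }
    }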

This to me is rough as hell and involves lots of jiggery pokery to get things going.

Your other option is to do a release in debug mode, actually install the service, and then attach the debugger. This is actually how Microsoft recommends you debug your services, but I think it’s just a hell of a lot of messing about.

What’s Next

There are actually some other really helpful things we could do here, like creating an install.bat file that runs our SC create command for us. But in my opinion, the debugging issues we’ve seen above are enough for me to immediately tap out. Luckily there is a helpful library called Topshelf that takes a tonne of the pain away, and in the next part of this series we will be looking at how we can get going with that.

I recently came across a project that was doing something simple – reading a JSON config file. But the way in which it parsed the JSON was perplexing to me. It seemed like the most convoluted way of reading JSON data I had ever seen. In an attempt to show the other developer the light, I quickly googled for an existing tutorial on how parsing JSON data was actually really easy with JSON.NET. To my amazement, many of the top results were doing the same crazy things this developer was doing.

So this is the tutorial I wish I could have sent on. This is how you parse JSON in .NET Core.

Installing JSON.NET

If you are working on an ASP.NET Core project before .NET Core 3.0, you likely have JSON.NET installed already. But if you are working on a Console Application or a Class Library, you will need to run the following from your package manager console to install it :
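    Install-Package Newtonsoft.Json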

Our Example JSON Document

For the purposes of this tutorial, we want to have a really simple example JSON document to work with. That document will look like so :
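    {
        "Title": "My Example Document",
        "Author": {
            "Name": "Jane Doe",
            "Country": "New Zealand"
        },
        "Tags": [ "json", "dotnet", "tutorial" ]
    }

There’s nothing special about this particular document, it’s just something thrown together for this post; all that matters is that it has a plain property, a subdocument, and a list for us to play with.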

I’ve saved this to a file, and all I’m doing is loading it into a string in a console application :
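    var myJsonString = File.ReadAllText("example.json");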

Next we will take this string, and actually look at how we extract data from it.

Using JObject

This in my opinion is the very worst way to parse a large JSON document, but seems to be the immediate answer in any stackoverflow question surrounding JSON and C#. To me it relies on magic strings, is brittle as hell, and in general is overly complex.

Let’s look at how we might get a simple string property using a JObject :
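    var jObject = JObject.Parse(myJsonString);
    var title = jObject["Title"].ToString();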

Eh, it’s alright, but relies on us passing in a magic string to find the property. It’s not terrible, but it’s not great either.

What if we want to query a subdocument? Well at first thought you may think you can use the well known XPath query language that is typically used to parse XML files. You would be wrong. For reasons, there is a different query language called JSONPath that instead uses dot notation. In some respects it makes sense because JSON doesn’t have all the features XML has (attributes, namespaces etc), but it’s also kinda annoying to have to learn another set of notation.

Anyway, back to our subdocument query :
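    var authorName = jObject.SelectToken("Author.Name").ToString();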

If we want to grab our entire list object :
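    var tags = jObject["Tags"].ToObject<List<string>>();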

So far, we’ve just been grabbing individual values. What if we want the entire document to be mapped? Forget about it. Mapping each item individually to a larger class would be torture (And yet I see people do this all the time).

The only reason to use JObject is if you want an individual item/property and don’t care about the majority of the document.

Using Dynamic JToken

Another dud way of parsing JSON is using JToken to cast the entire JSON document to a dynamic object. For example :
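    dynamic dynamicJson = JToken.Parse(myJsonString);

    Console.WriteLine(dynamicJson.Title);        // My Example Document
    Console.WriteLine(dynamicJson.Author.Name);  // Jane Doe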

This works and is pretty easy to get going, but suffers from some major flaws. The biggest being that the casing of your JSON document now becomes incredibly important to how your dynamic object looks. For example :
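    // Property access has to match the document's casing exactly.
    Console.WriteLine(dynamicJson.Title);   // Works.
    Console.WriteLine(dynamicJson.title);   // No match - typically comes back null.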

Given that JSON is typically snake or camel case, and properties in C# are pascal case, someone is going to have to give in and change their casing style.

This is also ignoring the fact that you are now locked into using dynamic objects which come with their own set of headaches around dynamic typing.

In my opinion, similar to JObject, it’s not worth using this method to parse entire JSON documents. Maybe if you need access to a single property and you prefer using a C# dynamic object over using JSONPath, otherwise forget about it.

Using JsonConvert

This, in my opinion, is the only way to convert JSON to a C# Object. And it’s dead easy!

First we create a C# class that matches what we have in JSON :
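    // Class and property names here just mirror our example document.
    public class ExampleDocument
    {
        public string Title { get; set; }
        public Author Author { get; set; }
        public List<string> Tags { get; set; }
    }

    public class Author
    {
        public string Name { get; set; }
        public string Country { get; set; }
    }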

Then with a single line, we can convert a JSON string into a strongly typed object :
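    var document = JsonConvert.DeserializeObject<ExampleDocument>(myJsonString);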

No fiddling around with paths, no worrying about type conversion, it just works. When it comes to accessing things like our list, because JSON.NET deserializes it straight to a list object, there is no worrying about casting or anything like that. For example, this works straight out of the box :
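    foreach (var tag in document.Tags)
    {
        Console.WriteLine(tag);
    }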

Summary

9 times out of 10, you are going to want to use the JsonConvert option. It deserializes the entire JSON doc to a class, which for the most part is going to be what you are after. The only time you should really be using JObject is when you want a very small subset of the document (preferably just one or two items) – otherwise the leg work in mapping individual properties is just crazy.

I’ve been using Dapper as my data access ORM of choice these days just because I find it far easier than working out the right combination of Fluent Mappings that will actually get me what I want – so unbeknown to me, EF Core didn’t have Lazy Loading functionality up until version 2.1. And even now, it’s less intuitive than what you might think.

This is less of a tutorial and more of a “if you are banging your head against a wall, here’s how to fix that”. You should already be up and running with Entity Framework with all your migrations going smoothly etc. This is just a “Here’s how to do Lazy Loading in Entity Framework Core if you need it” type deal.

First up, EF Core by itself doesn’t have lazy loading functionality out of the box. You are required to install another package first. So in your project, you will need to run the following in your package manager console :
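    Install-Package Microsoft.EntityFrameworkCore.Proxies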

I would highly recommend that you install the exact same version of the Proxies package as the EntityFramework version you are using. Otherwise when you install the Proxies package, it’s going to upgrade your EF version at the same time, which in the majority of cases you want to avoid.

Next, it depends on where you are configuring your DB Context. If it’s in your startup.cs file, then you’re just going to want to make a call to UseLazyLoadingProxies() like so :
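    // "MyContext" and the connection string name are placeholders for your own setup.
    services.AddDbContext<MyContext>(options =>
        options.UseLazyLoadingProxies()
               .UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));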

If you are configuring everything in your actual context, then you can basically do the same thing with the options builder :
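    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseLazyLoadingProxies()
            .UseSqlServer("YourConnectionStringHere"); // Placeholder connection string.
    }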

Now when you run your project, 9 times out of 10 you will get :

InvalidOperationException: Navigation property ‘NavProperty’ on entity type ‘Entity’ is not virtual. UseLazyLoadingProxies requires all entity types to be public […]

This is because all navigation properties must be marked as virtual, so for example I would have to change my class to be :
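    // Using the names from the error message above as stand-ins.
    public class Entity
    {
        public int Id { get; set; }

        // Navigation properties need to be virtual for the lazy loading proxies to work.
        public virtual ICollection<NavProperty> NavProperty { get; set; }
    }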

The virtual keyword allows EF to override the property and fetch the data when called. This is slightly different from Entity Framework in full .NET. From memory, you could leave a property non-virtual there if you wanted a navigation property that only loaded when using “Include” in your LINQ. But here it’s basically enforced.

If you turn on Lazy Loading, all navigation properties (e.g. non-primitives) are required to be virtual.

For me it somewhat makes sense. And if you are worried about people using a navigation property that will load an incredible amount of data, then it basically boils down to a few things :

  • Don’t use Lazy Loading at all
  • Remove navigation properties on joins that could cause a large data load
  • Write your repository layer in a way that encourages the use of Include statements/Eager loading wherever possible, and Lazy Loading is the exception to the rule.

Microsoft also have a way to Lazy Load without the use of the proxies package. But it looks pretty over the top and you are likely better off just eager loading instead.

Let me start off by saying, I am not a VB.NET developer by any means. In fact, I would go as far as to say that I never really learned Visual Basic and I only delve into it when I really have to.

As I write this, it’s one of those times when I have to. I’m supporting a few legacy platforms written in VB.NET at a large enterprise company. And as the world moves to .NET Core, I was looking at an upgrade path where we can bring along the existing legacy apps without re-writing the lot. If you are currently writing VB.NET, you should seriously think about moving to C#. It’s not a slight on VB.NET as a language, but more reading between the lines as to how Microsoft has treated it in recent years.

Web Development

Let’s get the elephant in the room out of the way immediately. There is zero support for any web development in VB.NET on the .NET Core platform. That includes MVC, Web API, Razor Pages and WebForms (Although WebForms aren’t getting the .NET Core treatment ever – MS have instead been pushing Razor Pages as an alternative).

This essentially means if you want to use .NET Core as your web platform of choice, you will be required to learn C#.

Class Libraries

VB.NET is supported for both .NET Core and .NET Standard class libraries. This means that if your web application has different layers such as a service/data access layer, then you will only be required to re-write the web front end in C# and you can keep VB.NET as your class library language.

Desktop Development

Strangely, WPF, Winforms and Console applications are all able to be written in VB.NET with .NET Core 3+. I say strangely because going to the effort of porting desktop GUI applications to .NET Core with VB.NET support, but not doing the same for web, could be seen as an odd move. But at a guess, I suspect the number of VB.NET web applications that are not using ASP.NET WebForms is actually quite low – and since WebForms isn’t getting ported even for C#, I can see why they just didn’t bother. However, WinForms development has often been done in VB.NET since release, so it’s coming along for the ride (for now!).

Other Key Notes

  • C# and VB.NET interop is still high on Microsoft’s priority list. That means if fundamental changes to the C# language (e.g. default interface implementations) are added, these will be added to VB.NET also.
  • Microsoft have expressly said that not all .NET Core APIs will be available for VB.NET, especially those that give low level control to programmers.
  • Visual Basic 16 will ship together with C# 8.
  • Microsoft’s meeting notes on the future of the VB language are up on Github here : https://github.com/dotnet/vblang

This article is a series on using Dapper in .NET Core. Feel free to jump around, but we highly recommend you start in Part 1 and work your way through!

Part 1 – The What/Why/Who
Part 2 – Dapper Query Basics
Part 3 – Updates/Inserts/Deletes
Part 4 – Dapper Contrib


What Is Dapper Contrib?

Dapper.Contrib is a “helper” library built on top of Dapper, that adds a couple of “quality of life” enhancements to otherwise annoying Dapper hurdles. Most notably, updates and inserts become much simpler with Dapper.Contrib. So let’s jump right in!

Installing Dapper.Contrib

From your Package Manager console window you can install the Dapper.Contrib package.
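    Install-Package Dapper.Contrib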

If you haven’t already installed Dapper, this will drag Dapper in as a dependency too! Note that at times you do have to juggle versions of Dapper if you install it first and Contrib second. The easiest way, if you really want to use Contrib, is to remove the direct reference to the Dapper nuget and simply install Contrib, letting it drag in the version it needs.

Model Our C# Class For Writes In Dapper.Contrib

The important thing when using Dapper.Contrib is that you need a C# model to match your database model. Again, I can’t stress enough that this should really only be used in the case of writes. When reading data, the entire point of using Dapper is that you don’t over-select, so you should use applicable DTOs whenever possible.

From earlier, we have our C# model ready to go :
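    // A guess at the sample Event schema from earlier - adjust to match your own table.
    public class Event
    {
        public int Id { get; set; }
        public string EventName { get; set; }
        public string EventDescription { get; set; }
        public DateTime EventDate { get; set; }
    }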

This works as is, but there are a couple of things to note.

Firstly, Dapper.Contrib requires an “Id” to be able to update a record. In our case, we have a property called “Id” and Dapper.Contrib uses this convention and knows it’s our primary key on our table. But what if we use something like “EventId” as our primary key? That’s easy too.
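    using Dapper.Contrib.Extensions;

    public class Event
    {
        // "EventId" here is just the hypothetical key name from above.
        [Key]
        public int EventId { get; set; }
        public string EventName { get; set; }
        public string EventDescription { get; set; }
        public DateTime EventDate { get; set; }
    }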

We just add the “Key” attribute from our Dapper.Contrib library to tell it that’s our primary key.

Next, Dapper.Contrib also requires that the class name be the same as the SQL table name, but as a plural. So in our case, our class name is Event and it expects that our database table be called “Events”. So how can we say nope, we want to stick with the singular?
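    [Table("Event")]
    public class Event
    {
        public int Id { get; set; }
        // ... other properties as before.
    }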

Again, another attribute. There are actually a couple more attributes that do come in handy, but these are the main ones you should know right off the bat.

Inserting Records Using Dapper.Contrib

This should hopefully make things easier.
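Something like this sketch (the values are obviously just examples) :

    using (var connection = new SqlConnection(connectionString))
    {
        var newEvent = new Event
        {
            EventName = "Christmas Party",
            EventDescription = "A party to celebrate Christmas!",
            EventDate = new DateTime(2019, 12, 25)
        };

        connection.Insert(newEvent);
    }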

… Pretty awesome right! So we are back to having this nice “Insert” method that takes our object and creates an insert statement. It takes some of that perfect control away from us, but when it comes to inserts there isn’t much that we would want to be careful of when it comes to generating the insert statement so I’m happy with it.

Get A Record By Id Using Dapper.Contrib

Contrib also has this nifty feature where it can get a full record by Id. I feel like this sort of strays a little from the intention of Dapper because you should be trying to avoid doing a “SELECT *” and instead only get what you need. But we’ll use it later so I wanted to show it off.
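It’s a one liner :

    var myEvent = connection.Get<Event>(1);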

Because it knows we are requesting a type of “Event”, it knows what table that’s from and grabs us the entire object based on its primary key Id.

Updating Records Using Dapper.Contrib

Contrib can also do updates. Although they are a little less slick.

First off, we can obviously get our object by ID, update it, then write it like so :
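    var myEvent = connection.Get<Event>(1);
    myEvent.EventName = "Updated Event Name";
    connection.Update(myEvent);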

So Contrib has given us that handy “Update” method. But the problem is when we check the actual SQL that got run :
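    -- Roughly what Contrib generates; the exact formatting may differ.
    UPDATE [Event] SET [EventName] = @EventName, [EventDescription] = @EventDescription, [EventDate] = @EventDate WHERE [Id] = @Id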

So it basically updated all fields even if they weren’t touched. Because of this, we have to get the record by Id before updating, to make sure all other fields (not just the one we want to update) are set correctly.

Now Dapper.Contrib does have dirty tracking, but IMO that’s just as pointless as you still need to get the full object by ID in the first place to do entity tracking, and the entire point is that we aren’t overselecting.

In my opinion, Updates are still a good candidate for doing custom SQL instead of using Contrib, but that’s just my personal feeling!

Delete Records Using Dapper.Contrib

Deleting is actually fairly simple. As long as Dapper.Contrib knows what your primary key is, it can delete records based on that.
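    connection.Delete(new Event { Id = 1 });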

To be honest, this syntax annoys me somewhat as we are creating an object for no reason! Wouldn’t it just be easier to pass in a simple Int parameter?

What’s Next?

That’s actually it! We could get into the nitty gritty of Dapper, or even explore some of the other Dapper extension libraries such as Dapper.Rainbow or DapperExtensions, but they are largely the same as Dapper.Contrib with minor differences. They are definitely worth investigating however!

Feel free to drop a comment below on your thoughts on Dapper and what projects you’ve used it on in the past, I’d love to hear them!

This article is a series on using Dapper in .NET Core. Feel free to jump around, but we highly recommend you start in Part 1 and work your way through!

Part 1 – The What/Why/Who
Part 2 – Dapper Query Basics
Part 3 – Updates/Inserts/Deletes
Part 4 – Dapper Contrib


Updating A Record

So.. Here’s where things with Dapper get kinda rough. You see, to update a record we use the “Execute” method from Dapper like so :
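    using (var connection = new SqlConnection(connectionString))
    {
        connection.Execute("UPDATE [Event] SET EventName = 'Dapper Event' WHERE Id = 1");
    }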

And of course you can use parameters with the execute statement also. So we could change that to look like :
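    connection.Execute(
        "UPDATE [Event] SET EventName = @EventName WHERE Id = @Id",
        new { EventName = "Dapper Event", Id = 1 });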

I mean yeah, it works but… It’s kinda rough right? Imagine how unwieldy this statement might get if we wanted to update multiple fields at once. But then again, maybe that’s good. We are only updating exactly what we want to update and nothing more.

It’s sort of a “with great power comes great responsibility” type issue.

Inserting A Record

If you thought updating a record was rough, wait till you see inserting.
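    // Column names here are from our sample Event table - adjust to your own schema.
    connection.Execute(
        "INSERT INTO [Event] (EventName, EventDescription, EventDate) VALUES (@EventName, @EventDescription, @EventDate)",
        new { EventName = "New Event", EventDescription = "A brand new event", EventDate = DateTime.Now });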

Again… It’s pretty basic. In fact all Dapper is really doing is executing our insert statement, but we are kinda doing all the leg work here. Similar to updating, we can use parameters here to clean the values up a bit, but it’s still a lot of manual work. Obviously each time we add a new column, we need to come back to this statement and make sure it works as we intend it to.

Delete A Record

No big surprises here.
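    connection.Execute("DELETE FROM [Event] WHERE Id = @Id", new { Id = 1 });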

The execute method kinda becomes our catch all for “Execute this statement and don’t worry about the result”.

How About CQRS?

It’s pretty common for CQRS to be a hot topic when talking about Dapper. After all, we’ve kind of seen that while reading data in Dapper is super powerful, slick, and easy, writing feels cumbersome. For this reason I’ve found myself doing writes using a different ORM (EF or NHibernate), but sticking with Dapper for querying. It works, but it’s not required (as we will soon see!).

What’s Next?

So updating and inserting records is a bit oof. But there is an extension library for Dapper called Dapper.Contrib that actually makes this super simple. Let’s take a look at that!

This article is a series on using Dapper in .NET Core. Feel free to jump around, but we highly recommend you start in Part 1 and work your way through!

Part 1 – The What/Why/Who
Part 2 – Dapper Query Basics
Part 3 – Updates/Inserts/Deletes
Part 4 – Dapper Contrib


Installing Dapper

Dapper is a nuget package that does nothing more than “augment” the existing SqlConnection object in .NET Core… Reading that line back I’m not sure it makes sense, but hopefully it will by the end. In simple terms, you can install the Dapper nuget package by running the following from your Package Manager Console :
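    Install-Package Dapper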

In the following examples I’m just running them from a simple console application, but you can install Dapper into any type of .NET Full Framework/.NET Core application.

Follow Along Data

For much of this tutorial, I’m going to be using some data that I created for a simple “Event” database. There really isn’t much to it, but if you want to follow along at home you can grab the script to create the database/tables/data from this Github Gist here : https://gist.github.com/mindingdata/00c2d608c4a0b4ea22e76e95c1d1417e

It’s not required and for the most part everything we do here will be super simple. But if you want to write the queries in your own console application as we go, that’s cool too!

Creating A Database Connection

As you might have seen earlier, when using Dapper we have to create our own SQL Connection to the database. Dapper doesn’t do this for us. But.. It’s simple enough. For example in a console application that I’ve already installed Dapper from Nuget in, all it takes is :
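    using System.Data.SqlClient;

    // The connection string is just an example - point it at the sample database from the gist.
    var connection = new SqlConnection("Server=localhost;Database=SampleData;Trusted_Connection=True;");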

That’s it! We now have a connection to our database that is “Dapper” ready. You’ll also notice that at this point, the only using statement we actually need is using System.Data.SqlClient;. That’s because the SqlConnection object is the regular one from the .NET library, so far we haven’t even used Dapper yet!

Query A Single Row

Let’s add some magic and get the above SQL Connection actually doing something. In my example database, I have a table called “Event” that has a list of events in it. I want to know specifically what the name of the event is with the Id of 1.
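With Dapper, that looks something like this :

    var eventName = connection.QueryFirst<string>("SELECT EventName FROM [Event] WHERE Id = 1");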

… That’s it. We can of course write our initial SQL in something like SQL Management Studio, and then copy it across. We have complete control over the query and it’s going to run *exactly* what we told it to.

The actual magic part of this is the “QueryFirst” method on our connection. If we jump into the source code of Dapper, we can see this is just an extension method on the IDbConnection interface (which our SqlConnection implements).

That’s part of the beauty of Dapper. It’s using really simple concepts that we already know like SqlConnection, and extending them just a little bit more to be useful.

Query A Table Into An Object

So far we are just returning a single property, a string. That’s not that amazing really. But what if we wanted the full Event object instead? Well… Dapper handles that out of the box too.

First we create a class that maps to our SQL Table :
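    // A guess at the sample Event schema - adjust to match the table you're using.
    public class Event
    {
        public int Id { get; set; }
        public string EventName { get; set; }
        public string EventDescription { get; set; }
        public DateTime EventDate { get; set; }
    }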

Then we modify our statement a bit to be a SELECT *, and use the return type of Event rather than string. So it looks like so :
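    var myEvent = connection.QueryFirst<Event>("SELECT * FROM [Event] WHERE Id = 1");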

If you’ve ever used raw ADO statements before in .NET, you’ll know the pain of manually mapping each property from the database to our C# class. But Dapper does that all for us. It takes the results from the statement, and checks to see if there is a property with the same name as a column. If there is, great, map it! If not, don’t worry, just throw it away for now.

Query Statement Into A DTO

So the above example maps an entire database table to a C# class, but the annoying part is that maybe we don’t need everything. In fact, a very common SQL performance issue is overselecting. In our above example we actually only use the Id and EventName fields, so let’s first create a DTO with only those fields.
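    public class EventDto
    {
        public int Id { get; set; }
        public string EventName { get; set; }
    }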

Then we can modify our query a bit to only select the columns we need, and make Dapper map it to our DTO :
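    var eventDto = connection.QueryFirst<EventDto>("SELECT Id, EventName FROM [Event] WHERE Id = 1");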

Too easy!

Using Parameterized Queries

We haven’t really been taking user input at this point. But let’s pretend instead of always getting the Event with the ID of 1, we allowed the user to specify the Id. How can we pass that to our Dapper query while still following best practice (aka parameterizing the query)?

Well again, Dapper takes care of this for us :
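    var eventId = 1; // Pretend this came from user input.

    var eventDto = connection.QueryFirst<EventDto>(
        "SELECT Id, EventName FROM [Event] WHERE Id = @Id",
        new { Id = eventId });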

Pretend that eventId actually came from a GET param or any other user input. All we do is substitute an @VariableName in our SQL statement, then we add a new parameter of an anonymous object where the “Key” is the same as the @VariableName and the value is what you want it to be.

But why couldn’t I just change the line to the following?
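    var eventDto = connection.QueryFirst<EventDto>(
        $"SELECT Id, EventName FROM [Event] WHERE Id = {eventId}");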

Both look like they do substitutions, but the first actually uses proper parameters when executing the SQL statement. If you use a SQL Profiler, you will see something like the following for the first query :
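    exec sp_executesql N'SELECT Id, EventName FROM [Event] WHERE Id = @Id',N'@Id int',@Id=1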

Notice how the parameter is specified after the query, and yet for the second example where we just use string substitution :
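    SELECT Id, EventName FROM [Event] WHERE Id = 1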

Uh oh, no parameters here. This opens us up to SQL injection aplenty. But again, Dapper handles this for us with proper parameterization!

Query Multiple Rows

Up until now, we’ve been querying just a single row at a time. If we want more than one, that’s just as easy :
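    var allEvents = connection.Query<Event>("SELECT * FROM [Event]");

    foreach (var ev in allEvents)
    {
        Console.WriteLine(ev.EventName);
    }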

We just change our “QueryFirst” to a “Query” and away we go! We are instead returned a list of mapped objects which we can cycle through. Pretty easy stuff!

What’s Next

With the basics of querying using Dapper out of the way, updates, inserts and deletes are obviously up next! You can check that out here!