Over the past couple of weeks, I’ve been covering how to use Playwright to create E2E tests using C# .NET. Thus far, I’ve covered how to write tests in a purely C# .NET way, that is, without BDD or any particular Cucumber-like syntax. It more or less means that the person writing the tests needs to know a fair bit of C# before they can write tests without getting themselves into trouble. It works, but it’s not great.

Next in the Playwright series, I was going to cover how you can use Specflow with Playwright to make it dead simple for your QA team to write E2E tests, without any knowledge of C# at all. But halfway through writing, I realized that many of the topics covered weren’t about Playwright at all; they were more about my personal methodology on how I structure tests within Specflow to better enable non-technical people to get involved.

Now I’m just going to say off the bat that I *know* people will hate the way I structure my tests. It actually runs counter to Specflow’s own documentation on how to build out a test suite. But reality often does not match the perfect world that the docs portray. And so this writeup is purely for developers getting their manual testers into the groove of writing automated tests.

What Is Specflow?

Specflow is a testing framework that lets you build out your test suite using “Behavior Driven Development” (BDD) style language. In simple terms, it takes plain-English steps such as :

Given the search value is ABC
When the search button is clicked

And turns them into automated tests.

That’s the holistic view. The more C# .NET centric view is that it’s a Visual Studio addon that maps BDD style lines to individual methods. For example, if I have a BDD line that says this :

Given I am on URL 'https://dotnetcoretutorials.com'

It will be “mapped” to a C# method such as

[Given("I am on URL '(.*)'")]
public void GivenIAmOnUrl(string url)

The beauty of this type of development is that should another test require a similar type of navigation to a URL, you simply write the same BDD line and it will map to the same method automatically under the hood.
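To get a feel for what’s happening under the hood, here’s a rough, simplified illustration of the matching Specflow does (the real implementation is more involved): the step text is matched against each binding’s regex, and the capture groups become the method’s parameters.

```csharp
using System;
using System.Text.RegularExpressions;

class StepMatchingDemo
{
    static void Main()
    {
        // The regex from the [Given] attribute on our binding method
        var bindingPattern = new Regex("^I am on URL '(.*)'$");

        // The step text as written in the feature file (minus the Given keyword)
        var stepText = "I am on URL 'https://dotnetcoretutorials.com'";

        // Specflow matches the step against the pattern and passes the
        // capture group value as the method's string parameter
        var match = bindingPattern.Match(stepText);
        Console.WriteLine(match.Groups[1].Value);
    }
}
```

Because the match is purely on the step text, any test that writes the same style of line gets routed to the same binding for free.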

In addition, Specflow comes with other pieces of a testing framework, such as lightweight dependency injection, a test runner, an assertions library, etc.

The thing about Specflow, however, is that it in and of itself does not automate the browser at all. It passes that off to the likes of Selenium or Playwright, and you can even use your favorite test framework like NUnit, MSTest or XUnit. Specflow should be seen as a lightweight testing framework that enforces BDD style test writing, but what goes on under the hood is still largely up to you.

I don’t want to dig too deep into getting Specflow up and running because the documentation is actually fairly good and it’s more or less just installing a Visual Studio Extension. So download it, give it a crack, and then come back here when you are ready to continue.

The Page Object Model

Specflow (and more specifically Selenium) typically suggests a “Page Object Model” pattern to encapsulate all logic for a given page. This promotes reusability and means that all logic, selectors, and page behaviour is located in a single place. If across tests you are visiting the same page and clicking the same button, it does make sense to encapsulate that logic somewhere.

Let’s imagine that I’m trying to write a test that goes to Google and types in a search value, then clicks the search button. The “Page Object Model” would look something like this :

public class GooglePageObject
{
    private const string GoogleUrl = "https://google.com";

    //The Selenium web driver to automate the browser
    private readonly IWebDriver _webDriver;

    public GooglePageObject(IWebDriver webDriver)
    {
        _webDriver = webDriver;
    }

    //Finding elements by XPath
    private IWebElement SearchBoxElement => _webDriver.FindElement(By.XPath("//input[@title='Search']"));
    private IWebElement SearchButtonElement => _webDriver.FindElement(By.XPath("(//input[@value='Google Search'])[2]"));

    public void EnterSearchValue(string text)
    {
        //Clear the text box, then enter the text
        SearchBoxElement.Clear();
        SearchBoxElement.SendKeys(text);
    }

    public void PressSearchButton()
    {
        SearchButtonElement.Click();
    }
}
This encapsulates all logic for how we interact with the Google Search page in this one class. To follow on, the Specflow steps might look like so in C# code :

[Given("the search value is (.*)")]
public void GivenTheSearchValueIs(string text)
{
    //_googlePageObject is injected via Specflow's dependency injection
    _googlePageObject.EnterSearchValue(text);
}

[When("the search button is clicked")]
public void WhenTheSearchButtonIsClicked()
{
    _googlePageObject.PressSearchButton();
}

Simple enough! And our Specflow steps would obviously read :

Given the search value is ABC
When the search button is clicked

Now this obviously works, but here’s the problem I have with it.

The entire test has more or less been written by a developer (or an automated QA specialist). There is no way the encapsulation of this page could be written by someone who doesn’t already know C# fairly well. Additionally, while not pictured, there is an entire dependency injection flow to actually inject the page object model into our tests. Can you imagine explaining dependency injection to someone who has never written a line of code in their lives?

Furthermore, let’s say that on this Google search page, we wish to click the “I’m feeling lucky” button in a future test.

The addition of this button being usable in tests requires someone to write the C# code to support it. Of course, once it’s done, it’s done and can be re-used across tests, but again… I find it isn’t so much testers writing their own automated tests as much as developers doing it for them, and a QA slapping some BDD on top at the very end.

Creating Re-usable BDD Steps

And this is where we get maybe a little bit controversial. The way I format tests does away with the Page Object Model pattern; in fact, it almost does away with some parts of BDD altogether. If I were to write these tests, here’s how I would create the steps :

[Given("I type '(.*)' into a textbox with xpath '(.*)'")]
public void GivenITypeIntoATextboxWithXpath(string text, string xpath)
{
    //_webDriver is the injected Selenium driver
    _webDriver.FindElement(By.XPath(xpath)).SendKeys(text);
}

[When("I click the button with xpath '(.*)'")]
public void WhenIClickTheButtonWithXpath(string xpath)
{
    _webDriver.FindElement(By.XPath(xpath)).Click();
}

And the BDD would look like :

Given I type 'ABC' into a textbox with xpath '//input[@title="Search"]'
When I click the button with xpath '(//input[@value="Google Search"])[2]'

I can even create more simplified steps based off just element IDs:

[Given("I type '(.*)' into a textbox with Id '(.*)'")]
public void GivenITypeIntoATextboxWithId(string text, string id)

By creating steps like this, I actually have to write very minimal C# code. I’ve even created steps like “When an element ‘div’ has a property ‘class’ of value ‘myClass'”. Now instead of having to front load a tonne of C# training for my manual QA, I instead teach them about XPath. I give a nice 1 hour lesson on using Chrome Dev Tools to find elements, show them how to test whether their XPath will work correctly, and away we go.
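If you want a browser-free way to show testers how XPath behaves, .NET’s built-in XML support can evaluate the same kind of expressions. This is purely a teaching sketch (real pages are HTML and the markup here is made up), but the XPath syntax is identical:

```csharp
using System;
using System.Xml;

class XPathDemo
{
    static void Main()
    {
        // A tiny stand-in for a page's markup
        var doc = new XmlDocument();
        doc.LoadXml("<form><input title='Search'/><input value='Google Search'/></form>");

        // The same kind of expression a tester would put in a BDD step
        var node = doc.SelectSingleNode("//input[@title='Search']");
        Console.WriteLine(node.Attributes["title"].Value);
    }
}
```

In practice I have testers verify their expressions in the Chrome Dev Tools console first, but a scratch snippet like this works in a pinch.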

Typically I can spend a day creating a bunch of re-usable steps, and then testers only ever have to worry about writing BDD style Given/When/Then that uses existing selectors.

The Good, The Bad, The Ugly

When it comes to the good of this test writing strategy, it’s easy to see that enabling testers to focus on writing BDD and actually coming up with the test scenarios allows immature or non-technical QA teams to start writing tests from Day 1.

It does come at a cost, however. BDD is designed so that the same test scenarios a tester would run manually can essentially be cloned and automated. But a human would never write a test that specifically calls out XPath in the test steps, nor would they realistically be able to execute a test manually that had XPath littered throughout the test description.

For me, getting testers into the mode of writing automated tests without the overhead of learning C# far outweighs any loss of readability, and that’s why I continue to create test suites that follow this pattern.

Join over 3,000 subscribers who are receiving our weekly post digest, a roundup of this weeks blog posts.
We hate spam. Your email address will not be sold or shared with anyone else.

This is a post in a series about the automated E2E testing framework Playwright. While you can start anywhere, it’s always best to start right at the beginning!

Part 1 – Intro
Part 2 – Trace Viewer

A massive selling point of an automated test runner such as Cypress.io is that it can record videos and take screenshots of all of your test runs right out of the box. It comes with a pretty nifty viewer too that allows you to group tests by failures, and then view the actual video of the test being executed.

If we compare that to Selenium… I mean, Selenium simply does not have that feature. It’s not to say that it can’t be done, you just don’t get it out of the box. In most cases, I’ve had automation testers simply take a screenshot on test failure and that’s it. Often the final screenshot of a failed step is enough to debug what went wrong, but not always. Additionally, there is no inbuilt tool for “viewing” these screenshots, and while MS Paint is enough to open a simple image file, it can get confusing managing a bunch of screenshots in your downloads folder!

Playwright sits somewhere in the middle. While it doesn’t record actual videos, it can take screenshots of each step along the way, including before and after shots, and it provides a great viewer to pinpoint exactly what went wrong. While it works out of the box, there is a tiny bit of configuration required.

I’m going to use our example from our previous post which uses MSTest, and add to it a little. However, the steps are largely the same if you are using another testing framework or no framework at all.

The full “traceable” MSTest code looks like so :

[TestClass]
public class MyUnitTests : PageTest
{
    [TestInitialize]
    public async Task TestSetup()
    {
        await Context.Tracing.StartAsync(new TracingStartOptions
        {
            Title = TestContext.TestName, //Note this is for MSTest only. 
            Screenshots = true,
            Snapshots = true,
            Sources = true
        });
    }

    [TestCleanup]
    public async Task TestCleanup()
    {
        await Context.Tracing.StopAsync(new TracingStopOptions
        {
            Path = TestContext.TestName + ".zip"
        });
    }

    [TestMethod]
    public async Task WhenDotNetCoreTutorialsSearchedOnGoogle_FirstResultIsDomainDotNetCoreTutorialsDotCom()
    {
        //Our Test Code here. Removed for brevity. 
    }
}
Quite simply :

  • Our TestInitialize method kicks off the tracing for us. The “TestContext” object is an MSTest-specific class that can tell us which test is under execution; you can swap this out for a similar class in your test framework or just put any old string in there.
  • Our TestCleanup essentially ends the trace, storing the results in a .zip file.

And that’s it!

In our bin folder, there will now be a zip file for each of our tests. Should one fail, we can go in here and retrieve the zip. Unlike Cypress, there isn’t an all-encompassing viewer where we can group tests, their results and videos. This is because Playwright for .NET is relying a bit on both MSTest and Visual Studio to be test runners, and so there is a bit of a break in tooling when you then want to view traces, but it’s not that much legwork.

Let’s say our test broke, and we have retrieved the zip. What do we do with it? While you can download a trace viewer locally, I prefer to use Playwright’s hosted version right here https://trace.playwright.dev/

We simply drag and drop our zip file and tada!

I know that’s a lot to take in, so let’s walk through it bit by bit!

Along the top of the page, we have the timeline. This tells us over time how our test ran and the screenshots for each time period. The color coding tells us when actions changed/occurred so we can immediately jump to a certain point in our test.

To the left of the screen, we have our executed steps; we can click on these to immediately jump to that point in the timeline.

In the centre of the page we have our screenshot. But importantly we can switch tabs for a “Before” and “After” view. This is insanely handy when inputting text or filling out other page inputs. Imagine that the test is trying to fill out a very large form, and on the very first step it doesn’t fill in a textbox correctly. The test may not fail until the form is submitted and validation occurs, but in the screenshot of the failure, we may not even be able to see the textbox itself. So this gives us a step by step view of every step occurring as it happens.


To the right of the screen you’ve got a bunch of debug information including Playwright’s debug output, the console output of the browser, the network output of the browser (Similar to Chrome/Firefox dev tools), but importantly, you’ve also got a snapshot of your own code and which step is running. For instance, here I am looking at the step to fill a textbox.

This is *insanely* helpful. If a test fails, we can essentially know exactly where in our own code it was up to and what it was trying to do, without thinking “Well, it was trying to type text into a textbox, let me try and find where that happens in my code”.

And that’s the Playwright Test Trace Viewer. Is it as good as Cypress’ offering? Probably not quite yet. I would love to see some ability to capture a single zip for an entire test run, not per test case (And if I’ve missed that, please let me know!), but for debugging a single test failure, I think the trace viewer is crazy powerful and yet another reason to give Playwright a try if you’re currently dabbling with Selenium.


These days, end to end browser testing is a pretty standard practice amongst modern development teams. Whether that’s with Selenium, WebDriver.IO or Cypress, realistically as long as you are getting the tests up and running, you’re doing well.

Over the past couple of years, Cypress has become a de facto end to end testing framework. I don’t think in the last 5 years I’ve been at a company that hasn’t at least given it a try and built out some proof of concepts. And look, I like Cypress, but after some time I started getting irritated with a few caveats (many of which are listed by Cypress themselves here).

Notably :

  • The “same origin” URL limitation (Essentially you must be on the same root domain for the entire test) is infuriating when many web applications run some form of SSO/OAuth, even if using something like Auth0 or Azure AD B2C. So you’re almost dead in the water immediately.
  • Cypress does not handle multiple tabs
  • Cypress cannot run multiple browsers at the same time (So testing some form of two way communication between two browsers is impossible)
  • The “Promise” model and chaining of steps in a test seemed ludicrously unwieldy. And when trying to get more junior team members to write tests, things quickly entered into a “Pyramid of doom“.

As I’ll talk about later in another post, the biggest thing was that we wanted a simple model for users writing tests in Gherkin type BDD language. We just weren’t getting that with Cypress and while I’m sure people will tell me all the great things Cypress can do, I went out looking for an alternative.

I came across Playwright, a cross platform, cross browser automation testing tool that did exactly what it says on the tin with no extras. Given my list of issues above with Cypress, I did have to laugh that this is a very prominent quote on their homepage :

Multiple everything. Test scenarios that span multiple tabs, multiple origins and multiple users. Create scenarios with different contexts for different users and run them against your server, all in one test.

They definitely know which audience they are playing up to.

Playwright has support for tests written in NodeJS, Java, Python and of course, C# .NET. So let’s take a look at the latter and how much work it takes to get up and running.

For an example app, let’s assume that we are going to write a test that has the following test scenario :

Given I am on https://www.google.com
When I type dotnetcoretutorials.com into the search box
And I press the button with the text "Google Search"
Then the first result is domain dotnetcoretutorials.com

Obviously this is a terrible example of a test as the result might not always be the same! But I wanted to just show a little bit of a simple test to get things going.

Let’s get cracking on a C# test to execute this!

Now the thing with Playwright is, it’s actually just a C# library. There isn’t some magical tooling that you have to download or extensions to Visual Studio that you need to get everything working nicely. You can write everything as if you were writing a simple C# unit test.

For this example, let’s just create a simple MSTest project in Visual Studio. You can of course create a test project with NUnit, XUnit or any other testing framework you want and it’s all going to work much the same.

Next, let’s add the Playwright NuGet package with the following command in our Package Manager Console. Because we are using MSTest, we’ll add the MSTest-specific NuGet package as this has a few helpers that speed things up later (realistically, you don’t actually need this and can install Microsoft.Playwright if you wish) :

Install-Package Microsoft.Playwright.MSTest

Now here’s my test. I’m going to dump it all here and then walk through a little bit on how it works.

[TestClass]
public class MyUnitTests : PageTest
{
    [TestMethod]
    public async Task WhenDotNetCoreTutorialsSearchedOnGoogle_FirstResultIsDomainDotNetCoreTutorialsDotCom()
    {
        //Given I am on https://www.google.com
        await Page.GotoAsync("https://www.google.com");

        //When I type dotnetcoretutorials.com into the search box
        await Page.FillAsync("[title='Search']", "dotnetcoretutorials.com");

        //And I press the button with the text "Google Search"
        await Page.ClickAsync("[value='Google Search'] >> nth=1");

        //Then the first result is domain dotnetcoretutorials.com
        var firstResult = await Page.Locator("//cite >> nth=0").InnerTextAsync();
        Assert.AreEqual("https://dotnetcoretutorials.com", firstResult);
    }
}

Here are some things you may notice!

First, our unit test class inherits from “PageTest” like so :

public class MyUnitTests : PageTest

Why? Well, because the Playwright.MSTest package contains code to set up and tear down browser objects for us (and it also handles concurrent tests very nicely). If we didn’t use this package, either because we are using a different test framework or because we want more control, the set up code would look something like :

IPage Page;

public async Task TestInitialize()
{
    var playwright = await Playwright.CreateAsync();
    var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false
    });
    Page = await browser.NewPageAsync();
}

So it’s not the end of the world, but it’s nice that the framework can handle it for us!

Next, what you’ll notice is that there are no timeouts *and* all methods are async. By timeouts, what I mean is the bane of every Selenium developer’s existence: “waiting” for things to show up on screen, especially in javascript heavy web apps.

For example, take these two calls one after the other :

//Given I am on https://www.google.com
await Page.GotoAsync("https://www.google.com");

//When I type dotnetcoretutorials.com into the search box
await Page.FillAsync("[title='Search']", "dotnetcoretutorials.com");

In other frameworks we might have to :

  • Add some sort of arbitrary delay after the GoTo call to wait for the page to properly load
  • Write some code to check if a particular element is on screen before continuing (Like a WaitUntil type call)
  • Write some custom code for our Fill method that will poll or retry until we can find that element and type
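To illustrate the kind of boilerplate those last two bullet points imply, here’s a hand-rolled polling helper of the sort Selenium test suites tend to accumulate (a generic sketch, not taken from any particular framework):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class Wait
{
    // Polls a condition until it returns true or the timeout elapses --
    // exactly the kind of plumbing Playwright handles for you
    public static bool Until(Func<bool> condition, TimeSpan timeout)
    {
        var stopwatch = Stopwatch.StartNew();
        while (stopwatch.Elapsed < timeout)
        {
            if (condition())
                return true;
            Thread.Sleep(100);
        }
        return false;
    }
}

class Program
{
    static void Main()
    {
        var attempts = 0;

        // Simulate an element that only "appears" on the third poll
        var found = Wait.Until(() => ++attempts >= 3, TimeSpan.FromSeconds(5));
        Console.WriteLine(found);
    }
}
```

Every test suite ends up with a slightly different version of this helper, which is exactly the duplication Playwright’s auto-waiting removes.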

Instead, Playwright handles all of that under the hood for you and assumes that when you want to fill a textbox, eventually it’s going to show up, and so it waits until it does. The fact that everything is async also means it’s non-blocking, which is great if you are using Playwright locally since it’s not gonna freeze everything on your screen for seconds at a time!

The rest of the test should be pretty self explanatory. We are using some typical selectors to fill out the Google search and find the top result, and our Assert is from our own test framework. Playwright does come packaged with its own assertion framework, but you don’t have to use it if you don’t want to!

And.. That’s it!

There are some extremely nifty tools that come packaged with Playwright that I’m going to write about in the coming days, including the ability to wire up with Specflow for some BDD goodness. What I will say so far is that I like the fact that Playwright has hit the right balance between being an automation test framework *and* being able to do plain browser automation (For example to take a screenshot of a web page). Cypress clearly leans on the testing side, and Selenium I feel often doesn’t feel like a testing framework as much as it feels like a scripting framework that you can jam into your tests. So far, so good!

Next up, I wanted to take a look at the Playwright inbuilt “Trace Viewer”, check out that post here : https://dotnetcoretutorials.com/2022/05/24/using-playwright-e2e-tests-with-c-net-part-2-trace-viewer/


Visual Studio 2022 17.2 shipped the other day, and in it was a handy little feature that I can definitely see myself using a lot going forward. That is the IEnumerable Visualizer! But before I dig into what it does (And really it’s quite simple), I wanted to quickly talk about why it was so desperately needed.

Let’s imagine I have a class like so :

class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

And somewhere in my code, I have a List of people with a breakpoint on it. Essentially, I want to quickly check the contents of this list and make sure while debugging that I have the right data in the right place.

Our first port of call might be to simply use the “Hover Results View” method like so :

But… As we can see it doesn’t exactly help us to view the contents easily. We can either then go and open up each item individually, or in some cases we can override the ToString method. Neither of which may be preferable or even possible depending on our situation.

We can of course use the “Immediate Window” to run queries against our list if we know we need to find something in particular. Something like :

? people.Where(x => x.FirstName == "John")

Again, it’s very ad hoc and doesn’t give us a great view of the data itself, just whether something exists or not.
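For reference, here’s a self-contained version of the sort of code being debugged here, with the same query run in plain C# (the names and data are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

class Program
{
    static void Main()
    {
        var people = new List<Person>
        {
            new Person { Id = 1, FirstName = "John", LastName = "Smith" },
            new Person { Id = 2, FirstName = "Jane", LastName = "Doe" }
        };

        // The same filter we'd type into the Immediate Window
        var johns = people.Where(x => x.FirstName == "John").ToList();
        Console.WriteLine(johns.Count);
    }
}
```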

Next, we can use the Autos/Watch/Locals menu which does have some nice pinning features now, but again, is a tree view and so it’s hard to scroll through large pieces of data easily. Especially if we are trying to compare multiple properties at once.

But now (Again, you require Visual Studio 2022 17.2), notice how in the Autos view we have a little icon called “View” right at the top of the list there. Click that and…

This is the new IEnumerable visualizer! A nice tabular view of the data, that you can even export to excel if you really need to. While it’s a simple addition and really barebones, it’s something that will see immediate use in being able to debug your collections more accurately.


A number of times in recent years, I’ve had the chance to work at companies that completely design out entire APIs using OpenAPI before writing a single line of code. Essentially writing YAML to say which endpoints will be available, and what each API should accept and return.

There are pros and cons to doing this, of course. A big pro is that by putting in upfront time to really think about API structure, we can often uncover issues well before we get halfway through a build. But a con is that after spending a bunch of time defining things like models and endpoints in YAML, we then need to spend days doing nothing but creating C# classes as clones of their YAML counterparts, which can be tiresome and, frankly, demoralizing at times.
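For anyone who hasn’t seen this style of design-first work, a definition is just YAML along these lines (a hand-written sketch for illustration, not the actual spec from any real project):

```yaml
openapi: 3.0.0
info:
  title: Pet Store
  version: 1.0.0
paths:
  /pet/{petId}:
    get:
      operationId: getPetById
      parameters:
        - name: petId
          in: path
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: A single pet
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Pet'
components:
  schemas:
    Pet:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
```

Every endpoint, parameter and model gets spelled out like this, which is precisely why hand-cloning it all into C# afterwards feels like busywork.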

That’s when I came across Open API Generator : https://openapi-generator.tech/

It’s a tool to take your API definitions, and scaffold out APIs and Clients without you having to lift a finger. It’s surprisingly configurable, but at the same time it isn’t too opinionated and allows you to do just the basics of turning your definition into controllers and models, and nothing more.

Let’s take a look at a few examples!

Installing Open API Generator

If you read the documentation here https://github.com/OpenAPITools/openapi-generator, it would look like installing is a huge ordeal of XML files, Maven and JAR files. But for me, using NPM seemed to be simple enough. Assuming you have NPM installed already (Which you should!), then you can simply run :

npm install @openapitools/openapi-generator-cli -g

And that’s it! Now from a command line you can run things like :

openapi-generator-cli version

Scaffolding An API

For this example, I actually took the PetStore API available here : https://editor.swagger.io/

It’s just a simple YAML definition that has CRUD operations on an example API for a pet store. I took this YAML and stored it as “petstore.yaml” locally. Then I ran the following command in the same folder :

openapi-generator-cli generate -i petstore.yaml -g aspnetcore -o PetStore.Web --package-name PetStore.Web

Pretty self explanatory, but one thing I do want to point out is the -g flag. I’m passing in aspnetcore here, but in reality, Open API Generator has support to generate APIs for things like PHP, Ruby, Python etc. It’s not C# specific at all!

Our project is generated and overall, it looks just like any other API you would build in .NET

Notice that for each group of APIs in our definition, it has generated a controller, along with the models as well.

The controllers themselves are well decorated, but are otherwise empty. For example here is the AddPet method :

/// <summary>
/// Add a new pet to the store
/// </summary>
/// <param name="body">Pet object that needs to be added to the store</param>
/// <response code="405">Invalid input</response>
[Consumes("application/json", "application/xml")]
public virtual IActionResult AddPet([FromBody]Pet body)
{
    //TODO: Uncomment the next line to return response 405 or use other options such as return this.NotFound(), return this.BadRequest(..), ...
    // return StatusCode(405);

    throw new NotImplementedException();
}

I would note that this is obviously rather verbose (With the comments, Consumes attribute etc), but a lot of that is because that’s what we decorated our OpenAPI definition with, therefore it tries to generate a controller that should act and function identically.

But also notice that it hasn’t generated a service or data layer. It’s just the controller and the very basics of how data gets in and out of the API. It means you can basically scaffold things and away you go.

The models themselves also get generated, but they can be rather verbose. For example, each model gets an override of the ToString method that looks a bit like so :

/// <summary>
/// Returns the string presentation of the object
/// </summary>
/// <returns>String presentation of the object</returns>
public override string ToString()
{
    var sb = new StringBuilder();
    sb.Append("class Pet {\n");
    sb.Append("  Id: ").Append(Id).Append("\n");
    sb.Append("  Category: ").Append(Category).Append("\n");
    sb.Append("  Name: ").Append(Name).Append("\n");
    sb.Append("  PhotoUrls: ").Append(PhotoUrls).Append("\n");
    sb.Append("  Tags: ").Append(Tags).Append("\n");
    sb.Append("  Status: ").Append(Status).Append("\n");
    sb.Append("}\n");
    return sb.ToString();
}
It’s probably overkill, but you can always delete it if you don’t like it.

Obviously there isn’t much more to say about the process. One command and you’ve got yourself a great starting point for an API. I would say that you should definitely dig into the docs for the generator, as there is actually a tonne of flags to use that likely solve a lot of hangups you might have about what it generates for you. For example, there is a flag to use Newtonsoft.Json instead of System.Text.Json if that is your preference!

I do want to touch on a few pros and cons on using a generator like this though…

The first con is that updates to the original OpenAPI definition can’t really be “re-generated” into the API. There are ways to do it using the tool, but in reality, I find it unlikely that you would. So for the most part, the generation of the API is going to be a one-time thing.

Another con, as I’ve already pointed out, is that the generator has its own style, which may or may not suit the way you like to develop software. On larger APIs, fixing some of these quirks of the generator can be annoying. But I would say that for the most part, fixing any small style issues is still likely to take less time than writing the entire API from scratch by hand.

Overall however, the pro of this is that you have a very consistent style. For example, I was helping out a professional services company with some of their code practices recently. What I noticed is that they spun up new API’s every month for different customers. Each API was somewhat beholden to the tech leads style and preferences. By using an API generator as a starting point, it meant that everyone had a somewhat similar starting point for the style we wanted to go for, and the style that we should use going forward.

Generating API Clients

I want to quickly touch on another functionality of the Open API Generator, and that is generating clients for an API. For example, if you have a C# service that needs to call out to a web service, how can you quickly whip up a client to interact with that API?

We can use the following command to generate a Client library project :

openapi-generator-cli generate -i petstore.yaml -g csharp -o PetStore.Client --package-name PetStore.Client

This generates a very simple PetApi interface/class that has all of our methods to call the API.

For example, take a look at this simple code :

var petApi  = new PetApi("https://myapi.com");
var myPet = petApi.GetPetById(123);
myPet.Name = "John Smith";

Unlike the server code we generated, I find that the client itself is often able to be regenerated as many times as needed, and over long periods of time too.

As I mentioned, the client code is very handy when two services need to talk to each other, but I’ve also found it useful for writing large scale integration tests without having to copy and paste large models between projects or be mindful about what has changed in an API, and copy those changes over to my test project.


Now that the flames have simmered down on the Hot Reload Debacle, maybe it’s time again to revisit this feature!

I legitimately feel this is one of the best things to be released with .NET in a while. The number of frustrating times I’ve had to restart my entire application because of one small typo… whereas now it’s Hot Reload to the rescue!

It’s actually a really simple feature so this isn’t going to be too long. You’ll just have to give it a crack and try it out yourself. In short, it looks like this when used :

In case it’s too small, you can click to make it bigger. But in short, I have a console application that is inside a never ending loop. I can change the Console.WriteLine text, and immediately see the results of my change *without* restarting my application. That’s the power of Hot Reload!
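If you want to follow along, a minimal console app like the one in the clip could look like this (the message text and delay are just examples) :

```csharp
using System;
using System.Threading;

// Never ending loop - change the WriteLine text while the app is running,
// apply Hot Reload, and the new text shows up without a restart.
while (true)
{
    Console.WriteLine("Hello, Hot Reload!");
    Thread.Sleep(1000);
}
```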

And it isn’t just limited to Console Applications. It (should) work with Web Apps, Blazor, WPF applications, really anything you can think of. Obviously there are some limitations. Notably, if you edit your application startup (or other run-once type code), your application will hot reload, but it won’t re-run those code blocks, meaning you’ll need to restart your application to get that startup code run again. I’ve also at times had Hot Reload fail with various errors, usually meaning I just restart and we are away again.

Honestly, one of the biggest things to get used to is the mentality of Hot Reload actually doing something. It’s very hard to “feel” like your changes have been applied. If I’m fixing a bug, and I do a reload and the bug still exists… it’s hard for me not to stop the application completely and restart, just to be sure!

Hot Reload In Visual Studio 2022

Visual Studio 2019 *does* have hot reload functionality, but it’s less fully featured (at least for me). Thus I’m going to show off Visual Studio 2022 instead!

All we need to do is edit our application while it’s running, then look to our nice little task bar in Visual Studio for the following icon :

That little icon with two fishes swimming after each other (or at least that’s what it looks like to me) is Hot Reload. Press it, and you are done!

If that’s a little too labour intensive for you, there is even an option to Hot Reload on file save.

If you’re coming from a front end development background, you’ll be used to file watchers recompiling your application on save. On larger projects I’ve found this can be a little pesky (if Hot Reload is having issues, having popups firing off on every save is a bit annoying), but on smaller projects I’ve basically run this without a hitch every time.

Hot Reload From Terminal

Hot Reload from a terminal or command line is just as easy. Simply run the following from your project directory :

dotnet watch

Note that this is *without* typing run afterwards (just in case you’re used to using “dotnet watch run”). And that’s it!

Your application will now run with Hot Reload on file save switched on! Doing this, you’ll see output looking something like :

watch : Files changed: F:\Projects\Core Examples\HotReload\Program.cs~RF1f7ccc54.TMP, F:\Projects\Core Examples\HotReload\Program.cs, F:\Projects\Core Examples\HotReload\qiprgg31.zfd~
watch : Hot reload of changes succeeded.

And then you’re away laughing again!



A big reason people who develop in .NET languages rave about Visual Studio being the number one IDE is its auto complete and intellisense features. Being able to see what methods/properties are available on a class, or scrolling through overloads of a particular method, is invaluable. While more lightweight IDEs like VS Code are blazingly fast, I usually end up spending a couple of hours setting up extensions to have them function more like Visual Studio anyway!

That being said, when I made the switch to Visual Studio 2022, something was off, but I couldn’t quite put my finger on it. I actually switched back to Visual Studio 2019 a couple of times because I felt more “productive”. I couldn’t quite place it until today.

What I saw was this :

Notice that intellisense has been extended to also predict entire lines, not just the completion of the method/type/class I am currently typing. At first this felt amazing, but then I started realizing why it was frustrating to use.

  1. The constant flashing of the entire line subconsciously makes me stop and read what it’s suggesting to see if I’ll use it. Maybe this is something I would get used to, but I noticed myself repeatedly losing my flow or train of thought to read the suggestions. Now that may not be so bad, until you realize…
  2. The suggestions are often completely nonsensical when working on business software. Take the above suggestion: there is no type called “Category”, so it’s suggesting something that, should I accept it, will break my code anyway.
  3. Even if I don’t accept the suggestions, my brain subconsciously starts typing what they suggest, so I end up with broken code regardless.
  4. And all of the above is made even worse because the suggestions flip non-stop. In a single line, even at times following its lead, I get suggested no fewer than four different types.

Here’s a gif of what I’m talking about with all 4 of the issues present.


Now maybe I’ll get used to the feature, but until then I’m going to turn it all off. So if you are like me and want the same level of intellisense that Visual Studio 2019 had, you need to go :

Tools -> Options -> IntelliCode (not IntelliSense!)

Then disable the following :

  • Show completions for whole lines of code
  • Show completions on new lines

After disabling these, restart Visual Studio and you should be good to go!

Again, this only affects the auto-complete suggestions for complete lines. It doesn’t affect completing the type/method, or showing you a method summary etc.


The past few days I’ve been setting up a SonarQube server to do some static analysis of code. For the most part, I was looking for SonarQube to tell us if we had some serious vulnerabilities lurking anywhere deep in our codebases, especially some of the legacy code that was written 10+ years ago. While this sort of static analysis is pretty limited, it can pick up things like SQL Injection, XSS, and various poor security configuration choices that developers sometimes make in the heat of the moment.

And how did we do? Well… actually pretty good! We don’t write raw SQL queries, instead preferring to use Entity Framework with LINQ, which for the most part protects us from SQL injection. And most of our authentication/authorization mechanisms are out of the box .NET/.NET Core components, so if we have issues there… then the entire .NET ecosystem has bigger problems.

What we did find though was millions of non-critical warnings such as this :

Unused "using" should be removed

I’ll be the first to admit, I’m probably more lenient than most when it comes to warnings such as this. It doesn’t make any difference to the code, and you rarely notice it anyway. Although I have worked with other developers who *always* pull things like this up in code reviews, so each to their own!
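For context, the rule fires on something as trivial as this (a made-up snippet, not from our actual codebase) :

```csharp
using System;
using System.Text; // Unused - nothing below ever touches System.Text

class Program
{
    static void Main() => Console.WriteLine("Hello");
}
```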

My problem was, SonarQube is right, I probably should remove these. But I really didn’t want to manually open each file and remove the unused using statements. I started searching around and, lo and behold, Visual Studio has a feature built in to do exactly this!

If you right click a solution or project in Visual Studio, you should see an option to “Analyze and Code Cleanup” like so :

I recommend first selecting “Configure Code Cleanup” from this sub menu so that we can configure exactly what we want to clean up :

As you can see from the above screenshot, for me I turned off everything except removing unnecessary usings and sorting them. You can of course go through the other available fixers and add them to your clean up Profile before hitting OK.

Right clicking your Solution/Project, selecting “Analyze and Code Cleanup” and then “Run Code Cleanup” will run these fixers over your entire project or solution, instantly letting you pass this pesky rule and cleaning up your code at the same time!

Now I know, removing unused usings isn’t exactly a big deal. I think for me, it was more the fact that I didn’t even know this tool existed right there in vanilla Visual Studio.


Ever since I started using constructor dependency injection in my .NET/.NET Core projects, there has been essentially a three step process to adding a new dependency into a class.

  1. Create a private readonly field in my class, with an underscore prefix on the variable name
  2. Edit the constructor of my class to accept the same type, but name the parameter without a prefix
  3. Set the private readonly field to be the passed in parameter in the constructor

In the end, we want something that looks like this :

public class UserService
{
    private readonly IUserRepository _userRepository;

    public UserService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }
}

Well at least, this used to be the process. For a few years now, I’ve been using a nice little “power user” trick in Visual Studio to do the majority of this work for me. It looks a bit like this :

The feature of auto creating variables passed into a constructor is actually turned on by default in Visual Studio; however, the private readonly field with an underscore naming convention is not (which is slightly annoying, because that convention is now in Microsoft’s own coding conventions for C#!).

To add this, we need do the following in Visual Studio. The exact path to the setting is :

Tools => Options => Text Editor => C# => Code Style => Naming

That should land you on this screen :

The first thing we need to do is click the “Manage naming styles” button, then click the little cross to add. We should fill it out like so :

I would add that in our example we are creating a camelCase field with an underscore prefix, but if you have your own naming conventions, you can set those up here too. So if you don’t use the underscore prefix, or you use kebab casing (ew!) or snake casing (double ew!), that’s all configurable as well!

Then on the naming screen, add a specification for Private or Internal, using your _fieldName style. Move this all the way to the top :

And we are done!

Now, simply add parameters to the constructor, move your mouse to the left of the code window to pop up Quick Actions, and use the “Create and Assign Field” option.

Again, you can actually do this for lots of other types of fields, properties, events etc. And you can customize all of the naming conventions to work how you like.

I can’t tell you how many times I’ve been sharing my screen while writing code, and people have gone “What was that?! How did you do that?!”, and by the same token, how many times I’ve been watching someone else code and felt how tedious it is to slowly add variables one by one and wire them up!


In the coming months I’ll be covering the new features of .NET 6, including things like MAUI (Cross platform GUI in .NET), PriorityQueue, and so much more. So I thought it would be worth talking a little bit on how to actually get set up to use .NET 6 if you are looking to touch this new tech early on.

In previous versions of .NET/.NET Core, the process to accessing preview versions was somewhat convoluted and involved installing a second “preview” version of Visual Studio to get up and running, but all that’s out the window now and things are easier than ever!

Setting Up .NET 6 SDK

To start, head over to the .NET 6 download page and download the latest SDK (not runtime!) for your operating system : https://dotnet.microsoft.com/download/dotnet/6.0

After installing, from a command prompt, you should be able to run "dotnet --info" and see something similar to the below :

.NET SDK (reflecting any global.json):
 Version:   6.0.100-preview.2.21155.3
 Commit:    1a9103db2d

It is important to note that this is essentially telling you that the default version of the .NET SDK is .NET 6 (although sometimes Visual Studio will ignore preview versions, as we will shortly see). This matters because any projects that don’t utilize a global.json will now be using .NET 6 (and a preview version at that). We have a short guide on how global.json can determine which SDK is used right here.
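For reference, a global.json that pins a folder to an older SDK looks something like this (the version number here is purely illustrative) :

```json
{
  "sdk": {
    "version": "5.0.100"
  }
}
```

Place it at the root of your repository, and any dotnet command run below that folder will use the pinned SDK (assuming it’s installed).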

Setting Up Visual Studio For .NET 6

Now for the most part, developers of .NET will use Visual Studio. And the number one issue I find when people say “the latest version of .NET does not work in Visual Studio” is that they haven’t downloaded the latest updates. I don’t care if a guide says “you only need version X.Y.Z” and you already have that. Obviously you need Visual Studio 2019, but no matter what anyone tells you about required versions, just be on the latest version.

You can check this by going to Help, then Check For Updates inside Visual Studio. The very latest version at the time of writing is 16.9.1, but again, download whatever version is available to you until it tells you you are up to date.

After installing the latest update, there is still one more feature flag to check. Go Tools -> Options, then select Preview Features as per the screenshot below. Make sure to tick the “Use previews of the .NET Core SDK”. Without this, Visual Studio will use the latest version of the .NET SDK installed, that is *not* a preview version. Obviously once .NET 6 is out of preview, you won’t need to do this, but if you are trying to play with the latest and greatest you will need this feature ticked.

After setting this, make sure to restart Visual Studio manually, as it does not automatically do it for you. And you don’t want to be on the receiving end of your tech lead asking if you have “turned it off and on again”, do you!

Migrating A Project To .NET 6

For the most part, Visual Studio will likely not create projects targeting .NET 6 by default when creating simple applications like console applications. This seems to vary, but it certainly doesn’t for me. That’s fine though, all we have to do is edit our csproj file and change our target framework to net6.0 like so :
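The change itself is just the TargetFramework element. A minimal sketch of a console app csproj (your file may contain other properties too) :

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- Change this from net5.0 (or similar) to net6.0 -->
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>

</Project>
```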


If you build and you get the following :

The current .NET SDK does not support targeting .NET 6.0.  Either target .NET 5.0 or lower, or use a version of the .NET SDK that supports .NET 6.0

The first things to check are :

  1. Do you have the correct .NET 6 SDK installed? If not, install it.
  2. Is .NET 6 still in preview? Then make sure you have the latest version of Visual Studio *and* have the “Use previews of the .NET Core SDK” option ticked as per above.

And that’s it! You’re now all set up to use the juicy goodness of .NET 6, and all those early access features in preview versions you’ve been waiting for!
