Over the past couple of weeks, I’ve been covering how to use Playwright to create E2E tests using C# .NET. Thus far, I’ve covered how to write tests in a purely C# .NET testing way, that is, without BDD or any particular Cucumber-like syntax. It more or less means that the person writing the tests needs to know a fair bit of C# before they can write tests without getting themselves into trouble. It works, but it’s not great.

Next in the Playwright series, I was going to cover how you can use Specflow with Playwright to make it dead simple for your QA team to write E2E tests, without any knowledge of C# at all. But halfway through writing, I realized that many of the topics covered weren’t about Playwright at all; they were more about my personal methodology for structuring tests within Specflow to better enable non-technical people to get involved.

Now I’m just going to say off the bat that I *know* people will hate the way I structure my tests. It actually runs counter to Specflow’s own documentation on how to build out a test suite. But reality often does not match the perfect world that the docs portray. So this write-up is purely for developers who are getting their manual testers into the groove of writing automated tests.

What Is Specflow?

Specflow is a testing framework that lets you build out your test suite using “Behavior Driven Development” (BDD) style language. In simple terms, it takes the language of

Given
When
Then

And turns them into automated tests.

That’s the holistic view. The more C# .NET centric view is that it’s a Visual Studio add-on that maps BDD style language to individual methods. For example, if I have a BDD line that says this :

Given I am on URL 'https://dotnetcoretutorials.com'

It will be “mapped” to a C# method such as

[Given("Given I am on '(.*)'")]
public void GivenIAmOnUrl(string url)
{
    _driver.NavigateToUrl(url);
}

The beauty of this type of development is that should another test require a similar type of navigation to a URL, you simply write the same BDD line and it will map to the same method automatically under the hood.
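For example (the scenario wording and URLs here are purely illustrative), two completely different scenarios can both lean on that one step without any extra C# being written :

Scenario: The home page loads
    Given I am on URL 'https://dotnetcoretutorials.com'

Scenario: The blog archive loads
    Given I am on URL 'https://dotnetcoretutorials.com/archives'

Both Given lines match the same regular expression, so both scenarios end up calling the same GivenIAmOnUrl method under the hood.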

In addition, Specflow comes with other pieces of the testing framework such as a lightweight DI, a test runner, assertions library etc.

The thing about Specflow however, is that it in and of itself does not automate the browser at all. It passes that off to things like Selenium or Playwright, and you can pair it with your favorite test framework like NUnit, MSTest or XUnit. Specflow should be seen as a lightweight testing framework that enforces BDD-like test writing, but what goes on under the hood is still largely up to you.

I don’t want to dig too deep into getting Specflow up and running because the documentation is actually fairly good and it’s more or less just installing a Visual Studio Extension. So download it, give it a crack, and then come back here when you are ready to continue.

The Page Object Model

Specflow (and more specifically Selenium) typically suggests a “Page Object Model” pattern to encapsulate all logic for a given page. This promotes reusability and means that all logic, selectors, and page behaviour live in a single place. If across tests you are visiting the same page and clicking the same button, it does make sense to encapsulate that logic somewhere.

Let’s imagine that I’m trying to write a test that goes to Google and types in a search value, then clicks the search button. The “Page Object Model” would look something like this :

public class GooglePageObject
{
    private const string GoogleUrl = "https://google.com";
    //The Selenium web driver to automate the browser
    private readonly IWebDriver _webDriver;
    public GooglePageObject(IWebDriver webDriver)
    {
        _webDriver = webDriver;
    }
    //Finding elements by XPath
    private IWebElement SearchBoxElement => _webDriver.FindElement(By.XPath("//input[@title='Search']"));
    private IWebElement SearchButtonElement => _webDriver.FindElement(By.XPath("(//input[@value='Google Search'])[2]"));
    public void EnterSearchValue(string text)
    {
        //Clear text box
        SearchBoxElement.Clear();
        //Enter text
        SearchBoxElement.SendKeys(text);
    }
    public void PressSearchButton()
    {
        SearchButtonElement.Click();
    }
}

This encapsulates all logic for how we interact with the Google Search page in this one class. To follow on, the Specflow steps might look like so in C# code :

[Given("the search value is (.*)")]
public void GivenTheSearchValueIs(string text)
{
    _googlePageObject.EnterSearchValue(text);
}
[When("the search button is clicked")]
public void WhenTheSearchButtonIsClicked()
{
    _googlePageObject.PressSearchButton();
}

Simple enough! And our Specflow steps would obviously read :

Given the search value is ABC
When the search button is clicked
[...]

Now this obviously works, but here’s the problem I have with it.

The entire test has more or less been written by a developer (Or an automated QA specialist). There is no way that the encapsulation of this page could be written in C# by someone who doesn’t themselves know C# a decent amount. Additionally, while not pictured, there is an entire dependency injection flow to actually inject the page object model into our tests. Can you imagine explaining dependency injection to someone who has never written a line of code in their lives?
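Just to give a sense of that plumbing, here is a rough sketch of the sort of wiring involved, using SpecFlow’s built-in container (the class names and the hook are illustrative, and it assumes the Selenium-based page object from above) :

[Binding]
public class GoogleSearchSteps
{
    private readonly GooglePageObject _googlePageObject;
    //SpecFlow's built-in container resolves constructor arguments for step classes,
    //so the page object (and the IWebDriver it needs) has to be resolvable somewhere.
    public GoogleSearchSteps(GooglePageObject googlePageObject)
    {
        _googlePageObject = googlePageObject;
    }
    //... the [Given]/[When] step methods from above live here
}
[Binding]
public class WebDriverHooks
{
    private readonly IObjectContainer _objectContainer;
    public WebDriverHooks(IObjectContainer objectContainer)
    {
        _objectContainer = objectContainer;
    }
    [BeforeScenario]
    public void RegisterWebDriver()
    {
        //Register a concrete driver so anything depending on IWebDriver can be resolved.
        _objectContainer.RegisterInstanceAs<IWebDriver>(new ChromeDriver());
    }
}

None of this is hard for a developer, but it’s exactly the sort of thing that’s impenetrable to someone who has never written C# before.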

Furthermore, let’s say that on this google search page, we wish to click the “I’m feeling lucky” button in a future test

The addition of this button being able to be used in tests requires someone to write the C# code to support it. Of course, once it’s done, it’s done and can be re-used across tests but again.. I find it isn’t so much testers writing their own automated tests as much as developers doing it for them, and a QA slapping some BDD on the top at the very end.

Creating Re-usable BDD Steps

And this is where we get maybe a little bit controversial. The way I format tests does away with the Page Object Model pattern, in fact, it almost does away with some parts of BDD all together. If I was to write these tests, here’s how I would create the steps :

[Given("I type '(.*)' into a textbox with xpath '(.*)'")]
public void WhenITypeIntoATextboxWithXpath(string text, string xpath)
{
    _webDriver.FindElement(By.Xpath(xpath)).SendKeys(text);
}
[When("I click the button with xpath '(.*)'")]
public void WhenIClickTheButtonWithXpath(string xpath)
{
    _webDriver.FindElement(By.Xpath(xpath)).Click();
}

And the BDD would look like :

Given I type 'ABC' into a textbox with xpath '//input[@title="Search"]'
When I click the button with xpath '(//input[@value="Google Search"])[2]'
[...]

I can even create more simplified steps based off just element IDs:

[Given("I type '(.*)' into a textbox with Id '(.*)'")]
public void WhenITypeIntoATextboxWithId(string text, string id)
{
    _webDriver.FindElement(By.Id(id)).SendKeys(text);
}

By creating steps like this, I actually have to write very minimal C# code. I’ve even created steps like “When an element ‘div’ has a property ‘class’ of value ‘myClass’”. Now instead of having to front-load a tonne of C# training onto my manual QA, I instead teach them about XPath. I give a nice 1 hour lesson on using Chrome Dev Tools to find elements, show them how to test whether their XPath will work correctly, and away we go.

Typically I can spend a day creating a bunch of re-usable steps, and then testers only ever have to worry about writing BDD style Given/When/Then scenarios that use the existing steps.
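To give a flavour of how far you can take this, here’s a hypothetical “Then” step in the same style, so testers can also assert on the page without touching C# (the wording and the assertion are just an example) :

[Then("the element with xpath '(.*)' has a '(.*)' attribute of value '(.*)'")]
public void ThenTheElementWithXpathHasAttributeOfValue(string xpath, string attribute, string value)
{
    var element = _webDriver.FindElement(By.XPath(xpath));
    Assert.AreEqual(value, element.GetAttribute(attribute));
}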

The Good, The Bad, The Ugly

When it comes to the good of this test writing strategy, it’s easy to see that enabling testers to focus on writing BDD and actually coming up with the test scenarios allows immature or non-technical QA teams to start writing tests from Day 1.

It does come at a cost however. BDD is designed so that the same test scenarios a tester would run manually can essentially be cloned and automated. However, a human would never write a test that specifically calls out XPath in the test steps, nor would they realistically be able to execute a test manually that had XPath littered throughout the test description.

For me, getting testers into the mode of writing automated tests without the overhead of learning C# far outweighs any loss of readability, and that’s why I continue to create test suites that follow this pattern.




A massive selling point of using an automated test runner such as Cypress.io is that it can record videos and take screenshots of all of your test runs right out of the box. It comes with a pretty nifty viewer too that allows you to group tests by failures, and then view the actual video of the test being executed.

If we compare that to Selenium, I mean.. Selenium simply does not have that feature. It’s not to say that it can’t be done, you just don’t get it out of the box. In most cases, I’ve had automation testers simply take a screenshot on test failure and that’s it. Often the final screenshot of a failed step is enough to debug what went wrong, but not always. Additionally, there is no inbuilt tool for “viewing” these screenshots, and while MS Paint is enough to open a simple image file, it can get confusing managing a bunch of screenshots in your downloads folder!

Playwright is somewhere in the middle of these. While it doesn’t record actual videos, it can take screenshots of each step along the way, including before and after shots, and it provides a great viewer to pinpoint exactly what went wrong. It mostly works out of the box, but there is a tiny bit of configuration required.

I’m going to use our example from our previous post which uses MSTest, and add to it a little. However, the steps are largely the same if you are using another testing framework or no framework at all.

The full “traceable” MSTest code looks like so :

[TestClass]
public class MyUnitTests : PageTest
{
    [TestInitialize]
    public async Task TestSetup()
    {
        await Context.Tracing.StartAsync(new TracingStartOptions
        {
            Title = TestContext.TestName, //Note this is for MSTest only. 
            Screenshots = true,
            Snapshots = true,
            Sources = true
        });
    }
    [TestCleanup]
    public async Task TestCleanup()
    {
        await Context.Tracing.StopAsync(new TracingStopOptions
        {
            Path = TestContext.TestName + ".zip"
        });
    }
    [TestMethod]
    public async Task WhenDotNetCoreTutorialsSearchedOnGoogle_FirstResultIsDomainDotNetCoreTutorialsDotCom()
    {
        //Our Test Code here. Removed for brevity. 
    }
}

Quite simply :

  • Our TestInitialize method kicks off the tracing for us. The “TestContext” object is an MSTest specific class that can tell us which test is under execution; you can swap this out for a similar class in your test framework or just put any old string in there.
  • Our TestCleanup essentially ends the trace, storing the results in a .zip file.

And that’s it!

In our bin folder, there will now be a zip file for each of our tests. Should one fail, we can go in here and retrieve the zip. Unlike Cypress, there isn’t an all-encompassing viewer where we can group tests, their results and videos. This is because Playwright for .NET is relying a bit on both MSTest and Visual Studio to be test runners, and so there is a bit of a break in tooling when you then want to view traces, but it’s not that much legwork.
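One optional tweak if you don’t want a zip for every single passing test: the trace is only written out when you give StopAsync a path, so you can check the test outcome in the cleanup method and skip saving for passing tests. A rough, MSTest-specific sketch (TestContext.CurrentTestOutcome is the MSTest way of checking this; adjust for your own framework) :

[TestCleanup]
public async Task TestCleanup()
{
    //Only write the trace zip out if the test didn't pass. Passing a null Path discards the trace.
    var failed = TestContext.CurrentTestOutcome != UnitTestOutcome.Passed;
    await Context.Tracing.StopAsync(new TracingStopOptions
    {
        Path = failed ? TestContext.TestName + ".zip" : null
    });
}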

Let’s say our test broke, and we have retrieved the zip. What do we do with it? While you can download a trace viewer locally, I prefer to use Playwright’s hosted version right here https://trace.playwright.dev/

We simply drag and drop our zip file and tada!

I know that’s a lot to take in, so let’s walk through it bit by bit!

Along the top of the page, we have the timeline. This tells us over time how our test ran and the screenshots for each time period. The color coding tells us when actions changed/occurred so we can immediately jump to a certain point in our test.

To the left of the screen, we have our executed steps, we can click on these to immediately jump in the timeline.

In the centre of the page we have our screenshot. But importantly we can switch tabs for a “Before” and “After” view. This is insanely handy when inputting text or filling out other page inputs. Imagine that the test is trying to fill out a very large form, and on the very first step it doesn’t fill in a textbox correctly. The test may not fail until the form is submitted and validation occurs, but in the screenshot of the failure, we may not even be able to see the textbox itself. So this gives us a step-by-step view of what the page looked like as each action happened.


To the right of the screen you’ve got a bunch of debug information including Playwright’s debug output, the console output of the browser, the network output of the browser (Similar to Chrome/Firefox dev tools), but importantly, you’ve also got a snapshot of your own code and which step is running. For instance, here I am looking at the step to fill a textbox.

This is *insanely* helpful. If a test fails, we can essentially know exactly where in our own code it was up to and what it was trying to do, without thinking “Well, it was trying to type text into a textbox, let me try and find where that happens in my code”.

And that’s the Playwright Test Trace Viewer. Is it as good as Cypress’ offering? Probably not quite yet. I would love to see some ability to capture a single zip for an entire test run, not per test case (And if I’ve missed that, please let me know!), but for debugging a single test failure, I think the trace viewer is crazy powerful and yet another reason to give Playwright a try if you’re currently dabbling with Selenium.




These days, end to end browser testing is a pretty standard practice amongst modern development teams. Whether that’s with Selenium, WebDriver.IO or Cypress, realistically as long as you are getting the tests up and running, you’re doing well.

Over the past couple of years, Cypress had become a de facto end to end testing framework. I don’t think in the last 5 years I’ve been at a company that hasn’t at least given it a try and built out some proof of concepts. And look, I like Cypress, but after some time I started getting irritated with a few caveats (Many of which are listed by Cypress themselves here).

Notably :

  • The “same origin” URL limitation (Essentially you must be on the same root domain for the entire test) is infuriating when many web applications run some form of SSO/OAuth, even if using something like Auth0 or Azure AD B2C. So you’re almost dead in the water immediately.
  • Cypress does not handle multiple tabs
  • Cypress cannot run multiple browsers at the same time (So testing some form of two way communication between two browsers is impossible)
  • The “Promise” model and chaining of steps in a test seemed ludicrously unwieldy. And when trying to get more junior team members to write tests, things quickly turned into a “Pyramid of doom”.

As I’ll talk about later in another post, the biggest thing was that we wanted a simple model for users writing tests in Gherkin type BDD language. We just weren’t getting that with Cypress and while I’m sure people will tell me all the great things Cypress can do, I went out looking for an alternative.

I came across Playwright, a cross platform, cross browser automation testing tool that did exactly what it says on the tin with no extras. Given my list of issues above with Cypress, I did have to laugh that this is a very prominent quote on their homepage :

Multiple everything. Test scenarios that span multiple tabs, multiple origins and multiple users. Create scenarios with different contexts for different users and run them against your server, all in one test.

They definitely know which audience they are playing up to.

Playwright has support for tests written in NodeJS, Java, Python and of course, C# .NET. So let’s take a look at the latter and how much work it takes to get up and running.

For an example app, let’s assume that we are going to write a test that has the following test scenario :

Given I am on https://www.google.com
When I type dotnetcoretutorials.com into the search box
And I press the button with the text "Google Search"
Then the first result is domain dotnetcoretutorials.com

Obviously this is a terrible example of a test as the result might not always be the same! But I wanted to just show a little bit of a simple test to get things going.

Let’s get cracking on a C# test to execute this!

Now the thing with Playwright is, it’s actually just a C# library. There isn’t some magical tooling that you have to download or extensions to Visual Studio that you need to get everything working nicely. You can write everything as if you were writing a simple C# unit test.

For this example, let’s just create a simple MSTest project in Visual Studio. You can of course create a test project with NUnit, XUnit or any other testing framework you want and it’s all going to work much the same.

Next, let’s add the Playwright NuGet package with the following command in our Package Manager Console. Because we are using MSTest, let’s add the MSTest specific NuGet package as this has a few helpers that speed things up in the future (Realistically, you don’t actually need this and can install Microsoft.Playwright if you wish).

Install-Package Microsoft.Playwright.MSTest

Now here’s my test. I’m going to dump it all here and then walk through a little bit on how it works.

[TestClass]
public class MyUnitTests : PageTest
{
    [TestMethod]
    public async Task WhenDotNetCoreTutorialsSearchedOnGoogle_FirstResultIsDomainDotNetCoreTutorialsDotCom()
    {
        //Given I am on https://www.google.com
        await Page.GotoAsync("https://www.google.com");
        //When I type dotnetcoretutorials.com into the search box
        await Page.FillAsync("[title='Search']", "dotnetcoretutorials.com");
        //And I press the button with the text "Google Search"
        await Page.ClickAsync("[value='Google Search'] >> nth=1");
        //Then the first result is domain dotnetcoretutorials.com
        var firstResult = await Page.Locator("//cite >> nth=0").InnerTextAsync();
        Assert.AreEqual("https://dotnetcoretutorials.com", firstResult);
    }
}

Here’s some things you may notice!

First, our unit test class inherits from “PageTest” like so :

public class MyUnitTests : PageTest

Why? Well, because the Playwright.MSTest package contains code to set up and tear down browser objects for us (And it also handles concurrent tests very nicely). If we didn’t use this package, either because we are using a different test framework or we want more control, the set up code would look something like :

IPage Page;
[TestInitialize]
public async Task TestInitialize()
{
    var playwright = await Playwright.CreateAsync();
    var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false
    });
    Page = await browser.NewPageAsync();
}

So it’s not the end of the world, but it’s nice that the framework can handle it for us!

Next what you’ll notice is that there are no timeouts *and* all methods are async. By timeouts, what I mean is the bane of every Selenium developer’s existence: “waiting” for things to show up on screen, especially in javascript heavy web apps.

For example, take these two calls one after the other :

//Given I am on https://www.google.com
await Page.GotoAsync("https://www.google.com");
//When I type dotnetcoretutorials.com into the search box
await Page.FillAsync("[title='Search']", "dotnetcoretutorials.com");

In other frameworks we might have to :

  • Add some sort of arbitrary delay after the GoTo call to wait for the page to properly load
  • Write some code to check if a particular element is on screen before continuing (Like a WaitUntil type call)
  • Write some custom code for our Fill method that will poll or retry until we can find that element and type

Instead, Playwright handles all of that under the hood for you and assumes that when you want to fill a textbox, eventually it’s going to show, and so it will wait until it does. The fact that everything is async also means it’s non-blocking, which is great if you are using Playwright locally since it’s not gonna freeze everything on your screen for seconds at a time!

The rest of the test should be pretty self explanatory. We are using some typical selectors to fill out the google search and find the top result, and our Assert comes from our test framework (MSTest in this case). Playwright does come packaged with its own assertion framework, but you don’t have to use it if you don’t want to!

And.. That’s it!

There are some extremely nifty tools that come packaged with Playwright that I’m going to write about in the coming days, including the ability to wire up with Specflow for some BDD goodness. What I will say so far is that I like the fact that Playwright has hit the right balance between being an automation test framework *and* being able to do plain browser automation (For example to take a screenshot of a web page). Cypress clearly leans on the testing side, and Selenium I feel often doesn’t feel like a testing framework as much as it feels like a scripting framework that you can jam into your tests. So far, so good!

Next up, I wanted to take a look at the Playwright inbuilt “Trace Viewer”, check out that post here : https://dotnetcoretutorials.com/2022/05/24/using-playwright-e2e-tests-with-c-net-part-2-trace-viewer/


Visual Studio 2022 17.2 shipped the other day, and in it was a handy little feature that I can definitely see myself using a lot going forward. That is the IEnumerable Visualizer! But before I dig into what it does (And really it’s quite simple), I wanted to quickly talk about why it was so desperately needed.

Let’s imagine I have a class like so :

class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

And somewhere in my code, I have a List of people with a breakpoint on it. Essentially, I want to quickly check the contents of this list and make sure while debugging that I have the right data in the right place.

Our first port of call might be to simply do the “Hover Results View” method like so :

But… As we can see it doesn’t exactly help us to view the contents easily. We can either then go and open up each item individually, or in some cases we can override the ToString method. Neither of which may be preferable or even possible depending on our situation.

We can of course use the “Immediate Window” to run queries against our list if we know we need to find something in particular. Something like :

? people.Where(x => x.FirstName == "John")

Again, it’s very adhoc and doesn’t give us a great view of the data itself, just whether something exists or not.

Next, we can use the Autos/Watch/Locals menu which does have some nice pinning features now, but again, is a tree view and so it’s hard to scroll through large pieces of data easily. Especially if we are trying to compare multiple properties at once.

But now (Again, you require Visual Studio 2022 17.2), notice how in the Autos view we have a little icon called “View” right at the top of the list there. Click that and…

This is the new IEnumerable visualizer! A nice tabular view of the data, which you can even export to Excel if you really need to. While it’s a simple addition and really barebones, it’s something that will see immediate use in helping you debug your collections more accurately.


I was recently asked by another developer on the difference between making a method virtual/override, and simply hiding the method using the new keyword in C#.

I gave him what I thought to be the best answer (For example, you can change the return type when using the “new” keyword), and yet while showing him examples I managed to bamboozle myself into learning something new after all these years.

Take the following code for instance, what will it print?

Parent childOverride = new ChildOverride();
childOverride.WhoAmI();
Parent childNew = new ChildNew();
childNew.WhoAmI();
class Parent
{
    public virtual void WhoAmI()
    {
        Console.WriteLine("Parent");
    }
}
class ChildOverride : Parent
{
    public override void WhoAmI()
    {
        Console.WriteLine("ChildOverride");
    }
}
class ChildNew : Parent
{
    public new void WhoAmI()
    {
        Console.WriteLine("ChildNew");
    }
}

At first glance, I assumed it would print the same thing either way. After all, I’m basically newing up the two different types, and in *both* cases I am casting it to the parent.

When casting like this, I like to tell junior developers that an object “Always remembers who it is”. That is, my ChildOverride can be cast to a Parent, or even an object, and it still remembers that it’s a ChildOverride.

So what does the above code actually print out?

ChildOverride
Parent

So our Override method remembered who it was, and therefore it’s “WhoAmI” method. But our ChildNew did not… Kinda.

Why you might ask? Well it actually is quite simple if you think about it.

When you use the override keyword, it’s overriding the base class and there is a sort of “linkage” between the two methods. That is, it’s known that the child class is an override of the base.

When you use the new keyword, you are saying that the two methods are in no way related. And that your new method *only* exists on the child class, not on the parent. There is no “linkage” between the two.

This is why when you cast to the parent class, the overridden method is known, and the “new” method is not.
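A quick way to see this for yourself is to call the same method through the concrete type instead of the parent type :

//Calling through the concrete type finds the "new" method...
ChildNew childNew = new ChildNew();
childNew.WhoAmI(); //Prints "ChildNew"
//...but the moment we look at it through the parent type, the hidden method is invisible.
Parent asParent = childNew;
asParent.WhoAmI(); //Prints "Parent"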

With all that being said, in many many years of programming in C# I have seldom used the new keyword to hide methods like this. Not only is there very little reason to do so, but it breaks a core SOLID principle, the Liskov Substitution Principle : https://dotnetcoretutorials.com/2019/10/20/solid-in-c-liskov-principle/


Here’s another one from the vault of “Huh, I guess I never thought I needed that until now”

Recently I was trying to write a unit test that required me to paste in some known JSON to validate against. Sure, I could load the JSON from a file, but I really don’t like File IO in unit tests. What it ended up looking like was something similar to :

var sample = "{\"PropertyA\":\"Value\"}";

Notice those really ugly backslashes in there trying to escape my quotes. I get this a lot when working with JSON or even HTML string literals and my main method for getting around it is loading into notepad with a quick find and replace.

Well, starting from C# 11, you can now do the following!

var sample = """
{"PropertyA":"Value"}
""";

Notice those (ugly) three quote marks at the start and end. That’s the new syntax for “Raw String Literals”. Essentially allowing you to mix in unescaped characters without having to start backslashing like a madman.

Also supported is multi line strings like so :

var sample = """
{
    "PropertyA" : "Value"
}
""";

While this feature is officially coming in C# 11 later this year, you can get a taste for it by adding the following to your csproj file.

<LangVersion>preview</LangVersion>

I would say that editor support is not all that great right now. The very latest Visual Studio 2022 seems to handle it fine, however inside VS Code I did have some issues (But it still compiled just fine).

One final thing to note is the absence of the “tick” ` character. When I first heard about this feature, I just assumed it would use the tick character as it’s pretty synonymous with multi line raw strings (at least in my mind). So I will include the discussion from Microsoft about whether they should use the tick character or not here : https://github.com/dotnet/csharplang/blob/main/proposals/raw-string-literal.md#alternatives

With the final decision being

In keeping with C# history, I think " should continue to be the string literal delimiter

I’m less sure on that. I can’t say that three quote marks makes any more sense than a tick, especially when it comes to moving between languages so… We shall see if this lasts until the official release.


User Secrets (Sometimes called Secret Manager) in .NET have been around for quite some time now (I think since .NET Core 2.0). And I’ve always *hated* them. I felt like they encouraged developers to email/slack/teams individual passwords or even entire secret files to each other and call it secure. I also didn’t really see a reason why developers would have secrets locally that were not shared with the wider team. For that reason, centralized secret storage such as Azure Keyvault was always preferable.

But over the past few months, I’ve grown to see their value. And in reality, I use User Secrets more for “this is how local development works on my machine”, rather than actual secrets. Let’s take a look at how User Secrets work and how they can be used, and then later on we can talk more about what I’ve been using them for.

Creating User Secrets via Visual Studio

By far the easiest way to use User Secrets is via Visual Studio. Right click your entry project and select “Manage User Secrets”.

Visual Studio will then work out the rest, installing any packages you require and setting up the secrets file! Easy!

You should be presented with an empty secrets file which we will talk about later.

Even if you use Visual Studio, I highly recommend at least reading the section below on how to do things from the command line. It will explain how things work behind the scenes and will likely answer any questions you have about what Visual Studio is doing under the hood.

Creating User Secrets via Command Line

We can also create User Secrets via the command line! To do so, we need to run the following command in our project folder :

dotnet user-secrets init

The reality is, all this does is generate a guid and place it into your csproj file. It looks a bit like so :

<UserSecretsId>6272892f-ffcd-4039-b82a-b60874e91fce</UserSecretsId>

If you really wanted, you could generate this guid yourself and place it here; there is nothing special about it *except* that between projects on your machine, the guid must be unique. Of course, if you wanted projects to share secrets, you could use the same guid across projects.

From here, you can now set secrets from the command line. It seems janky, but you *must* create a secret via the command line before you can edit the secrets file in a notepad. It’s annoying, but that’s how it works. So in your project folder run the following command :

dotnet user-secrets set "MySecret" "12345"

So.. What does this actually do? It’s quite simple actually. On Windows, you will have the following file :

%APPDATA%\Microsoft\UserSecrets\{guid}\secrets.json

And on Linux :

~/.microsoft/usersecrets/{guid}/secrets.json

Opening this file, you’ll see something like :

{
    "MySecret" : "12345"
}

And from this point on you can actually edit this file in notepad and forget the command line all together. In reality, you could also even create this file manually and never use the command line to add the initial secret as well. But I just wanted to make note that the file *does not* exist until you add your first secret. And, as we will see later, if you have a user secret guid in your csproj file, but you don’t have the corresponding file, you’ll actually throw errors which is a bit frustrating.
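As a side note, if you want to double check what’s currently stored for a project without hunting down the file, you can also list everything from the project folder :

dotnet user-secrets list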

With all of this, when you use Visual Studio, it essentially does all of this for you. But I still think it’s worth understanding where these secrets get stored, and how it’s just a local file on your machine. No magic!

Using User Secrets In .NET Configuration

User Secrets follow the same paradigm as all other configuration in .NET. So if you are using an appsettings.json, Azure Keyvault, Environment Variables etc. It’s all the same, even with User Secrets.

If you installed via the Command Line, or you just want to make sure you have the right packages, you will need to install the following nuget package :

Install-Package Microsoft.Extensions.Configuration.UserSecrets

The next step is going to depend on whether you are using .NET 6 minimal APIs or .NET 5 style Startup classes. Either way, you probably by now understand where you are adding your configuration to your project.

For example, in my .NET 6 minimal API I have something that looks like so :

builder.Configuration.AddEnvironmentVariables()
                     .AddKeyVault()
                     .AddUserSecrets(Assembly.GetExecutingAssembly(), true);

Notice I’m passing “true” as the second argument to AddUserSecrets. That’s because in .NET 6, User Secrets were made “required” by default, and by passing true we make them optional. This is important because if a developer has not set up the user secrets file on their machine yet, this whole thing will blow up unless the source is made optional. The exception will be something like :

System.IO.FileNotFoundException: The configuration file 'secrets.json' was not found and is not optional

Now, our User Secrets are being loaded into our configuration object. Ideally, we should place User Secrets *last* in our configuration pipeline because it means they will be the last overwrite to happen. And… That’s it! Pretty simple. But what sort of things will we put in User Secrets?

What Are User Secrets Good For?

I think contrary to the name, User Secrets are not good for Secrets at all, but instead user specific configuration. Let me give you an example. On a console application I was working with, all but one developer were using Windows machines. This worked great because we had local file path configuration, and this obviously worked smoothly on Windows. However, the Linux user was having issues. Originally, the developer would download the project, edit the appsettings, and run the project fine. When it came time to check in work, they would have to quickly revert or ignore the changes in appsettings so that they didn’t get pushed up. Of course, this didn’t always happen and while it was typically caught in code review, it did cause another round of branch switching, and changes to be pushed.

Now we take that same example and put User Secrets over the top. Now the Linux developer simply edits their User Secrets to change the file paths to suit their machine. They never touch appsettings.json at all, and everything works just perfectly.

Take another team I work with. They had in the past worked with a shared remote database in Azure for local development. This was causing all sorts of headaches when developers were writing or testing SQL migrations. Often their migrations would break other developers. Again, to not break the existing developers’ flow, I created User Secrets and showed the team how they could override the default SQL connection string to instead use their local development machine so we could slowly wean ourselves away from using a shared database.

Another example in a similar vein. The number of times I’ve had developers install SQL Server on their machine as .\SQLEXPRESS or some other named instance rather than the default instance. It happens all the time. Again, while I’m trying to help these developers out, sometimes it’s easier to just say, please add a user secret for your specific set up if you need it and we can resolve the issue later. It almost becomes an unblocking mechanism where developers can actually control their own configuration.

What I don’t think User Secrets are good for are actual secrets. So for example, while creating an emailing integration, a developer put a Sendgrid API key in their User Secrets. But what happens when he pushes that code up? Is he just going to email that secret to developers that need it? It doesn’t really make sense. So anything that needs to be shared, should not be in User Secrets at all.


Imagine an Ecommerce system that generates a unique order number each time a customer goes to the checkout. How would you generate this unique number?

You could :

  • Use the primary key from the database
  • Select the MAX order number from the database and increment by one
  • Write a custom piece of code that uses a table with a unique constraint to “reserve” numbers
  • Use SQL Sequences to generate unique values

As you’ve probably guessed by the title of this post, I want to touch on that final point because it’s a SQL Server feature that I think has gone a bit under the radar. I’ll be the first to admit that it doesn’t solve all your problems (See limitations at the end of this post), but you should know it exists and what it’s good for. Half the battle when choosing a solution is just knowing what’s out there after all!

SQL Sequences are actually a very simple and effective way to generate unique incrementing values in a thread-safe way. That means as your application scales, you don’t have to worry about two users clicking the “order” button on your ecommerce site at exactly the same time and being given the exact same order number.

Getting Started With SQL Sequences

Creating a Sequence in SQL Server is actually very simple.

CREATE SEQUENCE TestSequence
START WITH 1
INCREMENT BY 1

Given this syntax, it’s probably obvious the sort of different options you have. You can, for example, always increment the sequence by 2 :

CREATE SEQUENCE TestSequence
START WITH 1
INCREMENT BY 2

Or you can even descend instead of ascend :

CREATE SEQUENCE TestSequence
START WITH 0
INCREMENT BY -1

And to get the next value, we just need to run SQL like :

SELECT NEXT VALUE FOR TestSequence

It really is that simple! Not only that, you can view Sequences in SQL Management Studio as well (Including being able to create them, view the next value without actually requesting it etc). Simply look for the Sequences folder under Programmability.
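Sequences also pair nicely with column defaults if you want SQL Server to hand the number out at insert time. For example (the table and column names here are made up), this is effectively what the Entity Framework default value mapping shown further down produces :

CREATE TABLE Orders
(
    Id INT IDENTITY(1,1) PRIMARY KEY,
    OrderNo BIGINT NOT NULL DEFAULT (NEXT VALUE FOR TestSequence)
)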

Entity Framework Support

Of course you are probably using Entity Framework with .NET/SQL Server, so what about first class support there? Well.. It is supported but it’s not great.

To recreate our sequence as above, we would override the OnModelCreating of our DbContext (i.e. where we would put all of our configuration anyway), and add the following :

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.HasSequence("TestSequence", x => x.StartsAt(1).IncrementsBy(1));
}

That creates our sequence, but how about using it? Unfortunately, there isn’t really a thing to “get” the next value (For example if you needed it in application code). Most of the documentation revolves around using it as a default value for a column such as :

modelBuilder.Entity<Order>()
    .Property(o => o.OrderNo)
    .HasDefaultValueSql("NEXT VALUE FOR TestSequence");

If you are looking to simply retrieve the next number in the sequence and use it somewhere else in your application, unfortunately you will be writing raw SQL to achieve that. So not ideal.
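If you do need the value in application code, a rough sketch of that raw SQL (dropping down to the underlying connection so it works across EF Core versions, and assuming the TestSequence created earlier) might look like :

//GetDbConnection is an extension method in the Microsoft.EntityFrameworkCore namespace,
//and ConnectionState lives in System.Data.
public async Task<long> GetNextSequenceValue(MyContext context)
{
    var connection = context.Database.GetDbConnection();
    if (connection.State != ConnectionState.Open)
    {
        await connection.OpenAsync();
    }
    using var command = connection.CreateCommand();
    command.CommandText = "SELECT NEXT VALUE FOR TestSequence";
    //CREATE SEQUENCE defaults to bigint, hence the cast to long.
    return (long)await command.ExecuteScalarAsync();
}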

With all of that being said however, if you use Entity Framework migrations as the primary way to manage your database, then the ability to at least create sequences via the ModelBuilder is still very very valuable.

Limitations

When it comes to generating unique values for applications, my usage of SQL Sequences has actually been maybe about 50/50. 50% of the time it’s the perfect fit, but 50% of the time there are some heavy “limitations” that get in the way of it being actually useful.

Some of these limitations that I’ve run into include :

When you request a number from a sequence, no matter what happens from that point on, the sequence is incremented. Why this is important is imagine you are creating a Customer in the database, and you request a number from the sequence and get “156”. When you go to insert that Customer, a database constraint fails and the customer is not inserted. The next Customer will still be inserted as “157”, regardless of the previous insert failing. In short, sequences are not part of transactions and do not “lock and release” numbers. This is important because in some cases, you may not wish to have a “gap” in the sequence at all.

Another issue is that sequences cannot be “partitioned” in any way. A good example is a system I was building that required unique numbers *per year*. And each year, the sequence would be reset. Unfortunately, orders could be backdated and therefore simply waiting until Jan 1st and resetting the sequence was not possible. What would be required is a sequence created for say the next 10 years, with each of these managed independently. It’s not too much of a headache, but it’s still another bit of overhead.

In a similar vein, multi tenancy can make sequences useless. If you have a single database in an ecommerce SaaS product supporting, say, 100 tenants, you cannot use a single sequence for all of them. You would need to create multiple sequences (One for each tenant), which again, can be a headache.

In short, sequences are good when you need a single number incrementing for your one tenant database. Anything more than that and you’re going to have to go with something custom or deal with managing several sequences at once, and selecting the right one at the right time with business logic.


In a previous post, we talked about how we could soft delete entities by setting up a DateDeleted column (Read that post here : https://dotnetcoretutorials.com/2022/03/16/auto-updating-created-updated-and-deleted-timestamps-in-entity-framework/). But if you’ve ever done this (Or used a simple “IsDeleted” flag), you’ll know that it becomes a bit of a burden to always have the first line of your query go something like this :

dbSet.Where(x => x.DateDeleted == null);

Essentially, you need to remember to always be filtering out rows which have a DateDeleted. Annoying!

Microsoft have a great way to solve this with what’s called “Global Query Filters”. And the documentation even provides an example for how to ignore soft deletes in your code : https://docs.microsoft.com/en-us/ef/core/querying/filters

The problem with this is that it only gives examples on how to do this for each entity, one at a time. If your database has 30 tables, all with a DateDeleted flag, you’re going to have to remember to add the configuration each and every time.

In previous versions of Entity Framework, we could get around this by using “Conventions”. Conventions were a way to apply configuration to a broad set of entities based on.. well.. conventions. So for example, you could say “If you see an IsDeleted boolean field on an entity, we always want to add a filter for that”. Unfortunately, EF Core does not have conventions (But it may land in EF Core 7). So instead, we have to do things in a bit of a rinky dink way.

To do so, we just need to override OnModelCreating with a bit of extra code (Of course we can extract this out to helper methods, but for simplicity I’m showing where it goes in our DbContext).

public class MyContext: DbContext
{
	protected override void OnModelCreating(ModelBuilder modelBuilder)
	{
		foreach (var entityType in modelBuilder.Model.GetEntityTypes())
		{ 
			//If the actual entity is an auditable type. 
			if(typeof(Auditable).IsAssignableFrom(entityType.ClrType))
			{
				//This adds (In a reflection type way), a Global Query Filter
				//https://docs.microsoft.com/en-us/ef/core/querying/filters
				//That always excludes deleted items. You can opt out by using dbSet.IgnoreQueryFilters()
				var parameter = Expression.Parameter(entityType.ClrType, "p");
				var deletedCheck = Expression.Lambda(Expression.Equal(Expression.Property(parameter, "DateDeleted"), Expression.Constant(null, typeof(DateTimeOffset?))), parameter);
				modelBuilder.Entity(entityType.ClrType).HasQueryFilter(deletedCheck);
			}
		}
		
		base.OnModelCreating(modelBuilder);
	}
}

What does this do? It loops through every entity type registered on the model, checks whether it inherits from our Auditable base class, and if so builds an expression equivalent to p => p.DateDeleted == null and registers it as a Global Query Filter for that entity.

Of course, we can use this same loop to add other “Conventions” too. Things like adding an Index to the DateDeleted field are possible via the OnModelCreating override.

Now, whenever we query the database, Entity Framework will automatically filter our soft deleted entities for us!
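For example, assuming a Customers DbSet on the context, the difference looks like this (and as the code comment above mentions, IgnoreQueryFilters lets you opt back in to seeing deleted rows for a particular query) :

//Soft deleted customers are filtered out automatically by the global query filter.
var activeCustomers = await context.Customers.ToListAsync();
//For the rare screen that genuinely needs deleted rows, opt out per query.
var allCustomers = await context.Customers.IgnoreQueryFilters().ToListAsync();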


In any database schema, it’s extremely common to have the fields “DateCreated, DateUpdated and DateDeleted” on almost every entity. At the very least, they provide helpful debugging information, but further, the DateDeleted affords a way to “soft delete” entities without actually deleting them.

That being said, over the years I’ve seen some pretty interesting ways in which these have been implemented. The worst, in my view, is writing C# code that specifically updates the timestamp when an entity is created or updated. While simple, one clumsy developer later and you aren’t recording any timestamps at all. It relies entirely on everyone remembering to update the timestamp. Other times, I’ve seen database triggers used which.. works.. But then you have another problem in that you’re using database triggers!

There’s a fairly simple method I’ve been using for years and it involves utilizing the ability to override the save behaviour of Entity Framework.

Auditable Base Model

The first thing we want to do is actually define a “base model” that all entities can inherit from. In my case, I use a base class called “Auditable” that looks like so :

public abstract class Auditable
{
	public DateTimeOffset DateCreated { get; set; }
	public DateTimeOffset? DateUpdated { get; set; }
	public DateTimeOffset? DateDeleted { get; set; }
}

And a couple of notes here :

  • It’s an abstract class because it should only ever be inherited from
  • We use DateTimeOffset because we will then store the timezone along with the timestamp. This is a personal preference but it just removes all ambiguity around “Is this UTC?”
  • DateCreated is not null (Since anything created will have a timestamp), but the other two dates are! Note that if this is an existing database, you will need to allow nullables (And work out a migration strategy) as your existing records will not have a DateCreated.

To use the class, we just need to inherit from it with any Entity Framework model. For example, let’s say we have a Customer object :

public class Customer : Auditable
{
	public int Id { get; set; }
	public string Name { get; set; }
}

So all the class has done is mean we don’t have to copy and paste the same 3 date fields everywhere, and that it’s enforced. Nice and simple!

Overriding Context SaveChanges

The next thing is maybe controversial, and I know there’s a few different ways to do this. Essentially we are looking for a way to say to Entity Framework “Hey, if you insert a new record, can you set the DateCreated please?”. There are things like Entity Framework hooks and a few nuget packages that do similar things, but I’ve found the absolute easiest way is to simply override the save method of your database context.

The full code looks something like :

public class MyContext: DbContext
{
	public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
	{
		var insertedEntries = this.ChangeTracker.Entries()
							   .Where(x => x.State == EntityState.Added)
							   .Select(x => x.Entity);
		foreach(var insertedEntry in insertedEntries)
		{
			var auditableEntity = insertedEntry as Auditable;
			//If the inserted object is an Auditable. 
			if(auditableEntity != null)
			{
				auditableEntity.DateCreated = DateTimeOffset.UtcNow;
			}
		}
		var modifiedEntries = this.ChangeTracker.Entries()
				   .Where(x => x.State == EntityState.Modified)
				   .Select(x => x.Entity);
		foreach (var modifiedEntry in modifiedEntries)
		{
			//If the modified object is an Auditable. 
			var auditableEntity = modifiedEntry as Auditable;
			if (auditableEntity != null)
			{
				auditableEntity.DateUpdated = DateTimeOffset.UtcNow;
			}
		}
		return base.SaveChangesAsync(cancellationToken);
	}
}

Now your context may have additional code, but this is the bare minimum to get things working. What this does is :

  • Gets all entities that are being inserted, checks if they inherit from Auditable, and if so sets the Date Created.
  • Gets all entities that are being updated, checks if they inherit from Auditable, and if so sets the Date Updated.
  • Finally, call the base SaveChanges method that actually does the saving.

Using this, we are essentially intercepting when Entity Framework would normally save all changes, and updating all timestamps at once with whatever is in the batch.
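One thing worth noting: this override only covers SaveChangesAsync. If any of your code still calls the synchronous SaveChanges, you’ll want to apply the same stamping there too. A small sketch (ApplyAuditTimestamps being a hypothetical helper containing the two loops above) :

public override int SaveChanges()
{
	ApplyAuditTimestamps(); //Hypothetical helper wrapping the inserted/modified loops above
	return base.SaveChanges();
}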

Handling Soft Deletes

Deletes are a special case for one big reason. If we actually try and call delete on an entity in Entity Framework, it gets added to the ChangeTracker as… well… a delete. And to unwind this at the point of saving and change it to an update would be complex.

What I tend to do instead is on my BaseRepository (Because.. You’re using one of those right?), I check if an entity is Auditable and if so, do an update instead. The copy and paste from my BaseRepository looks like so :

public async Task<T> Delete(T entity)
{
	//If the type we are trying to delete is auditable, then we don't actually delete it but instead set it to be updated with a delete date. 
	if (typeof(Auditable).IsAssignableFrom(typeof(T)))
	{
		(entity as Auditable).DateDeleted = DateTimeOffset.UtcNow;
		_dbSet.Attach(entity);
		_context.Entry(entity).State = EntityState.Modified;
	}
	else
	{
		_dbSet.Remove(entity);
	}
	return entity;
}

Now your mileage may vary, especially if you are not using the Repository Pattern (Which you should be!). But in short, you must handle soft deletes as updates *instead* of simply calling Remove on the DbSet.

Taking This Further

What’s not shown here is that we can use this same methodology to update many other “automated” fields. We use this same system to track the last user to Create, Update and Delete entities. Once this is up and running, it’s often just a couple more lines to instantly gain traceability across every entity in your database!
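As a sketch of what that might look like (the property names are purely illustrative), the base class simply grows a few extra fields, and the same SaveChangesAsync override sets them, typically from something like an injected “current user” service :

public abstract class Auditable
{
	public DateTimeOffset DateCreated { get; set; }
	public string CreatedBy { get; set; }
	public DateTimeOffset? DateUpdated { get; set; }
	public string UpdatedBy { get; set; }
	public DateTimeOffset? DateDeleted { get; set; }
	public string DeletedBy { get; set; }
}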
