Pretty much any project that allows file uploads is going to utilize some cloud storage provider. It could be Rackspace, it could be Google Cloud, it could be AWS S3, or in this post's case, it's going to be Azure Blob Storage.

Using Blob Storage in .net core really isn’t that different to using it in the full framework, but it’s worth quickly going through it with some quickfire examples.

Getting Required Azure Details

By this stage, you should already have an Azure account set up. Azure has a free tier for almost all products for the first year, so you can usually get some small apps up and running completely free of charge. Blob Storage doesn't actually have a free tier, but it costs literally cents to store a GB of data. Crazy. I should also note that if you have an MSDN subscription from your work, you get monthly Azure credits (up to $150 depending on your subscription level) for the lifetime of your MSDN subscription.

Once you have your account setup, you need to head into the Azure portal and follow the instructions to create a storage account. Once created, inside your storage account settings you need to find your access keys.

The end screen should look like this (Minus the fact I’ve blurred out my keys) :

Write down your storage account name and your keys; you will need these later to connect to your storage account via code.

Project/Nuget Setup

For this tutorial I'm just working inside a console application. That's mostly because it's easier to try and retry things when we are working on something new, and it means we don't have to fiddle around with other ASP.net boilerplate. We can just open up our Main method and start writing code.

Inside your project, you need to run the following commands from your Nuget Package Manager.
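Assuming you are using the standard Azure Storage client library (WindowsAzure.Storage at the time of writing) :

```powershell
Install-Package WindowsAzure.Storage
```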

This is all you need! It should be noted that the same package will be used if you are developing on full framework too.

Getting/Creating A Container

Containers are just like S3 buckets in Amazon, a "bucket" or "collection" to throw your files into. You can create containers via the portal, and some might argue that's safer, but where is the fun in that! So let's get going and create our own container through code.

First, we need to create the object to hold all our storage account details and then create a “client” to do our bidding. The code looks a bit like this :
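Something along these lines, where the account name and key are placeholders for the values you noted down earlier :

```csharp
// Account name and key copied from the Azure portal (placeholders here).
StorageCredentials storageCredentials = new StorageCredentials("mystorageaccount", "mykey");

// Second parameter is whether to use HTTPS.
CloudStorageAccount storageAccount = new CloudStorageAccount(storageCredentials, true);

// The client that all our blob operations will go through.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
```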

When we create our storage credentials object, we pass in the account name and account key we retrieved from the Azure portal. We then create a cloud storage account object; the second param in this constructor is whether we want to use HTTPS. Unless there is some specific reason you don't… just put true here. And then we go ahead and create our client.

Next we actually want to create our container via code. It looks a bit like this :
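Here "mycontainer" is just an example name, swap in whatever you like (container names must be lowercase) :

```csharp
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
await container.CreateIfNotExistsAsync();
```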

That container object is going to be the keys to the world. Almost everything you do in blob storage will be run off that object.

So our full C# code for our console app that creates a container called “mycontainer” in our Azure Cloud Storage account looks like the following :
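Roughly like this, again with placeholder credentials :

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

class Program
{
    static void Main(string[] args)
    {
        // Quick wrapper so we can await things from a console Main (pre C# 7.1).
        MainAsync().GetAwaiter().GetResult();
    }

    static async Task MainAsync()
    {
        // Placeholder account name/key - use the values from your Azure portal.
        StorageCredentials storageCredentials = new StorageCredentials("mystorageaccount", "mykey");
        CloudStorageAccount storageAccount = new CloudStorageAccount(storageCredentials, true);
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

        CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
        await container.CreateIfNotExistsAsync();
    }
}
```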

It's important to note that every remote action in the storage library is async. Therefore I've done a quick wrap of an async method in our Main entry point to get async working. Obviously when you are working inside ASP.net Core you won't have this issue (and as a side note, from C# 7.1 you won't have the issue in console apps either, as async Main entry points are being added to get around this wrapper).

Running this code, and viewing the overview of the storage account, we can see that our container has been created.

Uploading A File

Uploading a blob is incredibly simple. If you already have the file on disk, you can upload it simply by creating a reference to your blob (which doesn't need to exist yet) and uploading.
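For example (the blob name and local path are just illustrative) :

```csharp
// The blob doesn't need to exist yet - this is just a reference/name.
CloudBlockBlob blob = container.GetBlockBlobReference("myfile.txt");

// Upload an existing file from disk.
await blob.UploadFromFileAsync("C:\\temp\\myfile.txt");
```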

You can also upload direct from a stream (Great if you are doing a direct pass through of a file upload on the web), or even upload a raw string.
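Something like the following :

```csharp
// From a stream (e.g. passing a web file upload straight through).
using (var fileStream = File.OpenRead("C:\\temp\\myfile.txt"))
{
    await blob.UploadFromStreamAsync(fileStream);
}

// Or just a raw string.
await blob.UploadTextAsync("Hello from blob storage!");
```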

The important thing to remember is that in the cloud, a file is just a blob. You can name your blobs with file extensions, but blob storage doesn't care about mime types or anything along those lines. It's just a bucket to put whatever you want in.

Downloading A File

Downloading a file via C# code is just as easy.
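For example (again, names and paths are just illustrative) :

```csharp
CloudBlockBlob blob = container.GetBlockBlobReference("myfile.txt");

// FileMode.Create will overwrite any existing local file of the same name.
await blob.DownloadToFileAsync("C:\\temp\\mydownload.txt", FileMode.Create);
```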

Similar to uploading a file, you can download files to a string, stream or byte array also.
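A few quick examples :

```csharp
// To a string.
string contents = await blob.DownloadTextAsync();

// To a stream.
using (var memoryStream = new MemoryStream())
{
    await blob.DownloadToStreamAsync(memoryStream);
}

// To a byte array (fetch attributes first so we know how big the blob is).
await blob.FetchAttributesAsync();
byte[] buffer = new byte[blob.Properties.Length];
await blob.DownloadToByteArrayAsync(buffer, 0);
```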

Leasing A File

Leasing is an interesting concept and one that has more uses than you might think. The original intent is that while a user has downloaded a file and is possibly editing it, you can ensure that no other thread/app can access that file for a set amount of time. The lease time is a max of 1 minute, but the lease can be renewed indefinitely. If you are building some complex file management system, this is definitely handy.

But leasing has another great use, and that is handling race conditions across apps that have been scaled out. You might typically see Redis used for this, but blob storage works too.

Consider the following code :
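Something along these lines :

```csharp
CloudBlockBlob blob = container.GetBlockBlobReference("myfile.txt");

// Take a 30 second lease on the blob. The returned lease id can be used
// later to renew or release the lease.
string leaseId = await blob.AcquireLeaseAsync(TimeSpan.FromSeconds(30), null);

// This second attempt throws a StorageException because the blob is already leased.
string secondLeaseId = await blob.AcquireLeaseAsync(TimeSpan.FromSeconds(30), null);
```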

When the second AcquireLease runs, an exception is thrown because a lease cannot be obtained, and can't be obtained for another 30 seconds. A full rundown on leasing is too much for this post, but it is not too much work to write code that acquires a lease on a blob and, if a lease cannot be obtained, does a spin wait until a lock can be acquired.

AWS Lambda is usually seen as a way for small, short functions to be run in the cloud. Usually small backend processes like resizing images or reading messages from a queue and processing them. But you can actually deploy an entire API on Lambda, and not only that, you can deploy an existing ASP.net Core API with very little extra effort.

It’s not always going to be the right architecture to be running a full API in Lambda, but if your API is being called infrequently and doesn’t need to be chewing resources 24/7, it might be a real cost saver.

Setup

If you haven't already, you need to install the AWS Toolkit for Visual Studio 2017. Make sure VS2017 is closed while you install it. When you open Visual Studio again, you will see a window asking you to enter a few AWS credentials; follow the instructions to do so. This is mostly for deployment purposes from Visual Studio (which this tutorial uses), but isn't strictly required long term if you don't want Visual Studio having access to your AWS account all the time.

For this tutorial, I'm creating a standard .net core API project in Visual Studio 2017 (the one that has the "ValuesController" and nothing else). You can either create a new project to follow along or use your existing project, it's up to you.

With your project open, you need to install the Amazon.Lambda.AspNetCoreServer nuget package. It's currently in preview, so the command you need to run from your Package Manager Console is the following :
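The -Pre switch is what lets nuget pull down the preview version :

```powershell
Install-Package Amazon.Lambda.AspNetCoreServer -Pre
```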

One of the most frustrating bugs/issues in Visual Studio is that you can’t add CLI Tooling that lives in Nuget via the nuget package manager (See bugs like the following : https://github.com/aspnet/Scaffolding/issues/422). So you need to manually open up your csproj file of your project. In the ItemGroup for your Packages, add the following line :
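The version number here is only an example, grab whatever the latest Amazon.Lambda.Tools version is :

```xml
<PackageReference Include="Amazon.Lambda.Tools" Version="1.4.0" />
```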

And in your ItemGroup for DotNetCliTools add the following :
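Again, the version is just illustrative and should match the package reference above :

```xml
<DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="1.4.0" />
```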

My complete csproj looks like the following if you get lost. This is just the standard .net core api project with the above nuget packages installed.
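Something like this. The exact package list and version numbers will differ depending on when you read this and which template you started from, so treat them as placeholders :

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.0.4" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.3" />
    <PackageReference Include="Amazon.Lambda.AspNetCoreServer" Version="0.10.1-preview1" />
    <PackageReference Include="Amazon.Lambda.Tools" Version="1.4.0" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="1.4.0" />
  </ItemGroup>

</Project>
```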

Code Changes

Add a class in your project named “LambdaFunction”, it should inherit from the abstract class “APIGatewayProxyFunction”. The code of this class should look like :
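Something like the below. The namespace matches my AWSApiExample project, so swap in your own, and note that the UseApiGateway() call comes from the preview package at the time of writing (later versions may not need it) :

```csharp
using System.IO;
using Amazon.Lambda.AspNetCoreServer;
using Microsoft.AspNetCore.Hosting;

namespace AWSApiExample
{
    public class LambdaFunction : APIGatewayProxyFunction
    {
        protected override void Init(IWebHostBuilder builder)
        {
            builder
                .UseApiGateway()                                  // Serve requests coming from API Gateway instead of Kestrel.
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseStartup<Startup>();                           // Reuse the exact same Startup as the normal site.
        }
    }
}
```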

The biggest thing people ask when they hear about Lambda is “How can I test that locally?”.

Our LambdaFunction above is our entry point for AWS Lambda, but you will notice that it's pretty damn close to how our program.cs file looks in an ASP.net core project. When you run locally, our program.cs bootstraps Kestrel/IIS and starts hosting on our local machine. When you run in AWS Lambda, it doesn't call Program.cs; it instead calls the code above and bootstraps everything for us, but the underlying code of controllers, services, pipelines etc. is all the same. That means for the most part, the site locally and inside Lambda should function more or less identically.

Add a new JSON file to your project and name it “aws-lambda-tools-defaults.json”. This file holds a bunch of defaults when publishing from Visual Studio so you don’t have to type them over and over. Open up the file and enter the following :
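Something along these lines. Region, memory and timeout are just sensible defaults you can tweak :

```json
{
  "profile": "default",
  "region": "us-west-2",
  "configuration": "Release",
  "framework": "netcoreapp1.0",
  "function-runtime": "dotnetcore1.0",
  "function-memory-size": 256,
  "function-timeout": 30,
  "function-handler": "AWSApiExample::AWSApiExample.LambdaFunction::FunctionHandlerAsync"
}
```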

The important thing to change here is the "function-handler". The actual contents of this setting should be "{AssemblyName}::{Namespace.ClassName}::{MethodName}". In my case I named my project AWSApiExample, so you will need to swap this out for your project's name. The "FunctionHandlerAsync" part is because our entry point that inherits from "APIGatewayProxyFunction" actually implements a method called "FunctionHandlerAsync" behind the scenes.

Another quick note is that I'm deploying this to the us-west-2 region. Obviously change this if another region suits you better.

Deployment

Deployment could not be simpler! Right click your project and select “Publish To AWS Lambda”.

You should be given a screen to type in a few details, including naming your function. Fill these out (most should already be filled out), and be sure to tick the box that saves your settings. This actually just saves back to our JSON file from the last step, so you don't have to pre-build that JSON file, but it's much easier to copy and paste it in and just change the few details you need. Click Next.

For permissions, if you have never used Lambda before (And don’t have a role for it), create a new role with the AWSLambdaFullAccess policy. Everything else on this page just leave default unless you know what you are doing!

Press upload and you should see your app being published up to AWS. This would usually be the end of a regular Lambda function publish, but now we need to set up our API Gateway. The API Gateway in Amazon simply acts as a proxy between the public and your Lambda. It can handle complex scenarios such as authorization, header inspection etc, but we will actually be using it more or less as a straight pass through.

Open up your AWS Dashboard in your browser and head over to the API Gateway services screen. Go ahead and create a new API, name it whatever you like.

On the next screen (the Resources section of your new API), select the Actions dropdown and hit Create Resource. Here is where we basically create a wildcard pass through to our Lambda function. Tick the box that says "Configure as proxy resource" and leave everything else as is.

The next screen asks what you actually want to proxy to. Select Lambda Function Proxy, and select the region you deployed your Lambda function to. The most infuriating thing is that you have to type your function name here (Why can’t it just be a drop down!). Hopefully you remember what you deployed your function as (If not, in the AWS dashboard quickly pop over to the Lambda section and jog your memory).

Still in the Resources section, select the Actions dropdown again and select "Deploy API". A popup will come up; select new stage and type in "prod" as your stage name for now.

After hitting deploy, you will be given an "invoke" URL. I open my URL and add /api/values on the end (since I'm using the default .net Core API template). And what do you know!

So as we see, we can take an entire API and lift and shift it to Lambda. Again, I'm not really sure whether this makes total sense to do and I can't see it being a hugely popular move, but it can be done (and fairly easily at that!).

In ASP.net core there is this concept of "environments", where you set at the machine level (or sometimes at the code level) which "environment" you are in. Environments may include development, staging, production, or actually any name you want. While you have access to this name via code, the main use is to do configuration swaps at runtime. In the full framework you might have previously used web.config transforms to transform the configuration at build time; ASP.net core instead determines its configuration at runtime.

It should be noted that the term "environment variable" is doing double duty here. The first use is the actual definition of an environment variable on your machine. The second is that you can add an "environment" value to that environment variable list to describe which ASP.net Core environment you are using… That may sound confusing but it will all come together at the end!

Setting Environment Variable In Visual Studio

When developing using Visual Studio, any debugging is done inside an IIS Express instance. You can set environment variables for debugging by right clicking your project, selecting properties, and then selecting "Debug" on the left hand menu. By default you should see that a variable has been added for you called "ASPNETCORE_ENVIRONMENT" and its value should be Development.

You can set this to another value (like "Staging") when you want to debug using another configuration set. It should be noted that editing this value actually edits your "launchSettings.json" and sets the variable in there. It ends up looking like this :
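Trimmed down to the relevant profile, something like :

```json
{
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Staging"
      }
    }
  }
}
```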

Setting Environment Variable Machine Wide

You will need to set the ASP.net core environment machine wide if…

  • You are not using Visual Studio/IIS Express
  • You are running the website locally but using full IIS
  • You are running the website on a remote machine for staging/production

To set your entire machine to an environment you should do the following :

For Windows in a Powershell window
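One way to do it (run the window as administrator, and note the change only shows up in processes started afterwards) :

```powershell
[Environment]::SetEnvironmentVariable("ASPNETCORE_ENVIRONMENT", "Staging", "Machine")
```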

For Mac/Linux edit your .bashrc or .bash_profile file and add
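For example :

```bash
export ASPNETCORE_ENVIRONMENT=Staging
```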

It should be noted that setting this up machine wide means that your entire machine (and every project within it) is going to run using that environment variable. This is extremely important if you are currently running multiple websites on a single box for QA/staging purposes and these each have their own configurations. Setting the environment machine wide may cause issues because each site will think they are in the same environment. If this is you, read the next section!

Setting Environment Variable via Code

You can go through mountains of official ASP.net core documentation and miss the fact that you can set the ASP.net Core environment through code. For some reason it's a little known feature, but it's the one that makes it possible for multiple websites with different environments to live on the same machine.

In your ASP.net core site, you should have a program.cs. Opening this you will see that you have a WebHostBuilder object that gets built up. You can actually add a little known command called “UseEnvironment” like so :
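A rough sketch of what that looks like in a typical 1.x program.cs (the "Staging" value is just an example) :

```csharp
using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .UseEnvironment("Staging")   // Force the environment in code.
            .Build();

        host.Run();
    }
}
```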

Obviously as a constant string it’s probably not that helpful, but this is regular old C# code so how you find that string is up to you. You could load a text file for example and from there decide which environment to use. Any logic is possible!

Accessing Environment via Code

To access the environment in your code (such as in a controller), you just have to inject IHostingEnvironment into your controller. The interface is already registered for you by the web host, so you don't need to add it to your service collection manually.

Here is some example code that returns the environment I am currently in :
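Something like this, using a throwaway controller :

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;

public class EnvironmentController : Controller
{
    private readonly IHostingEnvironment _hostingEnvironment;

    public EnvironmentController(IHostingEnvironment hostingEnvironment)
    {
        _hostingEnvironment = hostingEnvironment;
    }

    [HttpGet]
    public string Get()
    {
        // Returns "Development", "Staging", "Production" etc.
        return _hostingEnvironment.EnvironmentName;
    }
}
```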

Swapping Config Based Off Environment

In your startup.cs, the constructor of the Startup class builds your configuration. By default it adds a JSON file called "appsettings.json", but it can also be used to add a file called "appsettings.{environment}.json", where {environment} is swapped out for the environment you are currently running in. The code looks like this :
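(This is more or less the standard 1.x template constructor, with the environment specific file added.)

```csharp
public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        // e.g. appsettings.Staging.json when running in the Staging environment.
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }
```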

Any settings in your base appsettings.json are overwritten using your environment specific file.

Configuration itself is a massive topic and Microsoft has already done an incredible job of its documentation, so head over and read it here.

Naming Conventions Per Environment

Again, another little known "feature" of using environments is that naming conventions can be used to completely change how your application starts. By default, you have a file called startup.cs that contains how your app is configured, how services are built etc. But you can actually create a class called "Startup{EnvironmentName}" and it will be used instead of your default if it matches the environment name your app is currently running in.

Alternatively, if you stick with a single startup.cs file, the Configure and ConfigureServices methods can be swapped out for Configure{EnvironmentName} and Configure{EnvironmentName}Services to write completely different pipelines or services (see the sketch below). It's not going to be that handy on larger projects because it doesn't call both the default and the environment specific methods, it only calls one, which could lead to a lot of copy and pasting. But it's handy to know!
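A quick sketch of the method convention (the services and middleware here are just examples) :

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    // Used when the environment is "Development".
    public void ConfigureDevelopmentServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void ConfigureDevelopment(IApplicationBuilder app)
    {
        app.UseDeveloperExceptionPage();
        app.UseMvc();
    }

    // Used for every other environment.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc();
    }
}
```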

Google have recently released support for ASP.net core on their "GCloud" hosting platform, and along with it some documentation on how to get up and running here. The documentation itself uses their container engine, but you can actually get going on GCloud a lot faster using a couple of command line tools (very similar to how you might get going on Azure). So for this tutorial we will take the faster route, and then you can transition to container hosting if it fits your needs better.

Something super important is that at the time of writing, GCloud only supports .net core 1.0 (Not 1.1). Ensure that your apps are running on .net core 1.0 before attempting to deploy them!

Note : This tutorial uses Windows Powershell a lot. If you are on Mac or Linux that could be an issue! But the commands will be very similar, if not exactly the same, in your own native command line.

Setting Up GCloud

First install the GCloud SDK from here https://cloud.google.com/sdk/. This gives you a few extra powershell/cmd commands so you can deploy straight from your desktop (and obviously, in the future, from your build server). Be sure to install the "beta" components along the way too (you will see this inside the installer). More than likely you will have to restart your machine after installing; if the powershell commands don't work off the bat, try rebooting first (it is Windows after all).

If you haven't already, head to Google Cloud at https://cloud.google.com/ and sign up. It's free! You can read more about the limits of the free tier here, but for the purposes of this tutorial there is no way you are going to blow through the free hosting.

Once you are signed up and in, create your first project on the dashboard. Name it whatever you want – I went with “My First Project”.

Now, here it gets a little difficult but you should be able to handle it! Open a powershell prompt and type :
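The login command is the same on every platform :

```powershell
gcloud auth login
```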

A browser window should open instantly on your machine asking you to login to your Google cloud account and give permission to access your GCloud account. After following the prompts your powershell window should have something similar to the following in it :

Great! You are now logged into your account. On your GCloud dashboard in your browser, you should see something similar to :

Take the ProjectID and type the following into your Powershell window
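Swap in your own ProjectID (the one below is obviously made up) :

```powershell
gcloud config set project my-first-project-123456
```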

Now we are set up to deploy from Powershell!

Setting Up Your ASP.net Core Project

For this tutorial I’m just using a basic hello world app (The default empty project from Visual Studio), but it really shouldn’t matter what your app is doing.

Add a file to your project called “app.yaml”. This is information for GCloud to work out how your app should be run. Inside the app.yaml file put the following :
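This tells App Engine to run the app on the flexible environment with the aspnetcore runtime :

```yaml
runtime: aspnetcore
env: flex
```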

After creating this file you need to set it to copy to the output directory. If you are on project.json this means opening up your project.json file, and finding the publishOptions node. It should end up looking like this :
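Your existing include list may differ a little; the important part is that app.yaml is in there :

```json
"publishOptions": {
  "include": [
    "wwwroot",
    "web.config",
    "appsettings.json",
    "app.yaml"
  ]
}
```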

If you are on the latest version of .net core tooling (e.g. csproj), then you need to set the file to copy always in Visual Studio by right clicking the file, selecting properties, and then changing the drop down to Copy Always.

Again, another reminder that GCloud only supports .net core 1.0 (not 1.1). This means that you must target 1.0 and not 1.1. Unfortunately switching between .net core versions is another kettle of fish that deserves its own post really. Have a quick google around for how to switch .net core target versions if you are not already using 1.0.

Deployment

Now onto deployment. For my test application I’m going to do a simple dotnet publish, but you may have something more complicated going on so you might want to read the docs here. But for now, let’s just run the following inside our project directory from Powershell.
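A plain release publish is enough here :

```powershell
dotnet publish -c Release
```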

Take your powershell and move into your release directory. The path will be something like “bin\Release\netcoreapp1.0\publish”.

Run the following command from powershell :
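Depending on your SDK version you may or may not need the beta prefix :

```powershell
gcloud beta app deploy .\app.yaml
```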

This is basically the entire deploy process. You are telling GCloud to take everything inside that folder (Along with the app.yaml file describing what it is), and push it to your logged in project.

If this is your first time deploying, you will be asked which region you wish to deploy to, and then it will take a good 5 – 10 minutes for the deployment to complete. Just hang in there! Once complete, you should be able to type the following powershell command which will open your newly deployed app in your browser!
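The command in question :

```powershell
gcloud app browse
```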

Moving Forward

The first thing you should learn is how to switch off your running instance. It’s sort of buried a bit behind layers of menus, but this link should take you directly there : https://console.cloud.google.com/appengine/versions. Select the latest version of your app (If you’ve only deployed once, then there will only be one), and hit stop up top.

Before you switch it off though, take a look around the portal and check out some of the awesome features like logging (Direct Kestrel logging which is pretty nifty), and monitoring. Both of which look very very good.

Drop a comment below with your experience getting up and running on GCloud!