One of the easiest ways to publish your application to Azure App Service is straight from Visual Studio. While not an ideal solution long term or for any team over the size of one, it’s a great way for a solo developer to quickly get something up and running. Best of all, it doesn’t rely on any special configuration, scripts or external services.

Publish via Login

If the Azure account you are publishing to is the same one you use for your Visual Studio account, or you have the username/password combination for the Azure account you are publishing to, you can publish directly from Visual Studio.

Inside Solution Explorer, right click your project and select “Publish”. On the next screen, select “Azure App Service” and then “Select Existing”. The “Create New” option can be used if you don’t already have an App Service instance (rather than creating one via the portal) – but we won’t do that this time around.

Click “Publish”. On the next screen you can select your Account, App Service and Deployment Slot to push to. Click “OK” and… that’s it. Really. You should see your project start building, publishing (via Web Deploy), and then Visual Studio will open your browser at the App Service URL.

Publish via Publish Profile

A publish profile is a settings file that holds all of the information Visual Studio needs to deploy your code directly to Azure. The only real difference is that the publish profile can be given to you without you having any Azure login credentials yourself. Other than that, the actual deployment is largely the same.

To get a publish profile from the Azure Portal, head over to the dashboard of your App Service. Along the top you should see an option to “Get Publish Profile”.

With this file you are now able to deploy direct from Visual Studio. It’s important that this file is not just left lying around somewhere. With it, anyone can push code to your app service. So while it’s not quite the keys to the kingdom as much as a user/pass combination is, it’s pretty damn close.

Inside Solution Explorer, right click your project and select “Publish”. Select “Import Profile” (you may have to scroll to the right to see this option).

Hit Publish! Immediately your project will build, publish and then open your browser at the App Service URL.

Obvious Issues

I touched on it earlier, but I feel the need to point it out further. Publishing from Visual Studio is not a long term strategy. It cuts out any CI/CD process you have running (if any), it creates possible inconsistencies in what you are deploying (think “Did you get latest before you published?”), and in teams of more than one it creates a single point of failure where one developer does all the deployments.

That last point is important. While documentation can go some way towards getting around this, a single developer being the “release” guy usually creates some “magic” process where only that person can push code.

Again, with a single developer, especially when it’s just a hobby project, it’s less of an issue.

 

Deploying to the Azure App Service through Github is a simple but effective way to deploy websites that don’t require too much ceremony. Because many projects use Github for hosting their Git repositories, it’s a pretty seamless deployment process that is more about pointing and clicking in the Azure portal than writing complicated deployment scripts.

It should be noted that while this post references Github throughout, the exact same process can be used to set up Bitbucket to automatically deploy on a push.

Setting Up Github Deployment In Azure Portal

Setting up a Github Deployment for an Azure Web App Service is nothing more than a point and click process. Inside your App Service on the Azure Portal, select Deployment Options from under the Deployment category.

Select “Setup” along the top menu, and then select your source. In our case either Github or Bitbucket. You should now be able to click “Authorize” and use OAuth to connect your Github/Bitbucket account to Azure. Note that this account will be available for all other Azure App Services, not just this one.

Select your project and branch from the options. Typically your branch can just be master, but if this is a development instance you may want to use your develop branch, or you can even select a particular release branch if you are just using this deployment method as an easy way to push code rather than a continuous deployment strategy.

After connecting, wait 5 minutes for your project to sync up. Note that sometimes you need to go to a different blade in the portal and then head back to see updates (Not the greatest, but it works eventually!)

A great feature is that the deployment engine behind this will only sync files that have changed (since it can see what has changed inside Git). This makes deployments much faster than an FTP transfer or similar because it’s only uploading what it needs.

Any pushes to your selected branch will be deployed automatically, but if you get impatient, you can click the Sync button inside the Deployment Options blade and a deployment should kick off within seconds.

Rollback A Deployment

The Github CD Process allows you to roll back any deployment right from inside the Azure portal. Head to the Deployment Options screen inside your Azure App Service and select the deployment you wish to roll back to.

Select the option to “Redeploy” along the top menu and your previous deployment will be redeployed to production.

Linux App Service

If you are attempting to use Azure App Service with Linux (and now is a good time to be doing so, it’s 50% off!), unfortunately I couldn’t get Continuous Deployment working. Using the exact same Github repository but with a Linux App Service, the deployment failed. I have a sneaking suspicion that it is related to the Docker settings on App Service, but I wasn’t able to get things up and running. Feel free to drop a comment below if you have got it up and running!

With Azure App Service currently being 50% off for Linux, people are taking advantage and halving their .NET core hosting bill. And why wouldn’t you!? Because Azure App Service is fully PaaS, the underlying OS doesn’t make too much of a difference from either a deployment or a management perspective.

Most deployment guides floating around the web center on deploying to a Windows App Service, which really isn’t too different to the Linux version, except for one important step. If you have deployed your ASP.net Core code and are not seeing anything when you visit your website in a browser, read on.

Docker Startup File

Linux App Service actually runs Docker under the hood. To get everything starting correctly you actually need to set a “startup file”. This is essentially the entry point for your app.

Inside the Azure Portal, inside your App Service, under the Settings category, find the option for “Docker Container”. Here is where you can set the runtime stack, but more importantly, your startup details.

For a dotnet project, the startup file line should read :  dotnet /home/site/wwwroot/{yourdllhere}

Where yourdllhere is the DLL of your main web project in your deployment. In practice it should end up looking a bit like this :
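For example, if your main web project compiled to a DLL named MyWebApp.dll (a hypothetical name – substitute whatever your project actually produces), the startup file would be :

    dotnet /home/site/wwwroot/MyWebApp.dll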

Once saved, you should head back to the app service dashboard and restart your service. It does take a while to boot up first time, so wait 5 minutes and give it a spin!

A quick and easy way to get your code up and running on Azure App Service is to upload your code using FTP. FTP can be as simple as a drag and drop process from your desktop, or as complicated as having your CI/CD pipeline run an FTP client for you. While it’s not an ideal scenario for large scale deployments due to the individual uploading of files one at a time, it’s a good starting point if you are just looking to have a play with Azure without too much fuss.

Setting Up FTP In Azure Portal

After creating your App Service, FTP may not be immediately available if you have never used it to deploy applications before. Deployment credentials, while they are viewable under a particular app service plan, are actually for the entire Azure account. So if you have *ever* used FTP or Git to deploy apps before, the credentials will already be set and you should use those same ones. Resetting the credentials, even if you only do it within one app service, will reset them account wide.

Thankfully this is easy to check. Head over to your app service overview screen. If on this page you have an FTP deployment username showing, then you or someone else has already set up credentials.

If you don’t see anything here, then scroll down to the Deployment section, where you should see an option for “Deployment credentials”.

Enter in a username/password combination you would like to use for FTP. Again, I cannot stress enough that this will be used on all FTP deployments for the entire account, not just this one app service plan.

Publishing Your ASP.net Core Web Project

Because App Service in Azure has a runtime installed, publishing your app is a one line job.

Open a command prompt in your project directory and run the following :
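For a simple project a standard release publish is enough (assuming the default Release configuration) :

    dotnet publish -c Release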

Heading to “bin\Release” you should see a folder for your .net core runtime (netcoreapp1.0 or netcoreapp1.1) containing a “publish” folder with your published project.

Uploading Your ASP.net Site

Back on your App Service overview, you should have an FTP hostname to connect to using your favourite client (I use Filezilla). After connecting, make your way to /site/wwwroot.

Upload your entire published app to the wwwroot.

Docker Settings For Linux App Services

Linux App Service plans are currently 50% of the Windows price, and given it’s a PaaS service, it really doesn’t make a huge difference what the underlying OS is. If you are using Linux, you will need one extra step to get going.

On the app service menu, under the Settings category, there should be an option for “Docker Container”. Here you can select your runtime stack but more importantly, you set the “entry” DLL for dotnet. Set the startup file to “dotnet /home/site/wwwroot/{yourdllhere}”.

Head back to the app service dashboard and restart the service. Wait 5 minutes and hit your URL and you should be up and running!

Pretty much any project that allows file uploads is going to utilize some cloud storage provider. It could be Rackspace, it could be GCloud, it could be AWS S3, or in this post’s case, it’s going to be Azure Blob Storage.

Using Blob Storage in .net core really isn’t that different to using it in the full framework, but it’s worth quickly going through it with some quickfire examples.

Getting Required Azure Details

By this stage, you should already have an Azure account set up. Azure has a free tier for almost all products for the first year so you can usually get some small apps up and running completely free of charge. Blob Storage actually doesn’t have a free tier, but if you upload just a single file it’s literally cents to store a GB of data. Crazy. I should also note that if you have an MSDN subscription from your work, then you get $150 a month in Azure credits for the lifetime of your MSDN subscription.

Once you have your account setup, you need to head into the Azure portal and follow the instructions to create a storage account. Once created, inside your storage account settings you need to find your access keys.

The end screen should look like this (Minus the fact I’ve blurred out my keys) :

Write down your storage name and your keys, you will need these later to connect via code to your storage account.

Project/Nuget Setup

For this tutorial I’m just working inside a console application. That’s mostly because it’s easier to try and retry things when we are working on something new. And it means that we don’t have to fiddle around with other ASP.net boilerplate work. It’s just a case of opening our Main method and starting to write code.

Inside your project, you need to run the following command from your Nuget Package Manager.
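The package in question is WindowsAzure.Storage, so from the Package Manager Console :

    Install-Package WindowsAzure.Storage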

This is all you need! It should be noted that the same package will be used if you are developing on full framework too.

Getting/Creating A Container

Containers are just like S3 Buckets in Amazon: a “bucket” or “collection” to throw your files into. You can create containers via the portal, and some might argue that’s the safer option, but where is the fun in that! So let’s get going and create our own container through code.

First, we need to create the object to hold all our storage account details and then create a “client” to do our bidding. The code looks a bit like this :
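A minimal sketch – the account name and key below are placeholders for the values you wrote down earlier :

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Auth;

    // Credentials from the Azure portal (placeholder values).
    var storageCredentials = new StorageCredentials("mystorageaccount", "myaccountkey");

    // Second parameter is whether to use HTTPS.
    var cloudStorageAccount = new CloudStorageAccount(storageCredentials, true);

    // The client that all blob operations will run through.
    var cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();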

When we create our storage credentials object, we pass it in the account name and account key we retrieved from the Azure portal. We then create a cloud storage account object – the second param in this constructor is whether we want to use HTTPS. Unless there is some specific reason you don’t… just put true here. And then we go ahead and create our client.

Next we actually want to create our container via code. It looks a bit like this :
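Something like the following, where “mycontainer” is the (lowercase) container name we want :

    // Get a reference to the container (it doesn't have to exist yet).
    var container = cloudBlobClient.GetContainerReference("mycontainer");

    // Create it if it isn't already there.
    await container.CreateIfNotExistsAsync();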

That container object is going to be the keys to the world. Almost everything you do in blob storage will be run off that object.

So our full C# code for our console app that creates a container called “mycontainer” in our Azure Cloud Storage account looks like the following :
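Putting it all together, a minimal sketch of the console app (again with placeholder credentials) might look like this :

    using System;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Auth;

    class Program
    {
        static void Main(string[] args)
        {
            // Wrap the async work because Main itself can't be async (pre C# 7.1).
            MainAsync().GetAwaiter().GetResult();
        }

        static async Task MainAsync()
        {
            var storageCredentials = new StorageCredentials("mystorageaccount", "myaccountkey");
            var cloudStorageAccount = new CloudStorageAccount(storageCredentials, true);
            var cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();

            var container = cloudBlobClient.GetContainerReference("mycontainer");
            await container.CreateIfNotExistsAsync();

            Console.WriteLine("Container created.");
        }
    }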

It’s important to note that every remote action in the storage library is async. Therefore I’ve had to do a quick wrap of an async method in our main entry point to get async working. Obviously when you are working inside ASP.net Core you won’t have this issue (And as a side note, from C# 7.1 you also won’t have an issue as they are adding async entry points to console applications to get around this wrapper).

Running this code, and viewing the overview of the storage account, we can see that our container has been created.

Uploading A File

Uploading a blob is incredibly simple. If you already have the file on disk, you can upload it simply by creating a reference to your blob (which doesn’t need to exist yet) and uploading to it.
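A sketch, assuming the container object from earlier and a made-up local path :

    // "myfile.png" does not need to exist in the container yet.
    var blob = container.GetBlockBlobReference("myfile.png");
    await blob.UploadFromFileAsync(@"C:\temp\myfile.png");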

You can also upload direct from a stream (Great if you are doing a direct pass through of a file upload on the web), or even upload a raw string.
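For example (again assuming the container object from earlier, plus a using for System.IO) :

    // From a stream – handy when passing a web upload straight through.
    using (var fileStream = File.OpenRead(@"C:\temp\myfile.png"))
    {
        var streamBlob = container.GetBlockBlobReference("mystreamfile.png");
        await streamBlob.UploadFromStreamAsync(fileStream);
    }

    // Or from a plain string.
    var textBlob = container.GetBlockBlobReference("mytextfile.txt");
    await textBlob.UploadTextAsync("Hello from blob storage!");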

The important thing to remember is that in the cloud, a file is just a blob. While you can name your files in the cloud with file extensions, in the cloud it really doesn’t care about mimetypes or anything along those lines. It’s just a bucket to put whatever you want in.

Downloading A File

Downloading a file via C# code is just as easy.
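A sketch, again using the container from earlier (FileMode lives in System.IO) :

    var blob = container.GetBlockBlobReference("myfile.png");

    // FileMode.Create will overwrite any existing local file.
    await blob.DownloadToFileAsync(@"C:\temp\myfile-downloaded.png", FileMode.Create);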

Similar to uploading a file, you can download files to a string, stream or byte array also.
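For example, using the blobs uploaded above :

    // To a string.
    var text = await container.GetBlockBlobReference("mytextfile.txt").DownloadTextAsync();

    // To a stream.
    using (var memoryStream = new MemoryStream())
    {
        await container.GetBlockBlobReference("myfile.png").DownloadToStreamAsync(memoryStream);
    }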

Leasing A File

Leasing is an interesting concept and one that has more uses than you might think. The original intent is that while a user has downloaded a file and is possibly editing it, you can ensure that no other thread/app can access that file for a set amount of time. This amount of time is a max of 1 minute, but the lease can be renewed indefinitely. If you are building some complex file management system, this is definitely handy.

But leasing has another great use, and that is handling race conditions across apps that have been scaled out. You might typically see Redis used for this, but blob storage works too.

Consider the following code :
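The following is a sketch of that scenario, using the container from earlier :

    // Upload something to lease.
    var leaseBlob = container.GetBlockBlobReference("mylockfile.txt");
    await leaseBlob.UploadTextAsync("lock");

    // Acquire a 30 second lease.
    var leaseId = await leaseBlob.AcquireLeaseAsync(TimeSpan.FromSeconds(30), null);

    // This second attempt throws a StorageException because the blob is already leased.
    var secondLeaseId = await leaseBlob.AcquireLeaseAsync(TimeSpan.FromSeconds(30), null);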

When the second AcquireLeaseAsync call runs, an exception is thrown saying that a lease cannot be obtained – and can’t be obtained for another 30 seconds. A full rundown on using leasing is too much for this post, but it is not too much work to write code that acquires a lease on a blob and, if a lease cannot be obtained, does a spin wait until a lock can be obtained.

AWS Lambda is usually seen as a way for small, short functions to be run in the cloud. Usually small backend processes like resizing images or reading messages from a queue and processing them. But you can actually deploy an entire API on Lambda, and not only that, you can deploy an existing ASP.net Core API with very little extra effort.

It’s not always going to be the right architecture to be running a full API in Lambda, but if your API is being called infrequently and doesn’t need to be chewing resources 24/7, it might be a real cost saver.

Setup

If you haven’t already, you need to install the AWS Toolkit for Visual Studio 2017. Make sure VS2017 is closed while you install this. When opening Visual Studio again, you will see a window asking you to type in a few AWS credentials; follow the instructions to do so. This is mostly for deployment purposes from Visual Studio (which this tutorial uses), but isn’t strictly required long term if you don’t want Visual Studio having access to your AWS account all the time.

For this tutorial, I’m creating a standard .net core API project in Visual Studio 2017 (The one that has the “valuescontroller” and nothing else). You can either create a new project to follow along or use your existing project, it’s up to you.

With your project open, you need to install the Amazon.Lambda.AspNetCoreServer nuget package. It’s currently in preview, so the command you need to run from your Package Manager Console is the following :
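    # -IncludePrerelease is needed while the package is still prerelease.
    Install-Package Amazon.Lambda.AspNetCoreServer -IncludePrerelease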

One of the most frustrating bugs/issues in Visual Studio is that you can’t add CLI Tooling that lives in Nuget via the nuget package manager (See bugs like the following : https://github.com/aspnet/Scaffolding/issues/422). So you need to manually open up your csproj file of your project. In the ItemGroup for your Packages, add the following line :
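    <!-- Amazon.Lambda.Tools; the version here is just indicative, use the latest available -->
    <PackageReference Include="Amazon.Lambda.Tools" Version="1.4.0" />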

And in your ItemGroup for DotNetCliTools add the following :
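    <!-- Registers the "dotnet lambda" CLI commands; again the version is just indicative -->
    <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="1.4.0" />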

My complete csproj looks like the following if you get lost. This is just the standard .net core api project with the above nuget packages installed.

Code Changes

Add a class in your project named “LambdaFunction” that inherits from the abstract class “APIGatewayProxyFunction”. The code of this class should look like :
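Something along these lines – a sketch based on the preview-era package (later versions of Amazon.Lambda.AspNetCoreServer tweak this bootstrap slightly), with the namespace being whatever your project uses :

    using Amazon.Lambda.AspNetCoreServer;
    using Microsoft.AspNetCore.Hosting;

    namespace AWSApiExample
    {
        public class LambdaFunction : APIGatewayProxyFunction
        {
            protected override void Init(IWebHostBuilder builder)
            {
                builder
                    .UseApiGateway()   // Serve requests coming in from API Gateway.
                    .UseStartup<Startup>();
            }
        }
    }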

The biggest thing people ask when they hear about Lambda is “How can I test that locally?”.

Our LambdaFunction above is our entry point for AWS Lambda, but you will notice that it’s pretty damn close to how our program.cs file looks in an ASP.net core project. When you run locally, our program.cs bootstraps Kestrel/IIS and starts hosting on our local machine. When you run in AWS Lambda, it doesn’t call Program.cs; it instead calls the code above and bootstraps everything for us, but the underlying code of controllers, services, pipelines etc is all the same. That means, for the most part, the site locally and inside Lambda should function more or less identically.

Add a new JSON file to your project and name it “aws-lambda-tools-defaults.json”. This file holds a bunch of defaults when publishing from Visual Studio so you don’t have to type them over and over. Open up the file and enter the following :
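Something along these lines – the values here are indicative (match the framework to whatever your project targets), and the function-handler assumes the project and root namespace are both called AWSApiExample :

    {
      "profile"              : "default",
      "region"               : "us-west-2",
      "configuration"        : "Release",
      "framework"            : "netcoreapp1.0",
      "function-runtime"     : "dotnetcore1.0",
      "function-memory-size" : 256,
      "function-timeout"     : 30,
      "function-handler"     : "AWSApiExample::AWSApiExample.LambdaFunction::FunctionHandlerAsync"
    }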

The important thing to change here is the “function-handler”. The actual contents of this setting should be “{AssemblyName}::{LambdaFunctionName}::{FunctionHandler}”. In my case I named my project AWSApiExample, so you will need to swap this out for your project’s name. The “FunctionHandlerAsync” part is because our entrypoint that inherits from “APIGatewayProxyFunction” actually implements a method called “FunctionHandlerAsync” behind the scenes.

Another quick note is that I’m deploying this to the us-west-2 region. Obviously change this if another region suits you better.

Deployment

Deployment could not be simpler! Right click your project and select “Publish To AWS Lambda”.

You should be given a screen to type in a few details, including naming your Function. Fill these out (Most should already be filled out), and be sure to tick the box that says it will save your settings. This actually just saves back to our json file from the last step, so you don’t have to pre-build that JSON file, but it’s much easier to copy and paste it in and just change the few details you need to. Click Next.

For permissions, if you have never used Lambda before (And don’t have a role for it), create a new role with the AWSLambdaFullAccess policy. Everything else on this page just leave default unless you know what you are doing!

Press Upload and you should see your app be published up to AWS. This would normally be the end of a Lambda function publish, but now we need to set up our API Gateway. The API Gateway in Amazon simply acts as a proxy between the public and your Lambda. It can handle complex scenarios such as authorization, headers inspection etc, but we will actually be using it more or less as a straight pass through.

Open up your AWS Dashboard in your browser and head over to the API Gateway services screen. Go ahead and create a new API, name it whatever you like.

On the next screen, (Resources section of your new api), select the Actions dropdown and hit Create Resource. Here is where we basically create a wildcard pass through to our Lambda function. Tick the box that says “Configure as proxy resource” and leave everything else as is.

The next screen asks what you actually want to proxy to. Select Lambda Function Proxy, and select the region you deployed your Lambda function to. The most infuriating thing is that you have to type your function name here (Why can’t it just be a drop down!). Hopefully you remember what you deployed your function as (If not, in the AWS dashboard quickly pop over to the Lambda section and jog your memory).

Still in the Resources section, select the Actions dropdown again and select “Deploy API”. A popup will come up; select New Stage and type in “prod” as your stage name for now.

After hitting deploy, you will be given an “invoke” URL. I open my URL and add /api/values on the end (since I’m using the default .net Core API template). And what do you know!

So as we see, we can take an entire API and lift and shift it to Lambda. Again, I’m not really sure whether this makes total sense to do and I can’t see it being a hugely popular move, but it can be done (and fairly simply at that!).

In ASP.net core there is this concept of “environments”, where you set at the machine level (or sometimes at the code level) what “environment” you are in. Environments may include development, staging, production or actually any name you want. While you have access to this name via code, the main use is actually to do configuration swaps at runtime. In full framework you might have previously used web.config transforms to transform the configuration at build time; ASP.net core instead determines its configuration at runtime.

It should be noted that the term “environment variables” is being used in two ways here. The first is the actual definition of an environment variable on your machine. The second is that you can add an “environment” to that environment variables list to describe which ASP.net Core environment you are using… That may sound confusing but it will all come together at the end!

Setting Environment Variable In Visual Studio

When developing using Visual Studio, any debugging is done inside an IIS Express instance. You can set Environment variables for debugging by right clicking your project, selecting properties, and then selecting “Debug” on the left hand menu. By default you should see that a variable has been added for you called “ASPNETCORE_ENVIRONMENT” and it’s value should be Development.

You can set this to another value (like “Staging”) when you want to debug using another configuration set. It should be noted that editing this value actually edits your “launchSettings.json” and sets the variable in there. It ends up looking like this :
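Roughly like this, showing just the relevant profile with the value switched to “Staging” :

    "profiles": {
      "IIS Express": {
        "commandName": "IISExpress",
        "launchBrowser": true,
        "environmentVariables": {
          "ASPNETCORE_ENVIRONMENT": "Staging"
        }
      }
    }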

Setting Environment Variable Machine Wide

You will need to set the ASP.net core environment machine wide if…

  • You are not using Visual Studio/IIS Express
  • You are running the website locally but using full IIS
  • You are running the website on a remote machine for staging/production

To set your entire machine to an environment you should do the following :

For Windows in a Powershell window
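A sketch using “Staging” as the example value (run from an elevated prompt; newly started processes pick up the change) :

    [Environment]::SetEnvironmentVariable("ASPNETCORE_ENVIRONMENT", "Staging", "Machine")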

For Mac/Linux edit your .bashrc or .bash_profile file and add
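Again using “Staging” as the example value :

    export ASPNETCORE_ENVIRONMENT=Staging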

It should be noted that setting this up machine wide means that your entire machine (and every project within it) is going to run using that environment variable. This is extremely important if you are currently running multiple websites on a single box for QA/Staging purposes and these each have their own configurations. Setting the environment machine wide may cause issues because each site will think it is in the same environment. If this is you, read the next section!

Setting Environment Variable via Code

You can go through mountains of official ASP.net core documentation and miss the fact that you can set the ASP.net Core environment through code. For some reason it’s a little known feature, but it makes the use case of multiple websites with different environments living on the same machine possible.

In your ASP.net core site, you should have a program.cs. Opening this you will see that you have a WebHostBuilder object that gets built up. You can actually add a little known command called “UseEnvironment” like so :
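A sketch of a typical ASP.net Core 1.x program.cs with the extra call added (here hard coding “Staging”) :

    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .UseEnvironment("Staging") // Forces the environment regardless of machine settings.
            .Build();

        host.Run();
    }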

Obviously as a constant string it’s probably not that helpful, but this is regular old C# code so how you find that string is up to you. You could load a text file for example and from there decide which environment to use. Any logic is possible!

Accessing Environment via Code

To access the environment in your code (such as a controller), you just have to inject IHostingEnvironment into your controller. The interface is already registered for you when the web host is built, so you don’t need to add it to your service collection manually.

Here is some example code that returns the environment I am currently in :
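A minimal sketch of such a controller (the route is just for illustration) :

    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Mvc;

    public class EnvironmentController : Controller
    {
        private readonly IHostingEnvironment _hostingEnvironment;

        public EnvironmentController(IHostingEnvironment hostingEnvironment)
        {
            _hostingEnvironment = hostingEnvironment;
        }

        [HttpGet("api/environment")]
        public IActionResult Get()
        {
            // Returns e.g. "Development", "Staging" or "Production".
            return Ok(_hostingEnvironment.EnvironmentName);
        }
    }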

Swapping Config Based Off Environment

In your startup.cs there is a constructor (named “Startup”) that builds your configuration. By default it adds a json file called “appsettings.json”, but it can also add a file called “appsettings.{environment}.json”, where environment is swapped out for the environment you are currently running in. The code looks like this :
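In an ASP.net Core 1.x template it ends up looking something like this :

    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            // Loads appsettings.Development.json, appsettings.Staging.json etc. if present.
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();

        Configuration = builder.Build();
    }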

Any settings in your base appsettings.json are overwritten using your environment specific file.

Configuration itself is a massive topic and Microsoft has already done an incredible job of its documentation, so head over and read it here.

Naming Conventions Per Environment

Again, another little known “feature” of using environments is that naming conventions can be used to completely change how your application starts. By default, you should have a file called startup.cs that contains how your app is configured, how services are built etc. But you can actually create a file/class called “startup{environmentname}.cs” and it will be run instead of your default if it matches the environment name your app is currently running in.

Alternatively, if you stick with a single startup.cs file, the methods Configure and ConfigureServices can be changed to Configure{EnvironmentName} and ConfigureServices{EnvironmentName} to write completely different pipelines or services. It’s not going to be that handy on larger projects because it doesn’t call both the default and the environment specific methods; it only calls one, which could lead to a lot of copy and pasting. But it’s handy to know!
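As a quick sketch of the method convention (using a “Staging” environment as the example) :

    // Called instead of ConfigureServices when the environment is Staging.
    public void ConfigureServicesStaging(IServiceCollection services)
    {
        services.AddMvc();
    }

    // Called instead of Configure when the environment is Staging.
    public void ConfigureStaging(IApplicationBuilder app)
    {
        app.UseMvc();
    }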

Google have recently released support for ASP.net core on their “GCloud” hosting platform. And along with it released some documentation on how to get up and running here. The documentation itself uses their container engine, but you can actually get up and going on GCloud a lot faster using a couple of command line tools (Very similar to how you might get going on Azure). So for this tutorial instead we will try and get up and running much faster, and then you can transition to container hosting if it fits your needs better.

Something super important is that at the time of writing, GCloud only supports .net core 1.0 (Not 1.1). Ensure that your apps are running on .net core 1.0 before attempting to deploy them!

Note : This tutorial uses Windows Powershell a lot. If you are on Mac or Linux that could be an issue! But the commands will be very very similar if not exactly the same in your own native command line.

Setting Up GCloud

First install the GCloud SDK from here https://cloud.google.com/sdk/. This gives you a few extra powershell/cmd commands to be able to deploy straight from your desktop (and obviously in the future from your build server). Be sure to install the “beta” components along the way too (you will see this inside the installer). More than likely after installing you will have to restart your machine; if the powershell commands don’t work off the bat you might want to first try rebooting (it is Windows after all).

If you haven’t already, head to Google Cloud at https://cloud.google.com/ and sign up. It’s free! You can read more about the limits of the free tier here. But for the purpose of this tutorial there is no way you are going to blow through the free hosting.

Once you are signed up and in, create your first project on the dashboard. Name it whatever you want – I went with “My First Project”.

Now, here it gets a little difficult but you should be able to handle it! Open a powershell prompt and type :
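    # Opens a browser window so you can authorise the SDK against your Google account.
    gcloud auth login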

A browser window should open instantly on your machine asking you to login to your Google cloud account and give permission to access your GCloud account. After following the prompts your powershell window should have something similar to the following in it :

Great! You are now logged into your account. On your GCloud dashboard in your browser, you should see something similar to :

Take the ProjectID and type the following into your Powershell window
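    # Replace the ID below with the ProjectID shown on your own dashboard.
    gcloud config set project your-project-id-here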

Now we are set up to deploy from Powershell!

Setting Up Your ASP.net Core Project

For this tutorial I’m just using a basic hello world app (The default empty project from Visual Studio), but it really shouldn’t matter what your app is doing.

Add a file to your project called “app.yaml”. This is information for GCloud to work out how your app should be run. Inside the app.yaml file put the following :
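Something like the following, which tells App Engine to use the flexible environment with the aspnetcore runtime :

    runtime: aspnetcore
    env: flex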

After creating this file you need to set it to copy to the output directory. If you are on project.json this means opening up your project.json file, and finding the publishOptions node. It should end up looking like this :
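Something like this – the default template already lists wwwroot and web.config, and we just add app.yaml to the list :

    "publishOptions": {
      "include": [
        "wwwroot",
        "web.config",
        "app.yaml"
      ]
    }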

If you are on the latest version of .net core tooling (e.g. csproj), then you need to set the file to copy always in Visual Studio by right clicking the file, selecting properties, and then changing the drop down to Copy Always.

Again, another reminder that GCloud only supports .net core 1.0 (Not 1.1). This means that you must target 1.0 and not 1.1. Unfortunately switching around between .net core versions is another kettle of fish that deserves its own post really. Have a quick google around for how to switch around .net core targeting versions if you are not already using 1.0.

Deployment

Now onto deployment. For my test application I’m going to do a simple dotnet publish, but you may have something more complicated going on so you might want to read the docs here. But for now, let’s just run the following inside our project directory from Powershell.
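    # Publishes the app in Release configuration.
    dotnet publish -c Release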

Take your powershell and move into your release directory. The path will be something like “bin\Release\netcoreapp1.0\publish”.

Run the following command from powershell :
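    # Uses the beta components installed earlier to deploy everything in this folder.
    gcloud beta app deploy .\app.yaml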

This is basically the entire deploy process. You are telling GCloud to take everything inside that folder (Along with the app.yaml file describing what it is), and push it to your logged in project.

If this is your first time deploying, you will be asked which region you wish to deploy to, and then it will take a good 5 – 10 minutes for the deployment to complete. Just hang in there! Once complete, you should be able to type the following powershell command which will open your newly deployed app in your browser!
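    # Opens the deployed app's URL in your default browser.
    gcloud app browse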

Moving Forward

The first thing you should learn is how to switch off your running instance. It’s sort of buried a bit behind layers of menus, but this link should take you directly there : https://console.cloud.google.com/appengine/versions. Select the latest version of your app (If you’ve only deployed once, then there will only be one), and hit stop up top.

Before you switch it off though, take a look around the portal and check out some of the awesome features like logging (Direct Kestrel logging which is pretty nifty), and monitoring. Both of which look very very good.

Drop a comment below with your experience getting up and running on GCloud!