One of the easiest ways to publish your application to Azure App Service is straight from Visual Studio. While it’s not an ideal solution long term, or for any team larger than one, it’s a great way for a solo developer to quickly get something up and running. Best of all, it doesn’t rely on any special configuration, scripts or external services.

Publish via Login

If the Azure account you are publishing to is the same one you use for your Visual Studio account, or you have the username/password combination for the Azure account you are publishing to, you can publish directly from Visual Studio.

Inside Solution Explorer, right-click your project and select “Publish”. On the next screen, select “Azure App Service” and then “Select Existing”. The “Create New” option can be used if you don’t already have an App Service instance (rather than creating one via the portal) – but we won’t do that this time around.

Click “Publish”. On the next screen you can select your Account, App Service and Deployment Slot to push to. Click “OK” and… that’s it. Really. You should see your project build and publish (via Web Deploy), and then Visual Studio will open your browser at the App Service URL.

Publish via Publish Profile

A publish profile is a settings file that holds all the information Visual Studio needs to deploy your code directly to Azure. The only real difference is that a publish profile can be given to you without you having any Azure login credentials yourself. Other than that, the actual deployment is largely the same.

To get a publish profile from the Azure Portal, head over to the dashboard of your App Service. Along the top you should see an option to “Get Publish Profile”.

With this file you are now able to deploy directly from Visual Studio. It’s important that this file is not just left lying around somewhere, because with it, anyone can push code to your app service. So while it’s not quite the keys to the kingdom the way a user/pass combination is, it’s pretty damn close.

Inside Solution Explorer, right-click your project and select “Publish”. Select “Import Profile” (you may have to scroll to the right to see this option).

Hit Publish! Immediately your project will build, publish and then open your browser at the App Service URL.

Obvious Issues

I touched on it earlier, but I feel the need to point it out further. Publishing from Visual Studio is not a long-term strategy. It cuts out any CI/CD process you have running (if any), it creates possible inconsistencies in what you are deploying (think “Did you get latest before you published?”), and in teams of more than one it creates a single point of failure where a single developer does all deployments.

That last point is important. While documentation can get some of the way around this, a single developer being the “release guy” usually creates a “magic” process where only that person can push code.

Again, with a single developer, especially when it’s just a hobby project, it’s less of an issue.


Deploying to Azure App Service through Github is a simple but effective way to deploy websites that don’t require too much ceremony. Because many projects use Github for hosting their Git repositories, it’s a pretty seamless deployment process that is more about pointing and clicking in the Azure portal than writing complicated deployment scripts.

It should be noted that while this post references Github throughout, the exact same process can be used to set up Bitbucket to automatically deploy on a push.

Setting Up Github Deployment In Azure Portal

Setting up a Github Deployment for an Azure Web App Service is nothing more than a point and click process. Inside your App Service on the Azure Portal, select Deployment Options from under the Deployment category.

Select “Setup” on the menu, and then select your source – in our case either Github or Bitbucket. You should now be able to click “Authorize” and use OAuth to connect your Github/Bitbucket account to Azure. Note that this account will be available to all other Azure App Services, not just this one.

Select your project and branch from the options. Typically your branch will just be master, but if this is a development instance you may want to use your develop branch, or you can even select a particular release branch if you are using this deployment method as an easy way to push code rather than as a continuous deployment strategy.

After connecting, wait 5 minutes for your project to sync up. Note that sometimes you need to go to a different blade in the portal and then head back to see updates (Not the greatest, but it works eventually!)

A great feature is that the deployment engine behind this will only sync files that have changed (since it can see what changed inside Git). This makes deployments much faster than an FTP transfer or similar, because it only uploads what it needs to.

Any pushes to your selected branch will be deployed automatically, but if you get impatient you can click the Sync button inside the Deployment Options blade and a deployment should kick off within seconds.

Rollback A Deployment

The Github CD Process allows you to roll back any deployment right from inside the Azure portal. Head to the Deployment Options screen inside your Azure App Service and select the deployment you wish to roll back to.

Select the option to “Redeploy” along the top menu and your previous deployment will be redeployed to production.

Linux App Service

If you are attempting to use Azure App Service with Linux (and now is a good time to be doing so – it’s 50% off!), unfortunately I couldn’t get Continuous Deployment working. Using the exact same Github repository but with a Linux App Service, the deployment failed. I have a sneaking suspicion it’s related to the Docker settings on App Service, but I wasn’t able to get things up and running. Feel free to drop a comment below if you have!

With Azure App Service currently 50% off for Linux, people are taking advantage and halving their .NET Core hosting bill. And why wouldn’t you!? Because Azure App Service is fully PaaS, the underlying OS doesn’t make too much of a difference from either a deployment or a management perspective.

Most deployment guides floating around the web center on deploying to a Windows App Service, which really isn’t too different to the Linux version, except for one important step. If you have deployed your ASP.net Core code and are not seeing anything when you visit your website in a browser, read on.

Docker Startup File

Linux App Service actually runs Docker under the hood. To get everything starting correctly, you need to set a “startup file”. This is essentially the entry point for your app.

Inside the Azure Portal, inside your App Service, under the Settings category, find the option for “Docker Container”. Here is where you can set the runtime stack, but more importantly, your startup details.

For a dotnet project, the startup file line should read :  dotnet /home/site/wwwroot/{yourdllhere}

Where yourdllhere is the DLL of your main web project in your deployment. In practice it should end up looking a bit like this :
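    dotnet /home/site/wwwroot/MyWebProject.dll

(MyWebProject.dll is just a made-up name here – use whatever DLL your main web project actually outputs.)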

Once saved, you should head back to the app service dashboard and restart your service. It does take a while to boot up first time, so wait 5 minutes and give it a spin!

A quick and easy way to get your code up and running on Azure App Service is to upload it using FTP. FTP can be as simple as a drag and drop process from your desktop, or as complicated as having your CI/CD pipeline run an FTP client for you. While it’s not an ideal scenario for large scale deployments, because files are uploaded individually one at a time, it’s a good starting point if you are just looking to have a play with Azure without too much fuss.

Setting Up FTP In Azure Portal

After creating your App Service, FTP may not be immediately available if you have never used it to deploy applications before. Deployment credentials, while viewable under a particular app service plan, are actually for the entire Azure account. So if you have *ever* used FTP or Git to deploy apps before, the credentials will already be set and you should use those same ones. Resetting the credentials, even if you only do it within one app service, will reset them account wide.

Thankfully this is easy to check. Head over to your app service overview screen; if you have an FTP deployment username showing on this page, then you or someone else has already set up credentials.

If you don’t see anything here, scroll down to the Deployment section, where you should see an option for “Deployment credentials”.

Enter a username/password combination you would like to use for FTP. Again, I cannot stress enough that this will be used for all FTP deployments on the entire account, not just this one app service plan.

Publishing Your ASP.net Core Web Project

Because App Service in Azure has a runtime installed, publishing your app is a one-line job.

Open a command prompt in your project directory and run the following :
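    dotnet publish -c Release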

Heading to “bin\Release”, you should see a folder for your .net core runtime (netcoreapp1.0 or netcoreapp1.1) containing your published project.

Uploading Your ASP.net Site

Back on your App Service overview, you should have an FTP hostname to connect to using your favourite client (I use Filezilla). After connecting, make your way to /site/wwwroot.

Upload your entire published app to the wwwroot.

Docker Settings For Linux App Services

Linux App Service plans are currently 50% of the Windows price, and given it’s a PaaS service, it really doesn’t make a huge difference what the underlying OS is. If you are using Linux, you will need one extra step to get going.

On the app service menu, under the Settings category, there should be an option for “Docker Container”. Here you can select your runtime stack but more importantly, you set the “entry” DLL for dotnet. Set the startup file to “dotnet /home/site/wwwroot/{yourdllhere}”.

Head back to the app service dashboard and restart the service. Wait 5 minutes and hit your URL and you should be up and running!

Route Constraints can be a handy way to distinguish between similar route names, and in some cases, pre-filter out “junk” requests from actually hitting your actions and taking up resources. A route constraint can be as simple as enforcing that an ID that you expect in a URL is an integer, or as complicated as regex matching on strings.

An important thing to remember is that route constraints are not a way to “validate” input. Any server-side validation you wish to perform should still happen regardless of any route constraints set up. Importantly, know that if a route constraint is not met then a 404 is returned, rather than the 400 Bad Request you would typically expect from a validation failure.

Type Constraints

Type constraints are a simple way to ensure that a parameter can be cast to a certain value type. Consider the following code :
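(The controller and route names here are made up for illustration.)

    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // id is typed as an integer, but there is no route constraint.
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            return Ok(id);
        }
    }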

At first glance you might assume that if you called “/api/values/abc” the route would not match – it would make sense, since the id parameter is an integer. But in fact what happens is that the route is matched and the id is bound as 0. This is where route constraints come in. Consider the following :
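    // Adding the :int constraint means the route only matches
    // when the id segment can actually be cast to an integer.
    [HttpGet("{id:int}")]
    public IActionResult Get(int id)
    {
        return Ok(id);
    }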

Now if the id in the URL is not able to be cast to an integer, the route is not matched.

You can use this type of constraint with int, float, decimal, double, long, guid, bool and datetime.

Size Constraints

There are two types of “size” constraints you can use in routes. The first is to do with strings and means you can set a minimum length, max length or even a range.
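For example (again with made-up route names) :

    // The name segment must be at least 5 characters long.
    [HttpGet("{name:minlength(5)}")]
    public IActionResult Get(string name)
    {
        return Ok(name);
    }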

This sets a minimum length for the string value. You can also use maxlength to limit the length.

Alternatively, you can set how many characters a string can be within a range using the length constraint.
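    // The name segment must be between 5 and 10 characters long.
    [HttpGet("{name:length(5,10)}")]
    public IActionResult Get(string name)
    {
        return Ok(name);
    }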

While that’s great for string variables, for integers you can use the min/max/range constraints in a similar fashion.
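    // min(1) and max(100) work the same way individually.
    [HttpGet("{id:range(1,100)}")]
    public IActionResult Get(int id)
    {
        return Ok(id);
    }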

Regex Constraints

Regex constraints are a great way to limit a string input. By now most people know exactly what regex is, so there isn’t much point doing a deep dive on how to format yours – just throw it in as a constraint and away it goes.
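For example (note that curly braces inside a route template have to be doubled up to escape them, and the pattern here is made up) :

    // Only matches ids in the form 123-45-6789.
    [HttpGet("{id:regex(^\\d{{3}}-\\d{{2}}-\\d{{4}}$)}")]
    public IActionResult Get(string id)
    {
        return Ok(id);
    }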

It is worth noting that, for whatever reason, the .NET Core team added another handy “quick” way of matching alpha characters only, without regex. You can just use the constraint “alpha”.
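    // Only matches when the name segment consists of A-Z/a-z characters.
    [HttpGet("{name:alpha}")]
    public IActionResult Get(string name)
    {
        return Ok(name);
    }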


Culture in ASP.net has always been a bit finicky to get right. ASP.net Core is no exception, but with the addition of middleware, things are now configured a little differently. Lifting and shifting your standard ASP.net code may not always work in ASP.net Core, and at the very least, it won’t be done in the new “ASP.net Core way”.

It should be noted that there are two “cultures” in the .net ecosystem. The first is labelled just “Culture”. This handles things like date formatting, money, how decimals are displayed and so on. The second is called “UICulture”, which usually handles full language translations using resource files. For the purpose of this article we will just be handling the former. All examples will be given using datetime strings that will change depending on whether you are in the USA (MM-dd-yyyy) or anywhere else in the world (dd-MM-yyyy).

Setting A Default Request Culture

If you are just looking for a way to hardcode a particular culture for all requests, this is pretty easy to do.

In the ConfigureServices method of your startup.cs, you can set the DefaultRequestCulture property of the RequestLocalizationOptions object.
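A sketch, assuming we want to force en-US :

    // Requires : using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.Localization;
    public void ConfigureServices(IServiceCollection services)
    {
        services.Configure<RequestLocalizationOptions>(options =>
        {
            // Hardcode every request to en-US (swap in whichever culture you need).
            options.DefaultRequestCulture = new RequestCulture("en-US");
        });

        services.AddMvc();
    }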

In the Configure method of your app, you also need to add the middleware that actually sets the culture. This should be placed at least before your UseMvc call, and likely as one of the very first middlewares in your pipeline if you intend to use culture anywhere else in your app.
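Something like :

    // IOptions comes from Microsoft.Extensions.Options.
    public void Configure(IApplicationBuilder app, IOptions<RequestLocalizationOptions> localizationOptions)
    {
        // Must run before UseMvc so the culture is set before MVC handles the request.
        app.UseRequestLocalization(localizationOptions.Value);

        app.UseMvc();
    }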

I can write a simple API action like so :
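    [Route("api/[controller]")]
    public class HomeController : Controller
    {
        [HttpGet]
        public string Get()
        {
            // "d" formats the date using the current culture's short date pattern.
            return DateTime.Now.ToString("d");
        }
    }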

When calling this action I can see the date is output as MM-dd-yyyy (I am from NZ so this is not the usual output!)

Default Request Culture Providers

When you use request localization out of the box, it comes with a few “providers” that you (or a user) can use to override the default culture you set. This can be handy if your app expects them; otherwise it can actually be a hindrance if you expect your application to be displayed in a certain locale only.

The default providers are :

QueryStringRequestCultureProvider
This provider allows you to pass a query string to override the culture like so http://localhost:5000/api/home?culture=en-nz

CookieRequestCultureProvider
If the user has a cookie named “.AspNetCore.Culture” and the contents is in the format of “c=en-nz|uic=en-nz”, then it will override the culture.

AcceptLanguageHeaderRequestCultureProvider
Your browser by default actually sends through a culture that it wishes to use. Typically this is your native language, but it will throw in a couple of “options”. For example, my browser currently sends through “en-GB” and “en-US” (even though I am in NZ).

This header in particular can become very problematic. If the culture a browser sends is one your server can serve, then by default it doesn’t matter what you set the “DefaultRequestCulture” to – this header will override it.

There are two ways to keep this from happening. The first is to tell ASP.net Core that the only supported culture is the one you want the default to be. You can do this in the ConfigureServices method of your startup.cs.
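Something like this (CultureInfo lives in System.Globalization) :

    services.Configure<RequestLocalizationOptions>(options =>
    {
        options.DefaultRequestCulture = new RequestCulture("en-US");
        // With en-US as the only supported culture, the Accept-Language
        // header can no longer switch a request to anything else.
        options.SupportedCultures = new List<CultureInfo> { new CultureInfo("en-US") };
    });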

The second way is to remove the culture providers from the pipeline altogether.
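Again inside ConfigureServices :

    services.Configure<RequestLocalizationOptions>(options =>
    {
        options.DefaultRequestCulture = new RequestCulture("en-US");
        // Throw away the query string, cookie and Accept-Language providers.
        options.RequestCultureProviders.Clear();
    });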

The second method is worth doing in any case if you don’t intend to use the providers, as it makes your pipeline much cleaner. It will no longer run a bunch of code that you actually have no use for.

Supported Cultures List

If you intend to allow users to set their own culture in their browser, to use a query string to define the culture, or to write a custom request culture provider (outlined in the next section) that sets a custom culture based on other parameters, then you need to provide a list of supported cultures.

Supported cultures is a list of cultures that your web app will be able to run under. By default this is set to the culture of the machine. Even if you use a custom culture provider as outlined below to set a culture, it actually won’t be set unless the culture is in the supported list.

You can set the list as follows :
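    services.Configure<RequestLocalizationOptions>(options =>
    {
        options.DefaultRequestCulture = new RequestCulture("en-US");
        options.SupportedCultures = new List<CultureInfo>
        {
            new CultureInfo("en-US"),
            new CultureInfo("en-NZ")
        };
    });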

So in this example, I have set the default localization to “en-US”, but if a user comes through with a culture provider that wants to set them to “en-NZ”, the code will allow it. If a user came through with a culture of “en-GB”, that would not be allowed and they would be forced to use “en-US”.

Custom Request Culture Providers

Going beyond hard coding a particular culture, often you will want to set culture based on a particular user’s settings. In a global application for example, you may have saved into a database what datetime format a user wants. In this scenario, you can use a custom RequestCultureProvider class.

The code looks something like the following :
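(A rough sketch – where the user’s culture actually comes from is up to you.)

    public class UserRequestCultureProvider : RequestCultureProvider
    {
        public override Task<ProviderCultureResult> DetermineProviderCultureResult(HttpContext httpContext)
        {
            // Work out the culture for this user here, e.g. load their saved
            // preference from a database. Hardcoded purely for illustration.
            var userCulture = "en-NZ";

            if (string.IsNullOrEmpty(userCulture))
            {
                // Returning null tells the pipeline to try the next provider in the list.
                return Task.FromResult<ProviderCultureResult>(null);
            }

            return Task.FromResult(new ProviderCultureResult(userCulture));
        }
    }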

Inside your ConfigureServices method in startup.cs, you then need to add your new CultureProvider to the RequestCultureProviders list. In the below example I have cleared out all other providers as I don’t care about query strings, cookies or the accept language header.
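    services.Configure<RequestLocalizationOptions>(options =>
    {
        options.DefaultRequestCulture = new RequestCulture("en-US");
        options.SupportedCultures = new List<CultureInfo>
        {
            new CultureInfo("en-US"),
            new CultureInfo("en-NZ")
        };
        // Clear the defaults and use only our custom provider.
        options.RequestCultureProviders.Clear();
        options.RequestCultureProviders.Add(new UserRequestCultureProvider());
    });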

However, let’s say that you do care, and if your custom provider can’t sort out a culture you want it to try query strings, cookies and the accept header. Then the best way is to insert it at the start of the list. Remember that these run in order – the first provider that returns a culture instead of null wins.
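    // Our provider runs first. If it returns null, the query string, cookie
    // and Accept-Language providers each get their turn in order.
    options.RequestCultureProviders.Insert(0, new UserRequestCultureProvider());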

Remember! It doesn’t matter what your custom culture provider returns if the culture it tries to set is not in the supported culture list. This is extremely important and the cause of many headaches!

Pretty much any project that allows file uploads is going to utilize some cloud storage provider. It could be Rackspace, it could be GCloud, it could be AWS S3, or, in this post’s case, Azure Blob Storage.

Using Blob Storage in .net core really isn’t that different to using it in the full framework, but it’s worth quickly going through it with some quickfire examples.

Getting Required Azure Details

By this stage, you should already have an Azure account set up. Azure has a free tier for almost all products for the first year, so you can usually get some small apps up and running completely free of charge. Blob Storage actually doesn’t have a free tier, but if you upload just a single file it’s literally cents to store a GB of data. Crazy. I should also note that if you have an MSDN subscription from your work, you get $150 a month in Azure credits for the lifetime of your MSDN subscription.

Once you have your account setup, you need to head into the Azure portal and follow the instructions to create a storage account. Once created, inside your storage account settings you need to find your access keys.

This is the “Access keys” screen, which shows your storage account name along with two access keys.

Write down your storage name and your keys, you will need these later to connect via code to your storage account.

Project/Nuget Setup

For this tutorial I’m just working inside a console application. That’s mostly because it’s easier to try and retry things when we are working on something new, and it means that we don’t have to fiddle around with other ASP.net boilerplate. We just open up our Main method and start writing code.

Inside your project, you need to run the following commands from your Nuget Package Manager.
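    Install-Package WindowsAzure.Storage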

This is all you need! It should be noted that the same package will be used if you are developing on full framework too.

Getting/Creating A Container

Containers are just like S3 Buckets in Amazon, they are a “bucket” or “collection” to throw your files into. You can create containers via the portal and some might argue this is definitely safer, but where is the fun in that! So let’s get going and create our own container through code.

First, we need to create the object to hold all our storage account details and then create a “client” to do our bidding. The code looks a bit like this :
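    // Swap in the account name and key you grabbed from the Azure portal.
    var storageCredentials = new StorageCredentials("myaccountname", "mykey");
    var cloudStorageAccount = new CloudStorageAccount(storageCredentials, true);
    var cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();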

When we create our storage credentials object, we pass in the account name and account key we retrieved from the Azure portal. We then create a cloud storage account object – the second param in this constructor is whether we want to use HTTPS. Unless there is some specific reason not to… just put true here. And then we go ahead and create our client.

Next we actually want to create our container via code. It looks a bit like this :
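    var container = cloudBlobClient.GetContainerReference("mycontainer");
    await container.CreateIfNotExistsAsync();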

That container object is going to be the keys to the world. Almost everything you do in blob storage will be run off that object.

So our full C# code for our console app that creates a container called “mycontainer” in our Azure Cloud Storage account looks like the following :
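    using System;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Auth;

    class Program
    {
        static void Main(string[] args)
        {
            // Wrap the real work in an async method – console apps
            // don't get an async entry point until C# 7.1.
            MainAsync().GetAwaiter().GetResult();
        }

        static async Task MainAsync()
        {
            var storageCredentials = new StorageCredentials("myaccountname", "mykey");
            var cloudStorageAccount = new CloudStorageAccount(storageCredentials, true);
            var cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();

            var container = cloudBlobClient.GetContainerReference("mycontainer");
            await container.CreateIfNotExistsAsync();
        }
    }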

It’s important to note that every remote action in the storage library is async. Therefore I’ve had to do a quick wrap of an async method in our Main entry point to get async working. Obviously when you are working inside ASP.net Core you won’t have this issue (and as a side note, from C# 7.1 you also won’t, as async entry points are being added to console applications to get around this wrapper).

Running this code, and viewing the overview of the storage account, we can see that our container has been created.

Uploading A File

Uploading a blob is incredibly simple. If you already have the file on disk, you can upload it simply by creating a reference to your blob (which doesn’t need to exist already) and uploading.
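Something like this (the file names are made up) :

    var blob = container.GetBlockBlobReference("myfile.png");
    await blob.UploadFromFileAsync("C:\\Temp\\myfile.png");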

You can also upload direct from a stream (great if you are doing a direct pass-through of a file upload on the web), or even upload a raw string.
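    // From any readable stream (mystream here is whatever stream you have handy) :
    await blob.UploadFromStreamAsync(mystream);

    // Or just a raw string :
    await blob.UploadTextAsync("Hello from blob storage!");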

The important thing to remember is that in the cloud, a file is just a blob. While you can name your files in the cloud with file extensions, in the cloud it really doesn’t care about mimetypes or anything along those lines. It’s just a bucket to put whatever you want in.

Downloading A File

Downloading a file via C# code is just as easy.
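    var blob = container.GetBlockBlobReference("myfile.png");
    // FileMode comes from System.IO.
    await blob.DownloadToFileAsync("C:\\Temp\\myfile.png", FileMode.Create);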

Similar to uploading a file, you can download files to a string, stream or byte array also.

Leasing A File

Leasing is an interesting concept and one that has more uses than you might think. The original intent is that while a user has downloaded a file and is possibly editing it, you can ensure that no other thread/app can access that file for a set amount of time. This amount of time is a max of 1 minute, but the lease can be renewed indefinitely. This means if you are building some complex file management system, leasing is definitely handy.

But leasing actually has another great use, and that is handling race conditions across apps that have been scaled out. You might typically see Redis used for this, but blob storage works too.

Consider the following code :
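    var blob = container.GetBlockBlobReference("myfile.png");

    // Acquire a 30 second lease (passing null lets Azure generate the lease id).
    var leaseId = await blob.AcquireLeaseAsync(TimeSpan.FromSeconds(30), null);

    // This second acquire throws a StorageException – the blob is already leased.
    await blob.AcquireLeaseAsync(TimeSpan.FromSeconds(30), null);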

When the second AcquireLease call runs, an exception is thrown saying a lease cannot be obtained – and can’t be obtained for another 30 seconds. A full rundown on leasing is too much for this post, but it’s not too much work to write code that acquires leases on a blob and, if a lease cannot be obtained, does a spin wait until a lock can be obtained.

In a previous article we talked about using CSRF tokens to protect against CSRF attacks, but their main usage there was with the Razor helpers when building a web application in ASP.net Core. What if you are building a SPA using something like AngularJS? Your forms will no longer have access to the Razor helpers, but you still want to protect against CSRF exploits.

Luckily AngularJS actually has a handy “helper” that will add CSRF tokens as a header automatically, as long as it can find a particular cookie. And in actual fact, even though Angular does most of the leg work for us and makes it easy, you can use this pattern with any flavor of javascript framework. At the end of the article we even show how you can get up and running using something as plain as jQuery. Let’s jump into it!

Anti Forgery Setup

Later on we will delve into how AngularJS works with CSRF tokens, but for now what you need to know is that Angular will send the token in a header called “X-XSRF-TOKEN”. We need to let our API know to expect this.

Inside your startup.cs inside your ConfigureServices method, you will need a call to “AddAntiforgery” and we will set the headername inside there.
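    public void ConfigureServices(IServiceCollection services)
    {
        // Tell the antiforgery service to look for the token in the
        // header AngularJS sends, rather than in a form field.
        services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");

        services.AddMvc();
    }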

Setting The Cookie On The Initial Request

Now we need to return a cookie to our front end. There are two different ways to do this: you can create an endpoint that does nothing more than return a 200 response code with the cookie attached, or you can make it so that when your index file is returned, the cookie is returned along with it.

Let’s go with the latter for now.

If you are returning your index file as a C# view, then you will need to inject the IAntiforgery service into your controller, generate a new token, and set it as a cookie. It looks like this :
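    // Requires : using Microsoft.AspNetCore.Antiforgery; using Microsoft.AspNetCore.Http;
    public class HomeController : Controller
    {
        private readonly IAntiforgery _antiforgery;

        public HomeController(IAntiforgery antiforgery)
        {
            _antiforgery = antiforgery;
        }

        public IActionResult Index()
        {
            var tokens = _antiforgery.GetAndStoreTokens(HttpContext);
            // HttpOnly = false so that javascript can read the cookie.
            Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken,
                new CookieOptions { HttpOnly = false });

            return View();
        }
    }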

Now importantly, the cookie name is XSRF-TOKEN and not X-XSRF-TOKEN. The cookie is missing the X on purpose (this catches people out!). The other important thing is that HttpOnly is set to false on the cookie, meaning javascript is able to read it. Generally speaking, if a cookie holds any sensitive info (such as authentication cookies), you do not want to do this, but here it’s required.

If you are using a static index.html file that is not returned as a view, you will need to use middleware instead. In your startup.cs file you will have a method called “Configure”. Add a parameter of type IAntiforgery to this method (don’t worry too much about the parameter order, as this is all worked out using the service collection anyway). From there, check whether the user is requesting the root file (index.html) and, if they are, add the cookie to the response.
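A sketch of what that might look like :

    public void Configure(IApplicationBuilder app, IAntiforgery antiforgery)
    {
        app.Use(async (context, next) =>
        {
            // Only set the cookie when the root/index file is requested.
            if (context.Request.Path == "/" || context.Request.Path == "/index.html")
            {
                var tokens = antiforgery.GetAndStoreTokens(context);
                context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken,
                    new CookieOptions { HttpOnly = false });
            }

            await next();
        });

        app.UseStaticFiles();
    }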

Important! This middleware needs to come before the call to UseStaticFiles. The StaticFiles middleware short circuits the pipeline and returns the static file immediately meaning your cookie won’t be set.

Setting The Cookie On Demand

A common scenario is that the API and the front end are actually two separate websites, probably hosted on different boxes. With this in mind, you won’t be serving any of the front end code from the API box, so you can’t set the cookie when the index.html file is requested.

In this scenario, you will need to set up an endpoint that will return the token for you. This could be called after a user logs on, or it could be an endpoint that does nothing more than return a 200 and set the cookie (possibly with an expiration). It will look something like this :
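(The controller name is made up – put this wherever fits your API.)

    [Route("api/[controller]")]
    public class AntiforgeryController : Controller
    {
        private readonly IAntiforgery _antiforgery;

        public AntiforgeryController(IAntiforgery antiforgery)
        {
            _antiforgery = antiforgery;
        }

        [HttpGet]
        public IActionResult Get()
        {
            var tokens = _antiforgery.GetAndStoreTokens(HttpContext);
            Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken,
                new CookieOptions { HttpOnly = false });

            return Ok();
        }
    }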

The front end can call this endpoint at any point to generate a new CSRF token.

Using AngularJS

So what is there left to do in AngularJS? Well… actually nothing.

When $http is used inside Angular, it will go ahead and look through your browser cookies for something named “XSRF-TOKEN”. If it finds it, it will add a header called “X-XSRF-TOKEN” to every request. You don’t have to set anything special up inside AngularJS… phew.

Using jQuery

So jQuery isn’t going to be quite as easy as AngularJS, but the process will still be the same.

We are going to use the handy JS Cookie library that allows us to easily grab values out of cookies from the front end. Then we are going to take that token and send it as a header with the name X-XSRF-TOKEN.
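A rough sketch (the URL and payload are made up) :

    // Grab the token that the API dropped as a cookie.
    var token = Cookies.get("XSRF-TOKEN");

    $.ajax({
        url: "/api/values",
        type: "POST",
        headers: { "X-XSRF-TOKEN": token },
        data: { value: "test" }
    });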

CSRF, or Cross Site Request Forgery, is a type of web attack that uses a user’s own browser to post a form from one site to another. It works like so :

  • User logs into www.mybankaccount.com and receives a cookie.
  • Sometime later the user goes to www.malicioussite.com and is shown a form that looks completely innocuous on the surface – something like the sketch after this list.
  • If the user clicks the submit button, then because they are still logged in and have a cookie, the POST request is accepted.
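Here’s a sketch of what that innocuous-looking form might be hiding (the URL path and field names are made up) :

    <form action="http://www.mybankaccount.com/transfer" method="post">
        <input type="hidden" name="toAccount" value="attacker" />
        <input type="hidden" name="amount" value="1000" />
        <input type="submit" value="Click here to win a prize!" />
    </form>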

Unsurprisingly Wikipedia actually has a pretty good article on the subject and explains it a little better than I do : https://en.wikipedia.org/wiki/Cross-site_request_forgery

The most common way to protect against CSRF attacks is by using a token that is returned on a GET request for a form, and must be present for a POST request to complete. The token must be unique to the user (it cannot be shared), and can be either per-“session” or per-“request”. As always, .net core has done most of the leg work for us.

Generating Tokens

If you build a form using ASP.net core helpers then a CSRF token is automatically generated for you without any extra code required.

So both of these :
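(These sketches assume a hypothetical Home controller with an Index action.)

    <form asp-controller="Home" asp-action="Index" method="post">
        <!-- form fields -->
    </form>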

and
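    @using (Html.BeginForm("Index", "Home"))
    {
        <!-- form fields -->
    }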

End up generating a form with a request verification token built in :
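    <form action="/Home/Index" method="post">
        <input name="__RequestVerificationToken" type="hidden" value="..." />
        <!-- form fields -->
    </form>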

If you are building forms without using the ASP.net helpers, then you can generate a token manually :
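    <form action="/Home/Index" method="post">
        @Html.AntiForgeryToken()
        <!-- form fields -->
    </form>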

Validating Tokens

What good is generating tokens if you don’t validate them! There are two filters in ASP.net core to validate tokens as part of the request pipeline :

  • ValidateAntiForgeryToken – This filter validates the request token on each and every method it’s placed on regardless of the HTTP verb (So it validates even on GET requests).
  • AutoValidateAntiforgeryToken – This filter is almost the same, but it doesn’t validate tokens on GET, HEAD, OPTIONS and TRACE requests.

Typically you are going to use the second option. Very rarely are you going to want to validate GET requests, but it’s there if you need it.

You can apply the filters on an entire controller or action by using it as an attribute :
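    [AutoValidateAntiforgeryToken]
    public class HomeController : Controller
    {
        // Every POST/PUT/DELETE action in here now requires a valid token.
    }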

Or you can apply the filter globally if you prefer. In your startup.cs file, find your ConfigureServices method. You should already have a call to AddMvc but you can add a global filter like so :
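    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc(options =>
        {
            options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
        });
    }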

Adding A Passthrough

You may end up with an endpoint that you actually don’t want to validate CSRF tokens. It may legitimately be receiving POST requests from an outside source. If you have already applied the validate attributes to your controller or globally but need to allow an exception through, you can use the IgnoreAntiforgeryToken attribute on both an Action or a Controller.
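For example (the action name is made up) :

    [IgnoreAntiforgeryToken]
    [HttpPost]
    public IActionResult ExternalWebhook()
    {
        // Receives POSTs from an outside source, so no CSRF token is expected.
        return Ok();
    }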


The web has been going ballistic over a proposed ASP.net Core change.

Exhibit A : https://github.com/aspnet/Home/issues/2022

ASP.net Core was going to run solely on… .net core. Sounds confusing? Keep reading.

It’s been interesting to read the reaction on message boards such as Reddit and HN, mostly because people take the opportunity to lay the smack down on Microsoft without fully understanding the situation. Certainly if you don’t use .net core on a daily basis, it just looks like Microsoft doing whatever it wants, developers be damned.

But here’s the thing. I actually agree with Microsoft on this one, to a certain degree. And I think their eventual decision to not press forward with the changes could actually hurt ASP.net Core in the long run. This post is a bit of a riff off a Reddit comment I made, so it may be a bit all over the place.

Let me explain.

There are currently three (main) ways to write a web application in the .net ecosystem :

  • ASP.net Core
  • ASP.net Full Framework
  • ASP.net Core running on Full Framework

That last one is probably the eyebrow raiser, and the one that sounds the most “cobbled together”. It’s also the one that everyone is fighting over.

You see, you can actually run ASP.net Core on the full framework. You get all the benefits that the ASP.net Core team gives you (Kestrel etc), but you still have the ability to use full framework libraries. Sounds like a win-win, right? Kinda.

The reason this works is that ASP.net Core is written against .net standard. That means any classes/methods it calls will also be available on the full framework. But it also limits the ability for ASP.net Core to innovate – they are restricting their own platform to keep the ability to run on the full framework.

So how does the .net standard work? There is another article on this site talking about the standard in general over here, but I’ll do my best to explain it in the context of the ASP.net Core 2.0 debacle.

.net standard is essentially an agreement between the various .net platforms (UWP, .net Full Framework, .net Core, Mono etc) that they will all implement the same functionality. Think of it like an interface, where a class can write an implementation of a method however it likes, but the important thing is that it implements it in the first place. The issue with the standard as it is today is that everyone has to move together. If the .net core team says “OK, we would like this new thing in the standard please”, they have to wait for agreement from another platform to put it into the standard and implement it on their end too (usually this is going to be the full framework team).

The problem the ASP.net Core team faces right now is that they have almost zero baggage and can move at an extreme pace. They are able to innovate and throw things into their platform as fast as a developer can code, really. That’s why people are using ASP.net Core running on the full framework – they are enjoying the performance improvements that ASP.net Core offers. But for the team to keep improving AND for ASP.net Core to keep supporting the ability to run on the full framework, the full framework needs to add things almost as fast so that new versions of the .net standard can be released.

That’s not happening.

And what that means is that if the ASP.net Core team writes a new feature that utilizes something that is NOT in the .net standard at that point, they cannot actually release it until the .net framework catches up. The ASP.net Core team has basically said that for them to keep moving at the same pace, they are going to have to make their libraries work on .net core only, so that the only limitation on releasing new features is their own ability to write them.

To me that’s understandable. They probably need some sort of pathway for those that have been caught in the middle of this hybrid approach, but we can’t have a web framework that everyone loves for its ability to innovate be tied down by a legacy framework’s ability to do the same.

Microsoft has backtracked and said that they will continue supporting ASP.net Core running on the full .net framework. I think it will be interesting to see how this pans out a year from now. Giving something like another year’s support seems like a good plan, but I don’t like the open-ended “we will see in a year” type of support.

What’s your thoughts?