In a previous post I talked about building cross-platform GUI applications using Eto.Forms. One thing that came up was that to run the applications on Linux, you needed to have Mono installed and run mono with the executable as an input argument. That’s probably fine if you are building a web application that will only ever live on a handful of servers, but if you are distributing a desktop application you may not want to have to guide a user through downloading and using mono themselves.

Luckily, there is actually a way of bundling mono with your application that removes the need for a user to download, install, and then run mono. This means you can distribute your application to a Linux machine without any worry about prerequisites. Now I know this isn’t really .NET Core, but one of the most important things about Core is its cross-platform nature. And since Core doesn’t have desktop application support, I may as well write a little bit on how to make that happen using Mono.

Setup and Install

The craziest thing about looking up how to use mkbundle on Windows is that there is very little information about it from the past 5 years. Everything I found was from 2013 or earlier, and it was very outdated. It often involved downloading Cygwin and fiddling with paths and environment variables until things just sort of fell into place. To be fair, there comes a time in every Windows developer’s life when they resign themselves to the fact they will have to install Cygwin for whatever flaky piece of software they need to use, but today is not that day!

Head over to the Mono website and download the latest version for your PC : https://www.mono-project.com/download/stable/

I personally went with the 32-bit version, just because everything I read out there was using that version and I didn’t want to run into an issue that “needed” the 32-bit version after all.


This part is important so I’ve put it in bold and put it between horizontal rules just to make sure you read this part! 

After install comes an extremely important step. This might sound like a little sidebar but please, if this is your first time trying mkbundle on Windows, you will thank me later. If you try and run mkbundle right away, you will get an error that likely looks like this :

ERROR: The SDK location does not contain a C:\Program Files (x86)\Mono/bin/mono runtime

I smacked my head against the wall for an age with this. Eventually I actually tracked down the source code for mkbundle on Github. And found this line here : https://github.com/mono/mono/blob/master/mcs/tools/mkbundle/mkbundle.cs#L536

What it’s trying to do is check that you have a *file* called “C:\Program Files (x86)\Mono\bin\mono”. I bold that part about it being a file because it’s not looking for a directory. On non-Windows systems, files don’t need to have extensions (like .exe), but on Windows they typically do. So what we actually need to do is make sure that this “test” passes. And it’s simple. Go to your Mono bin folder. In there, you should find a mono.exe. Make a *copy* of this file, and remove the extension so it is simply called “mono”. It should sit side by side with your existing exe.
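From a command prompt (an elevated one, since we’re writing into Program Files), that’s just :

cd "C:\Program Files (x86)\Mono\bin"
copy mono.exe mono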

Now this should satisfy the file check and mono should run fine. I have logged a ticket on Github with Mono around this issue here : https://github.com/mono/mono/issues/7731 . So if you have the same problem or you’re coming from Google after smacking your head against the desk repeatedly with this issue, jump on and add your 2 cents!


Fetching The Correct Mono Runtime

OK with that out of the way, after installing everything you should now have a “Mono Command Prompt” available to you on your machine. Just type “Mono” in your start menu and it should pop up!

This works essentially like a regular command prompt, with the Mono commands already built in.

Now the next step is a little tricky. It’s not like you can run mkbundle and suddenly you have an executable for every OS in existence. Instead you need to fetch the runtime for the particular OS you want your application to run on, and bundle it for that particular runtime. If we package an exe with mkbundle right now then the only people that can run that application are people with the same OS (In our case Windows). This is not as dumb as you might first think. If we did do this, it would mean we could distribute an application that could run on Windows without Mono (Obviously), but more importantly without the .NET Framework. There are definitely times where this could come in handy, but for now, we want to build for Linux, so let’s do that.

There are supposed to be commands to fetch and download various runtimes to your machine right from the command prompt. Of course, Mono being a flaky POS at times (Sorry, it’s getting irritating working through these issues), the command doesn’t work at all. Instead if we run the command that “should” fetch available runtimes we get :

System.Net.WebException: Error: TrustFailure (The authentication or decryption has failed.)

And if instead we try the command that supposedly downloads the runtime we want (For example we know the runtime signature so just slam that in), we will get :

Failure to download the specified runtime from https://download.mono-project.com/runtimes/raw/

So, we have to do everything manually.

First go to this URL : https://download.mono-project.com/runtimes/raw/. You need to download the runtime for the particular OS you want to compile for. Once downloaded, you need to extract it to a particular directory in your documents folder. If I downloaded mono 5.10.0 for Ubuntu, then the directory I extract to should be : C:\Users\myuser\Documents\.mono\targets\mono-5.10.0-ubuntu-16.04-x64\

Once this has been done, we should then be able to run a command within the mono command prompt :  mkbundle --local-targets . The output of this should be all “targets” we have available to us.

mkbundle Command

Navigate to the directory of the application you want to “bundle”. In my case I’ve created a simple “HelloWorldConsole” application that does nothing but print out “Hello Mono World”. Inside this directory I run the following command : mkbundle HelloWorldConsole.exe --simple -o HelloWorldBundleUbuntu --cross mono-5.10.0-ubuntu-16.04-x64 . Here HelloWorldConsole.exe is my built console app, the -o flag is what I want the output filename to be, and the --cross flag says which runtime we want to compile for.

Looks good to me! But let’s test it. Just for comparison’s sake, I also run the following command : mkbundle HelloWorldConsole.exe --simple -o HelloWorldBundleDefault . This will give us a bundled application, but it will be using our default mono runtime (Which in this case is Windows). This will be important later for showing the differences.

Just quickly, I also want to show the size difference between our original application, and our bundled app.


So in terms of bundling, using Windows mono we add about 4MB. For the Ubuntu bundle, we added 8MB. Your guess is as good as mine when it comes to why the huge size difference, but the main thing is that adding 4-8MB actually isn’t that bad when you consider a user now doesn’t have to worry about downloading mono themselves. That’s actually a relatively small bundle when you think about other assets that may be going along with the app like sound, sprites, images etc.

Let’s go ahead and copy these two bundles to our Ubuntu machine for testing.

First we will take a look at our default bundle. Now this shouldn’t work at all because it’s been built for Windows. We even have a command that we can use to check the file type. So let’s try that.
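That command is file . Run it against the bundle we just copied over :

file HelloWorldBundleDefault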

So straight away it’s telling us this is for Windows. And when we run it…

Yep, so it ain’t happening. Let’s instead try our Ubuntu bundle. First the file command to see if it recognizes that it’s a different sort of application :
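file HelloWorldBundleUbuntu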

So even though under the hood it’s actually the exact same C# executable, it’s been compiled correctly for Ubuntu. And when we run it :

Yes! We did it! It only took me a day of headaches to work out every little issue with mkbundle to get a HelloWorld bundle working on Ubuntu! But we did it!

Final Notes

One final thing I want to say is that Mono/mkbundle is pretty crap to work with on Windows. I know there is definitely going to be someone out there that hates me for saying that, but it truly is. For some of the issues I ran into (Notably the fact it checks for an extension-less mono executable when running mkbundle), I can see stackoverflow questions from over a year ago describing the exact same problem. These problems are nothing new.

Even the documentation was rife with issues on Windows. Simply put, the documented commands didn’t work. In some ways I can understand that Mono is built with non-Windows machines in mind, but it is shockingly bad for a product that Microsoft has now taken under its wing.

This article is part of a series on setting up a private nuget server. See below for links to other articles in the series.

Part 1 : Intro & Server Setup
Part 2 : Developing Nuget Packages
Part 3 : Building & Pushing Packages To Your Nuget Feed


Packaging up your nuget package and pushing it to your feed is surprisingly simple. The actual push may vary depending on whether you are using a hosted service such as MyGet/VSTS or using your own server. If you’ve gone with a third party service then it may pay to read its documentation, because there may be additional security requirements when adding to your feed. Let’s get started!

Packaging

If you have gone with building your library in .NET Standard (Which you should), then the entire packaging process comes down to a single command. Inside your project folder, run  dotnet pack and that’s it! The pack command will restore package dependencies, build your project, and package it into a .nupkg.

You should see something along the lines of the following :
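The exact paths and version will be your own, but the tail of the output should read something like :

Successfully created package 'C:\Projects\MyLibrary\bin\Debug\MyLibrary.1.0.0.nupkg'.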

While in the previous article in this series we talked about managing the version of your nuget package through a props file, you can actually override this version number on the command line. It looks a bit like this :
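For example, to force a version of 1.2.1 no matter what is in the csproj/props (any valid SemVer will do) :

dotnet pack /p:PackageVersion=1.2.1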

I’m not a huge fan of this as it means the version will be managed by your build system which can sometimes be a bit hard to wrangle, but it’s definitely an option if you prefer it.

An issue I ran into early was that many tutorials talk about using nuget.exe from the command line to package things up. The dotnet pack command is actually supposed to run the same nuget command under the hood, but I continually got the following exception :

After a quick google it seemed like I wasn’t the only one with this issue. Switching to the dotnet pack command seemed to resolve it.

Publishing

Publishing might depend on whether you ended up going with a third party hosted nuget service or the official nuget server. A third party service might have its own way of publishing packages to a feed (All the way to manually uploading them), but many will still allow the ability to do a “nuget push” command.

The push command looks like the following :
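Swap in your own package name, feed URL and API key (the ones below are placeholders), but the shape of it is :

dotnet nuget push MyLibrary.1.0.0.nupkg --source https://www.myget.org/F/myfeed/api/v3/index.json --api-key <your-api-key>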

Again this command is supposed to map directly to the underlying nuget.exe command (And unlike pack, that one actually worked for me), but it’s better to be consistent and just use the dotnet command instead.

If you are using a build server like VSTS, Team City, or Jenkins, they likely have a step for publishing a nuget package that means you don’t have to worry about the format of the push command.

Working With VSTS

For my actual nuget server, I ended up going with Microsoft VSTS as the company I am working with already had their builds/releases and source control using this service. At first I was kinda apprehensive because other VSTS services I’ve used in the past have been pretty half assed. So color me completely surprised when their package hosting service was super easy to use and had some really nifty features.

Once a feed has been created in VSTS, publishing to it is as easy as selecting it from a drop down list in your build.

After publishing packages, you can then set them to be prerelease or general release too. In a small dev shop this may seem overkill, but in large distributed teams being able to manage this release process in a more granular way is pretty nifty.

You can even unlist packages (So that they can’t be seen in the feed, but they aren’t completely deleted), see a complete history of publishes to the feed, and see how many times the package has been downloaded. Without sounding like a complete shill, Microsoft definitely got this addon right.

Summary

So that’s it. From setting up the server (Or hosted service as it turned out), developing the packages (Including wrangling version numbers), and putting everything together with our build server, it’s been surprisingly easy to get a nuget server up and running. If you’ve followed along and setup your own server, or you’ve already got one up and running (Possibly on a third party service), drop a comment and let me know how it’s working out!

This article is part of a series on setting up a private nuget server. See below for links to other articles in the series.

Part 1 : Intro & Server Setup
Part 2 : Developing Nuget Packages
Part 3 : Building & Pushing Packages To Your Nuget Feed


In our previous article in this series, we looked at how we might go about setting up a nuget server. With that all done and dusted, it’s on to actually creating the libraries ourselves. Through trial and error and a bit of playing around, this article will dive into how best to go about developing the packages themselves. It will include how we version packages, how we go about limiting dependencies, and in general a few best practices to play with.

Use .NET Standard

An obvious place to begin is what framework to target. In nearly all circumstances, you are going to want to target .NET Standard (See here to know what .NET Standard actually is). This will allow you to target .NET Core, Full Framework and other smaller runtimes like UWP or Mono.

The only exception to the rule is going to come when you are building something especially specific to those runtimes, for example a Windows Forms library that will have to target the full framework.

Limit Dependencies

Your libraries should be built in such a way that someone consuming the library can use as little or as much as they want. Often this shows up when you add dependencies to your library that only a select few need to use. Remember that regardless of whether someone uses that feature or not, they will need to drag that dependency along with them (And it may also cause versioning issues on top of that).

Let’s look at a real world example. Let’s say I’m building a logging library. Somewhere along the way I think that it would be handy to tie everything up using Autofac dependency injection as that’s what I mostly use in my projects. The question becomes, do I add Autofac to the main library? If I do this, anyone who wants to use my library needs to either also use Autofac, or just have a dependency against it even though they don’t use the feature. Annoying!

The solution is to create a second project within the solution that holds all our Autofac code and generate a secondary package from this project. It would end up looking something like this :
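With hypothetical project names for our logging example :

MyLogger.sln
├── MyLogger            (core package, no Autofac dependency)
└── MyLogger.Autofac    (secondary package, references MyLogger and Autofac)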

Doing this means that anyone who wants to also use Autofac can reference the secondary package (Which also has a dependency on the core package), and anyone who wants to roll their own dependency injection (Or none at all) doesn’t need to reference it at all.

I used to think it was annoying when you searched for a nuget package and found all these tiny granular packages. It did look confusing. But any trip to DLL Hell will make you thankful that you can pick and choose what you reference.

Semantic Versioning

It’s worth noting that while many Microsoft products use a 4 point versioning system (<major>.<minor>.<build>.<revision>), Nuget packages work off what’s called “Semantic Versioning”, or SemVer for short. It’s a 3 point versioning system that looks like <major>.<minor>.<patch>. It’s actually really simple and makes sense when you look at how version points are incremented.

<major> is updated when a new feature is released that breaks backwards compatibility. For example if someone upgrades from version 1.2.0 to 2.0.0, they will know that their code very likely will have to change to accommodate the upgrade.

<minor> is updated when a new feature is released that is backwards compatible. A user upgrading a minor version should not have to touch their existing code for things to continue working as per normal.

<patch> is updated for bug fixes. Bug fixes should always be backwards compatible. If they aren’t able to be, then you will be required to update the major version number instead.

We’ll take a look at how we set this in the next section, but it’s also important to point out that there will essentially be 3 version numbers to set. I found it’s easier to just set them all to exactly the same value unless there is a really specific reason to not do so. This means that for things like the “Assembly Version” where it uses the Microsoft 4 point system, the last point is always just zero.

Package Information

In Visual Studio, if you right click on a project, select properties, then select “Package” on the left hand menu, you will be given a list of properties that your package will take on. These include versions, the company name, authors and descriptions.

You can set them here, but they actually then get written to the csproj. I personally found it easier to edit the project file directly. It ends up looking a bit like this :
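As a sketch, with placeholder values for everything :

<PropertyGroup>
  <TargetFramework>netstandard2.0</TargetFramework>
  <Version>1.2.0</Version>
  <AssemblyVersion>1.2.0.0</AssemblyVersion>
  <FileVersion>1.2.0.0</FileVersion>
  <Authors>Joe Bloggs</Authors>
  <Company>MyCompany</Company>
  <Description>A short description of the package.</Description>
  <Copyright>Copyright 2018 MyCompany</Copyright>
</PropertyGroup>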

If you edit the csproj directly, then head back to Visual Studio properties pane to take a look, you’ll see that they have now been updated.

The most important thing here is going to be all the different versions seen here. Again, it’s best if these all match with “Version” being a SemVer with 3 points of versioning.

For a full list of what can be set here, check the official MS Documentation for csproj here.

Shared Package Information/Version Bumping

Take the example solution above in the “Limit Dependencies” section, where we have multiple projects that we want to publish nuget packages for. We will likely want to be able to share various metadata like Company, Copyright etc among all projects from a single source file rather than having to update each one by one. This might extend to version bumping too, where for simplicity’s sake when we release a new version of any package in our solution, we want to bump the version of everything.

My initial thought was to use a linked “AssemblyInfo.cs” file. It’s where you see things like this :
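Something along these lines, with placeholder values :

using System.Reflection;

[assembly: AssemblyCompany("MyCompany")]
[assembly: AssemblyCopyright("Copyright 2018 MyCompany")]
[assembly: AssemblyVersion("1.2.0.0")]
[assembly: AssemblyFileVersion("1.2.0.0")]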

Look familiar? The only issue was that there seemed to be quite a few properties that couldn’t be set using this, for example the “Author” metadata. And secondly, when publishing a .NET Standard nuget package, it seemed to completely ignore all “version” attributes in the AssemblyInfo.cs, meaning that every package was labelled as version 1.0.0.

By chance I had heard about a feature of “importing” shared csproj fields from an external file. And as it turned out, there was a way to place a file in the “root” directory of your solution, and have all csproj files automatically read from this.

The first step is to create a file called “Directory.Build.props” in the root of your solution. Note that the name actually is “Directory”; do not swap it for the actual directory name (For some stupid reason I thought this was the case).

Inside this file, add in the following :
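Again the values here are placeholders; any of the package properties from the previous section can live in this file :

<Project>
  <PropertyGroup>
    <Version>1.2.0</Version>
    <AssemblyVersion>1.2.0.0</AssemblyVersion>
    <FileVersion>1.2.0.0</FileVersion>
    <Authors>Joe Bloggs</Authors>
    <Company>MyCompany</Company>
    <Copyright>Copyright 2018 MyCompany</Copyright>
  </PropertyGroup>
</Project>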

And by magic, all projects underneath this directory will overwrite their metadata with the data from this file. Using this, we can bump versions or change the copyright notice of all child projects without having to go one by one. And on top of that, it still allows us to put a description in each individual csproj too!

Summary

The hardest part about all of this was probably the shared assembly info and package information. It took me quite a bit of trial and error with the AssemblyInfo.cs file to work out it wasn’t going to happen, and surprisingly there was very little documentation about using a .props file to solve the issue.

The final article in the series will look into how we package and publish to our nuget feed. Check it out here!

I’ve been having a bit of fun setting up a Nuget Server as of late, and learning the nuances of versioning a .NET Standard library. With that in mind, I thought I would document my approach to how I got things going and all the pitfalls and dead ends I ended up running into. This series will read a bit less like a stock standard tutorial and more like a brain dump of what I’ve learnt over the past few days. I should also note that in some cases, we aren’t able to go all-out .NET Core. For example the official Nuget Server web application is full framework only – not a whole lot we can do about that.

The series will be broken into three parts. The server setup and the different options you have for actually hosting your nuget feed, different ways you can go about structuring your project and code for ease of use when packaging into a nuget feed, and how to go about actually building your package and pushing it to your server.

Part 1 : Intro & Server Setup
Part 2 : Developing Nuget Packages
Part 3 : Building & Pushing Packages To Your Nuget Feed

Why?

The first question you may ask is why? Why have a private nuget server? It may seem like a bit overkill but there are two scenarios where a private nuget feed will come in handy.

  1. If you or your company work on multiple projects and end up copying and pasting the same boilerplate code into every project, that’s a good reason to set up a nuget server. The actual impetus for me to go ahead and set one up was having to copy in our custom “logging” code for the *nth* time into a project. This is pretty common in services companies where projects last 3-6 months before moving onto the next client.
  2. The second reason is similar, but it’s more around your architecture. If you have an architecture focused on microservices either by way of APIs or ESBs, then it’s likely that you are working on lots of smaller standalone projects. These projects will likely share similar code bases for logging, data access etc. Being able to plug in a custom nuget feed and grab the latest version of this shared code is a huge time saver and avoids costly mistakes.

Hosted Services

There are a number of services that will host your nuget feed for you. These usually have the bonus of added security, a nice GUI to manage things, and the added benefit of a bit of support when things go wrong. Obviously the downsides are that they aren’t free, and it also means that your packages are off-premises which could be a deal breaker for some folks.

An important thing to note is that almost all hosted solutions I looked at offered other package manager types included in your subscription. So for example hosted NPM feeds if you also want to publish NPM packages. This is something else to keep in mind if you are looking to go down this route.

I looked into two different services.

MyGet

MyGet seems to be the consensus best hosted solution out there if you are looking for something outside the Microsoft eco-system. Pricing seemed extremely cheap, with only limited contributors but unlimited “readers” – let me know if I got that wrong, because I re-read the FAQ over and over trying to understand it. They do offer things like ADFS integration for enterprise customers, but that’s when things start to get a tad expensive.

I had a quick play around with MyGet but nothing too in-depth, and honestly it seemed pretty solid. There wasn’t any major feature that I was wow’d by, but then again I was only publishing a few packages there trying things out, nothing too massive.

Something that was interesting to me was that MyGet will actually build your nuget packages for you, all you need to do is give it access to your source control.

This seems like a pretty helpful service at first, but then again, they aren’t going to be running your pipeline with unit tests and the like, so maybe not too useful. But it’s pretty nifty none the less.

Visual Studio Team Services

Microsoft VSTS also offer a hosted package manager solution (Get started here). I think as with most things in the VSTS/TFS world, it only makes sense if you are already in that eco-system or looking to dive in.

The pricing for VSTS works in two ways. It’s free for the first 5 users, and every user from there costs approx $4 each. Users count as readers too, so you can’t even consume a nuget package without a license (Which is pretty annoying). Or, if a user has an “Enterprise” license (Basically an MSDN subscription, which most companies in the Microsoft eco-system already have), then they can use the VSTS package manager for free. As I say, this really only makes sense if you are already using VSTS.

Now obviously when you use VSTS Package Manager, it’s also integrated into builds so it’s super easy to push new packages to the feed with a handy little drop down showing feeds already in VSTS Package Manager. I mean it’s not rocket science but it was kinda nice none the less.

The GUI for managing packages is also really slick with the ability to search, promote and unlist packages all from within VSTS.

Official Nuget Server

The alternative to using a hosted solution is to have an on-premises private nuget feed (Or possibly on a VM). There are actually a few different open source versions out there, but for now I’ll focus on the official nuget server package from Microsoft. You can check out instructions to get it up and running here.

Now this was actually my preferred option at first. Keeping control of the packages and having complete control of the hosting seemed like the best option. But that quickly changed for a few reasons…

Firstly, there is zero privacy/authorization on consuming packages from the nuget server. This means you either have to run with an IP whitelist (Which IMO is a huge pain in the ass), host locally, or just pray that no one finds the URL/IP of your server. This seemed like a pretty big deal breaker.

There is an API Key authorization that you can set in the web.config, but this only restricts publishing packages, not consuming them. This in itself seemed like an annoyance because it meant that the keys to the kingdom were a single API Key, and with that you could essentially do what you liked. Because there is no “promoting” of packages in the feed (Everything published is automatically the latest version in the feed), this seemed like it could lead to trouble.

The packages themselves are actually stored on the server disk with no ability to offload them to something like Blob/S3. This seemed alright at first (Disk space is pretty cheap), but it also means you can’t use a hosted solution like Azure Websites because when deploying to these, it blows away the entire previous running instance – with your packages getting the treatment at the same time. This means you had to use actual hardware like a VM or host it locally on a machine.

So the obvious thought was, let’s just host it locally in the office in a closet. This gives us complete control and means no one from the outside can access it. But, what if we do want someone from the outside world to access it? Like if someone is working from home? Well then we could setup a VPN and th…. you know what. Forget it. Let’s use VSTS.

Not to mention there is no GUI to speak of.

There is a package called “Nuget Gallery” that seems to add a facelift to everything, but by this point it was a lost cause.

Summary

In the end I went with VSTS. Mostly because the company I am working with already has VSTS built into their eco-system for both builds and source control, it just made sense. If they didn’t, I would have certainly gone with MyGet over trying to host my own. Hosted solutions just seemed to be much more feature rich, and often for pennies.

Next up we will be talking about how I structured the code when building my libraries, and some tricks I learnt about .NET Standard versioning (Seriously – so much harder than I thought). Check it out here!

Swagger is an auto-magically generated API documenting tool. It takes any standard Web API project and can generate amazing looking (And functioning) docs without a user having to write a single additional line of documentation. Best of all, it can be as simple as a 2 line setup, or as complex as adding additional info to every single API endpoint to explode the level of info inside Swagger.

Getting Started

For the purpose of this guide, I’m just going to be using the standard ASP.NET Core Web API template when you create a new project from Visual Studio. But any existing API will work just fine too!

First off, install the following Nuget package from your package manager console.
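That package being Swashbuckle.AspNetCore (all of the SwaggerGen/UseSwaggerUI calls below come from it) :

Install-Package Swashbuckle.AspNetCore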

Next in the ConfigureServices method of your startup.cs, add the following code to add the Swagger services to your application.
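A minimal registration looks like this (Info lives in the Swashbuckle.AspNetCore.Swagger namespace; the title and version are whatever you want displayed on your docs page) :

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddSwaggerGen(swagger =>
    {
        swagger.SwaggerDoc("v1", new Info { Title = "My Example API", Version = "v1" });
    });
}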

A couple of things to note here, firstly that inside the SwaggerGen lambda you can actually specify a few more details. As an example :
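Building on the registration above :

services.AddSwaggerGen(swagger =>
{
    swagger.SwaggerDoc("v1", new Info { Title = "My Example API", Version = "v1" });

    // Emit enums as their string names rather than integer values.
    swagger.DescribeAllEnumsAsStrings();

    // Emit all parameter names in camelCase.
    swagger.DescribeAllParametersInCamelCase();
});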

Here we have said that for any enum, instead of using the integer value, use the string. And for all parameters, can we please use CamelCase. The defaults usually suit most, but if there are specific things you are looking for in your docs, you can probably find the setting here.

Secondly is obviously the Info object. Here you can specify things like the documentation author, title, and license among other things.

Head down to the Configure method of your Startup.cs. Add a call to “UseSwagger” and a call to “UseSwaggerUI”. Both of these should come before the call to UseMvc.
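Something like the following :

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Serves the generated swagger.json document.
    app.UseSwagger();

    // Serves the interactive documentation UI.
    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "My Example API V1");
    });

    app.UseMvc();
}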

And that’s it! Navigate your browser to https://localhost:{yourport}/swagger  to view your new API documentation.

If you don’t see anything, or it looks a bit odd, jump to the end of this article for a quick troubleshooting session!

XML Comments

The documentation that is auto generated is usually pretty damn good, and if you are building a restful API, it is often enough to explain the functions of your API on its own. But there are times when the API needs a bit more explaining. For that, you can use XML comments on your API actions. For example :
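Here’s a hypothetical “get user” action to illustrate (the route and model are made up for the example) :

/// <summary>
/// Gets a single user by their id.
/// </summary>
/// <param name="id">The id of the user to fetch</param>
/// <returns>The matching user</returns>
[HttpGet("{id}")]
public IActionResult Get(int id)
{
    // Hypothetical lookup - the body of the action doesn't matter to Swagger.
    return Ok(new { Id = id, Name = "Test User" });
}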

If you are using Visual Studio, you can type three forward slashes in a row and it will auto generate a skeleton set of comments for you. Most importantly is the summary and parameter descriptions that are free text. They are invaluable for being able to explain what an endpoint does and what input it expects.

Next you need to force your application to actually generate the XML data that Swagger can then read. Right click on your project in Visual Studio and select Properties. On the panel that opens up, select “Build” on the left hand side. You should see an option for “Output”, and a checkbox for “Xml documentation file”. Tick this box and the setting will be auto filled out for you.

Note that this setting is per build configuration. If you intend to use Swagger remotely (And therefore likely be built in Release mode before deploying), then you should change the Configuration setting up top on this panel to “Release” and then retick the documentation tickbox.

If you are not using Visual Studio, or you are just interested in how things work behind the scenes : doing all of this just adds the following line to your csproj file.
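That line being a DocumentationFile property (the path will match your own project name, configuration and target framework) :

<PropertyGroup>
  <DocumentationFile>bin\Release\netcoreapp2.0\SwaggerExample.xml</DocumentationFile>
</PropertyGroup>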

Next you need to head back to the ConfigureServices method of your startup.cs and add a call to IncludeXmlComments in your Swagger configuration.
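Assuming the XML file ends up alongside your binaries, it looks a bit like this :

services.AddSwaggerGen(swagger =>
{
    swagger.SwaggerDoc("v1", new Info { Title = "My Example API", Version = "v1" });

    // AppContext.BaseDirectory means we pick the file up from the build output folder.
    swagger.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, "SwaggerExample.xml"));
});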

Where SwaggerExample.xml is the xml file you set in your csproj/project configuration.

When you view Swagger again you should now see your XML comments displayed inside the documentation.

Up the top right is our description of our endpoint. And in the id row for our parameters, we also have a description value.

Describing API Response Codes

There may be times where your API returns a non “200” response code that you want to provide documentation for. For example an error of 400 if a particular parameter doesn’t fit certain requirements.

The first step is to decorate your actions with “ProducesResponseType” attributes that describe all the possible return codes your endpoint will give out. At the same time you can describe, for a given code, what model you will be returning. So for example, if when you return an error 400 you return a particular class that describes the error, you can define that here.
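Sticking with the hypothetical user endpoint, and with UserViewModel/ErrorModel standing in for your own classes :

[HttpGet("{id}")]
[ProducesResponseType(typeof(UserViewModel), 200)]
[ProducesResponseType(typeof(ErrorModel), 400)]
public IActionResult Get(int id)
{
    if (id <= 0)
        return BadRequest(new ErrorModel());

    return Ok(new UserViewModel());
}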

A quick note that you don’t need to specify the return of 200, that is implied, but it’s nice to add anyway. When you view this endpoint in swagger, the non 200 return codes are displayed at the bottom of the endpoint description.

While this lets you know that certain responses are expected, it doesn’t actually give you the reason why they would be returned. For that, we turn again to XML comments.
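Adding <response> tags on the same action does the trick :

/// <summary>
/// Gets a single user by their id.
/// </summary>
/// <param name="id">The id of the user to fetch</param>
/// <response code="200">The user was found and returned</response>
/// <response code="400">The id supplied was not a valid positive integer</response>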

Now when we view this endpoint in Swagger again we have the descriptions next to the response codes.

Troubleshooting

I can’t see anything

Check that the nuget package  Microsoft.AspNetCore.StaticFiles is installed in the project. This is required by Swagger to run. If you are unsure, just try installing the package again, this has seriously fixed the issue for me before.

I’m using MVC Core

If you are using the “MVCCore” service rather than just plain MVC, then you need to explicitly add the API Explorer services. Confused? Head to your ConfigureServices method in your startup.cs. If you see this :
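services.AddMvc();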

Then you are fine. However if you see this :
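services.AddMvcCore();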

Then you need to manually add the ApiExplorer service.
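services.AddMvcCore().AddApiExplorer();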

I’m not using “Attribute Routing”

Then Swagger won’t work for you. You must be using attribute routing to use Swagger.

This will be the third and last part of our series that will help you to get up and running to do F# development with .NET Core on Linux.

We’ve made it this far and all we’re missing are just the finishing touches.

As promised we’ll be looking at how we can link our build, run and debug workflow into vscode.

The editor expects the project’s configuration to be located in a folder called .vscode at the project’s root folder, containing two files called launch.json and tasks.json . We’ll be looking at how we can create and update them both manually and automatically.

Build

Let’s start with an easy one, the build task.

Anywhere in the editor just run the default build command ( ctrl+shift+b ) and vscode will let you know that it couldn’t find a build task for the project and offer to create one.

We’re going to select ‘Configure Build Task’ and choose ‘.NET Core’

That’s going to create a tasks.json file inside your .vscode folder (which will be created automatically if it didn’t exist). If the .NET Core option doesn’t show up for you, don’t worry, manually create the tasks.json file and paste the following.
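It should look roughly like this (the old 0.1.0 task schema vscode generated at the time, pointed at the fsproj from this series) :

{
    "version": "0.1.0",
    "command": "dotnet",
    "isShellCommand": true,
    "args": [],
    "tasks": [
        {
            "taskName": "build",
            "args": ["${workspaceRoot}/fsharp_tutorial.fsproj"],
            "isBuildCommand": true,
            "problemMatcher": "$msCompile"
        }
    ]
}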

Now we’re ready to build our project! Run the build command again ( ctrl+shift+b ) and this time you’ll see the following output.

Run

Now that we’ve set up our build task, adding one for running our project is trivial. Jump again into the tasks.json file and just add another task to the “tasks” array that looks like the following.
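Because the task name doubles as the dotnet sub-command in this schema, a task named “run” will execute dotnet run for us :

{
    "taskName": "run",
    "problemMatcher": "$msCompile"
}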

Save the file and fire up the vscode command palette ( ctrl+shift+p ), select ‘Tasks: Run Task’ and you’ll see your newly created ‘run’ task. Once you’ve selected it, vscode’s ‘output’ panel should show something like

Debug

This will be the last thing we add and it is as easy as the other two with a small caveat.

Remember how I mentioned there were going to be two configuration files? We’ll make use of the other one now. In order to debug, we will not be creating a task but a launch configuration. Automatically creating the configuration file is as simple as pressing F5 and selecting .NET Core as our environment. By default this will create the launch.json file with three default configurations, of which we’ll only be focusing on one for this article.

Right now, you might be tempted to re-run debug (F5) because you created the configuration, but you’ll quickly be prompted with an error message that reads something like

launch: launch.json must be configured. Change ‘program’ to the path to the executable file that you would like to debug

Don’t worry, we’ll sort it out in seconds.

Once you open up your launch.json, you’ll realize that while vscode created the configuration, it hasn’t really specified which program it should debug; instead the path for the program contains two placeholders. Because we’ve already built our project, we can see what values it is expecting, so go ahead and replace <target-framework> with netcoreapp1.1 and <project-name.dll> with fsharp_tutorial.dll .

Your “.NET Core Launch (console)” configuration should now look like this
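Give or take the odd default, something like :

{
    "name": ".NET Core Launch (console)",
    "type": "coreclr",
    "request": "launch",
    "preLaunchTask": "build",
    "program": "${workspaceRoot}/bin/Debug/netcoreapp1.1/fsharp_tutorial.dll",
    "args": [],
    "cwd": "${workspaceRoot}",
    "stopAtEntry": false,
    "externalConsole": false
}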

All that is missing now is giving it a try, so just place a breakpoint and hit debug (F5).

And we’re done! We’ve covered the basic setup to get your F# on linux journey started.

You can find the finished project here

This will be the second part of a series of 3 articles that will help you to get up and running to do F# development with .NET Core on Linux.

Ok, for this particular entry we’ll be talking about tools that’ll improve our development experience.

State of the art

Unfortunately, one of the downsides of not developing on Windows is not having access to the excellent Visual Studio IDE, but fear not, we’ll be covering a setup that should make us feel at home. Today’s post will focus on Visual Studio Code (vscode), a fantastic open source code editor developed by Microsoft. Sometime in the future we’ll be covering alternative tools, such as the promising Jetbrains Rider IDE and VIM/Emacs.

Now head onto vscode’s download page and install the package corresponding to your favorite distro (both deb and rpm as well as 32 and 64 bits packages are provided).

If you’ve been following our previous entry, you should have a folder fsharp_tutorial , go ahead and open it.

Extensions

The vscode extensions ecosystem is huge and worth spending some time looking at what it has to offer. For the scope of this series, we’ll only be covering one: ionide-fsharp . This extension comes to us thanks to the people at ionide, and on top of the one that we’ll be looking at, they provide two more excellent F# extensions that you should check out.

Go ahead and install it, you’ll be prompted to reload the current window to enable it. If you search for it again, you should see something like this

While the out of the box settings should be enough to get you started, if you want to change something you can find the extension settings under ‘FSharp configuration’ or by searching ‘fsharp’ in the settings page.

Taking it for a ride

We’ll go over a few things now, so open up our Program.fs  file.

First thing we want to do is to add the add function that we used to test F# interactive in our previous post. Assuming things are working as intended, the first thing you should notice is the automatic function type signature displaying on top of our function.
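Here it is again from the first post in the series :

let add x y = x + y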

The second thing now will be to test the integration with F# interactive. We can do this in several different ways, but for this post we’ll use the default ‘send line to fsi’ shortcut, which is alt+/ .

Now your function is part of the interactive tool context and can be run.

You can find the rest of the options by opening up the command palette ( ctrl+shift+p ) and searching for fsi .

On the next and final entry for the series, we’ll finish setting vscode for building, running and debugging our program.

This will be the first part of a series of 3 articles that will help you to get up and running to do F# development with .NET Core on Linux.

Installing .NET Core

The first thing we want to do is to install .NET core. Microsoft provides us with very easy to follow instructions for your favorite distro here.

Once the installation is over you can quickly check that it went well by running the following command.
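dotnet --version

If a version number prints out, the installation went well.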

Installing F#

At this stage we will be able to create, run and build projects but we still don’t have access to the language itself.

For that we will follow the instructions from the F# Foundation located here. One of the biggest benefits of doing so is having access to the F# interactive tool, which we’ll use now to verify the installation went well by running the following command.
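On Linux the F# interactive binary is called fsharpi :

fsharpi

If you’re dropped into an interactive prompt, you’re good to go (ctrl+d will get you back out).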

Taking it for a ride

F# Interactive

Let’s use it to write a very simple function to see it in action.
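A classic add function will do :

> let add x y = x + y;;
val add : x:int -> y:int -> int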

Remember that in the context of F# interactive you need to terminate your instructions with  ;;

Dotnet CLI

The first time you fire up  dotnet it will take a few seconds while it bootstraps and prepares its cache.

We’ll create a new console app project which will be the basis for the series.
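dotnet new console -lang F#

The -lang flag is what gives us an F# project instead of the default C# one (depending on your shell, you may need to quote the F#).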

This will assume you’re running this command inside a folder called  fsharp_tutorial and that will be the name of our project as we won’t be specifying one.

You should now have 2 new files called  fsharp_tutorial.fsproj and  Program.fs .

Like with any other .NET core project type, we’ll need to restore its packages before we can try to do anything useful.
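dotnet restore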

Now we’re ready for the moment of truth as we’ll run our program for the first time.
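dotnet run

If everything is wired up correctly, you’ll be greeted by the template’s hello world message.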

If you see the above it means that your installation went as intended and we’re ready for our next step in this series which will be setting up our development environment.

Migrating a project.json .net core project to the latest csproj format can be a bit of a minefield. There are a few gotchas and even best practices that you likely did when using a project.json based project that will actually make the migration to csproj fail.

But first, why should you not upgrade :

  • Your team is not ready to use Visual Studio 2017 (e.g. no licenses). csproj based projects can only be opened in Visual Studio 2017, they cannot be opened in Visual Studio 2015
  • Your team is using another IDE such as Jetbrains Rider that does not have support (Or complete support yet) for csproj based projects.

If those aren’t you, then it is really a no brainer to upgrade your project to the latest .net core tooling.

One important thing to note is that tooling is not the same as the .net core version. Tooling refers to things like project.json vs csproj and how the .net core CLI works. It does not refer to the version of .net core (1.0 or 1.1). Updating the tooling does not change the version of the actual .net core runtime!

Preparation

You should go ahead and download the latest .net core SDK. If this is your first time doing this, you should first read this article on how to run two versions of the .net core SDK side by side. This is really important if you intend to work on both project.json and csproj projects on the same machine for some time, as you could end up not being able to open projects without the right global.json setup.

On the project you wish to migrate, check the root of your project/solution for a global.json file. This file is used to determine which SDK to use, but annoyingly it can cause havoc when you are trying to migrate. The reason being that when you migrate, you want to be able to run the migrate command using the latest tooling, but the global.json may point to an older version. If you try to migrate with this going on you will see the following error message :

Take a backup of the global.json and delete it from the root folder.

Using the Command Line

The command we are going to run from the command line is “dotnet migrate”. If we move our command line to the solution directory and run it, it will recursively move down into our projects and migrate them all to the csproj format.

It really is as simple as that!

Remember to create a global.json file in the root of your directory specifying what tooling version we are now on. Realistically this is an unneeded step, but if you upgrade your tooling again on your machine you will want this project to be unaffected. It’s also handy when setting up a new developer’s machine to be able to remember which version of the SDK you actually need!
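It only needs a few lines; swap the version for whatever dotnet --version reports on your machine :

{
  "sdk": {
    "version": "1.0.4"
  }
}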

Rolling Back

There is no CLI command to “migrate down” a .net core project, but you can rollback manually. Inside each project directory you will find a backup folder with your original project.json file. You can pull this out and replace the csproj files with your project.json files again. Remember that after moving back to project.json, you will need to create/update your global.json file with the correct SDK versioning so it is able to be opened by Visual Studio 2015 again.

C# as a language has always been well respected, but convincing developers who traditionally develop on Macs and Linux machines to swap to a Windows box was always a tough sell. While there has been Mono, developing on a non-Windows machine was pretty much always a non-starter. With .net core, not only are you able to run your apps cross platform, but the tooling has improved to the point where you can actually develop on a non-Windows machine pretty painlessly.

One such option is to use VS Code to develop in. VS Code is a mix of “editor”-like simplicity (so something like Sublime) and the power of Visual Studio. If you are in love with Visual Studio, then it may seem a bit more “restrictive” (certainly VS Code doesn’t carry the bloat of Studio), but it makes up for it in speed and ease of use. What’s also great is that if you have a mix of developers who like developing on Windows, Mac and Linux, you can have a consistent IDE across the team.

Getting Started

First up go ahead and install VS Code

Then you will still need to install the .net Core SDK. The tooling is cross platform, but make sure you download the correct SDK for your machine. It’s possible (Especially on Windows), that you may need a restart of your machine after installing the SDK.

Go ahead and open up VS Code now. On the left hand menu hit the icon labelled extensions, or press Ctrl + Shift + X. This should open up the extensions menu where you can install additional plugins to VS Code. Remember, VS Code on its own is extremely light and is basically a notepad editor. To be able to use intellisense and debugging, we need to install a couple of extras.

On the extensions menu, go ahead and install the C# addon. After hitting the install button you need to also click “reload” which reboots the editor.

Creating A .Net Core Project

VS Code itself doesn’t deal with project templating. If you are used to Visual Studio where you can create new projects within the IDE, this may be a bit painful to get used to. Instead you need to create your projects from the command line. Go ahead and create a folder on your machine where you want to have your work live (In the examples below, I created a folder called “VSCodeExample”, use this if you want to follow along at home). Open a command prompt/bash/terminal, and move to this directory.

First we need to create a “solution” which is essentially going to be our container for holding all our projects. A “solution” file is a bit like a .project file if you are coming from Eclipse. It’s a one stop shop for opening an entire “solution” and knowing all the projects that are part of it. In truth, when using VS Code you actually don’t need a solution file at all; you instead are opening the “folder”. But there are two main reasons for still creating it.

  • If you have other developers that are using Visual Studio, then this makes their lives a hell of a lot easier (In fact, you will need a solution file to make it work at all).
  • Build commands can point to a solution now instead of a project which means that you can build a “set” of projects just by pointing your build at the solution file.

We are going to create a solution file and that’s that!

In your command prompt type :
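dotnet new sln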

This creates a new solution file ready to be used. It should also be noted the .sln file is named the same as its parent folder.

Go ahead now and create another folder inside your existing workspace, and call it the name of your desired “project”. In this case, I’m going to go ahead and call mine “VSCodeExample.Web”. Your “solution” directory should now look like this :

OK, now take your command prompt and move into your project folder (VSCodeExample.Web), and type :
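dotnet new mvc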

This creates a new MVC project that is pretty similar (If not identical), to what Visual Studio would create for you. It basically lays down all the boilerplate code so you can jump right in and start working.

It should be noted that typing “dotnet new -all” will list all templates that are available to you. In our case we went with an MVC template, but you can do things like Web API templates, console apps and class libraries.

OK, now remember what I said about being a great neighbour with the solution file? Move your command prompt up a level to your solution directory and type the following :
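dotnet sln add VSCodeExample.Web/VSCodeExample.Web.csproj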

What this does is add your project into the solution file so that when someone comes along and opens it in Visual Studio, they are right up to speed. Again, it’s not required for working in VS Code but is still a nice habit to get into.

Finally, we need to go ahead and restore some packages that our projects need. In your command prompt from inside the solution folder, run the following :
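dotnet restore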

This restores nuget packages that are missing from your project. If you aren’t sure what the hell nuget is, just follow along for now and it will be explained further later!

Let’s go ahead and open our new baby up in VS Code. In VS Code go File -> Open Folder and select your “solution” folder. The first time you open a new .net core project, You will see the following bar come across the top of your screen.

Pressing yes on this creates a launch.json and a tasks.json file inside your solution. These are used to be able to launch and debug your project from VS Code. You can create these manually and edit them, but it’s easier just to let VS Code handle this for us, so click Yes.

On your tree menu, go into your new web project -> Controllers -> HomeController and have a play around. You will quickly notice that all the intellisense, syntax highlighting, and annoying warnings that you know and love from Visual Studio are here.

Running Your Project In VS Code

This is going to be a pretty short section. Simply press F5 in VS Code to have your project launch. Any build errors will need to be resolved before launching your project, but otherwise a new browser window/tab should open with your project running in it. Breakpoints in VS Code all work as per normal, and things such as callstack, watches, output console etc are all there too.

There is however one small bug. If you get the following message “Could not find the prelaunch task build”. The first thing you should do is try and close VS Code and reopen it again. As stupid as this sounds, this happens on new projects all the time and it’s resolved with a quick open and close!

Managing Nuget Packages

Nuget is the package system for .net. If you’ve used NPM, Bower or Composer before, it’s all pretty similar.

Now if you have used Nuget before, know that in the latest version there are numerous changes: no more packages.config, no more local cache of packages (There is now a global cache per machine), and plenty of others. There are too many new things to really put in this article, but it is worth going and having a read on what you can expect.

To add a package to your project, you now have to use the command line (Which isn’t as scary as you might think!). If we wanted to add the “Newtonsoft.Json” package to our project, we would run the following in the project folder :
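dotnet add package Newtonsoft.Json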

This will add the reference in your csproj, then run a “restore” on your project to ensure you have the package in your global cache. If you check inside your csproj file for your project, you should see something similar to the following :
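<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="10.0.3" />
</ItemGroup>

(The version will be whatever the latest happened to be when you ran the command.)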

You can actually add and remove references to nuget packages directly in the csproj file if you think it’s going to be easier, but remember that from the command line you need to run “dotnet restore” to ensure that you have all the packages required. When you are trying to consolidate multiple projects to the same nuget package version, it can be easier to edit the project file directly rather than hand typing the dotnet add command in your console.

Adding Project References

In any project larger than a “Hello World”, you are going to be building class libraries that need to be referenced from other projects in your solution. Again, another Visual Studio sugar that you are going to miss when moving to VS Code. Luckily there is a dotnet command for that!

The exact command is :
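dotnet add <project> reference <referenced-project>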

Or in our example app : I have added a new project called “VSCodeExample.Library”, and I’m going to add it to my web project. I run the following from my solution directory.
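dotnet add VSCodeExample.Web/VSCodeExample.Web.csproj reference VSCodeExample.Library/VSCodeExample.Library.csproj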

When I open up my VSCodeExample.Web.csproj I now see the following :
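<ItemGroup>
  <ProjectReference Include="..\VSCodeExample.Library\VSCodeExample.Library.csproj" />
</ItemGroup>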

And again, you can add this project reference directly in your csproj if you prefer it that way. There is no magic going on behind the scenes when you run the dotnet command, it literally just adds that XML.

Anything Else?

No doubt I’ve missed a couple of things that are specific to your project. The best advice I can give is to do a quick scan on the dotnet command list here. You don’t need to read in depth on every single command, but when you get into a jam it usually gives you a lightbulb moment of “Yeah… I think I remember there was a dotnet command for that…”. And as always, drop a comment below if you get stuck with anything else!