
Source Code Control in Dynamics NAV 2013 R2 – Part 2


In part 2 of this series on source control I will be doing another overview covering the repository layout and the tools used to work with source control.

I’ll be sticking to the basics in this post so we focus more on what this would look like right out of the box if you were to setup FogBugz and Kiln and immediately start using them both with NAV development with no other tools. Focus will be on how we set things up and the look and feel of each tool. In later posts I will step through simple changes and processes at a more detailed level. There is a lot of background required to get up to speed with source control if you’ve never used it before. For those of you that have, this is likely a boring review with some hopefully interesting pictures that show how we married NAV and source control together effectively.

This post will skip over some initial setup steps like creating a repository and checking it in for the first time for the sake of brevity. Don’t fret as these are all well documented as part of the Kiln product and/or the Mercurial tutorial article that I posted in the first part of this series.


I’ll be using some terminology that may be unfamiliar to those who haven’t used source control systems before, so very quickly, I’ll go over some of the most common terms and what they mean.

Repository – This is a collection of files and folders that is being tracked by source control. Local repository refers to the repository that is on your machine. Remote repository refers to a repository sitting on a server somewhere.

Commit – This is what you do when you want to make your source control system aware of the changes you’ve made to a file or set of files.

Push – When you want to send changes from your local repository to a remote repository you do this.

Pull – When you want to bring changes made by other people in the remote repository down to your local repository you do this.

Changeset – A group of code changes to 1 or more files. This is basically a delta with some extra metadata attached (like a message explaining the change etc.).
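To make the idea concrete, a changeset boils down to a line-by-line delta. Here’s a minimal Python sketch (the object name and version tags are made up) that produces the same kind of diff a source control system stores:

```python
import difflib

def changeset(old_lines, new_lines):
    """Compute a unified diff -- the delta at the heart of a changeset."""
    return list(difflib.unified_diff(old_lines, new_lines,
                                     fromfile="before", tofile="after"))

# A hypothetical NAV object before and after bumping its version tag.
old = ["OBJECT Codeunit 50002 Sales Tools\n", "  Version List=DEV1.00;\n"]
new = ["OBJECT Codeunit 50002 Sales Tools\n", "  Version List=DEV1.01;\n"]

print("".join(changeset(old, new)))
```

The real changeset also carries metadata (author, date, commit message), but the delta above is the core of it.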

Kiln vs. Mercurial – Mercurial is an open source distributed version control system. Kiln is a product that Fog Creek Software built on top of Mercurial to enhance its functionality and integrate it with FogBugz.

Repository Layout

First, we took the entire set of Dynamics NAV objects and dumped them out of the database into text format. As we all know there are a ton of objects, so we needed to place them in some sort of organized structure. We created a folder for each type of object. You can store and name them however you wish.
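As an illustration of that sorting step, every NAV text export begins with a header line like `OBJECT Codeunit 50002 Sales Tools`, so routing each file into a per-type folder is easy to automate. A small Python sketch (the folder names simply mirror our layout and are otherwise arbitrary):

```python
import re

# Folder per object type -- these names are just our convention.
FOLDERS = {
    "Table": "Tables",
    "Page": "Pages",
    "Report": "Reports",
    "Codeunit": "Codeunits",
    "XMLport": "XMLports",
    "MenuSuite": "MenuSuites",
    "Query": "Queries",
}

def folder_for(first_line):
    """Pick a destination folder from the header line of a NAV text export."""
    m = re.match(r"OBJECT (\w+) (\d+)", first_line)
    if not m:
        raise ValueError("not a NAV object export: " + first_line)
    return FOLDERS[m.group(1)]

print(folder_for("OBJECT Codeunit 50002 Sales Tools"))
```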


Note: The .hg folder and the corresponding .hgignore and .hgtags files are generated automatically when items are being “tracked” by source control. The overlay icons don’t show up until you’ve added your items to source control, and they signify each item’s “state”. Green checkmark = good. Red exclamation mark = changes have been made that aren’t committed yet.

Inside of each folder you can see that we’ve got a number of text objects.


And if we open up one of these objects we can see NAV code.


If you made a change to one of these text files and saved it, Kiln would pick that up right away and mark the file as being different. As you can see below, I added some text to the version tag of this object and saved my changes. Notice the icon displayed on the C50002.txt file. If I were to open the file again, remove the text I added and save my changes again, the icon would go back to being a green check mark within a few seconds.


The TortoiseHg Workbench

The workbench is the tool you use to work with Kiln. It allows you to commit your changes as well as push to and pull from remote repositories.



In the image above, what you see is a running history of all the changes we’ve made in this particular repository. Under the Description column we put a message describing each commit and use special markup (“Case 1234:”) which links the source code modifications right back to the business case that was logged for the change.

If you look in the lower right portion of the graphic you’ll see that there is NAV code listed. The lines that have been removed are red and the lines that have been added are green. Kiln keeps track of the changes to every single line of code over time.

FogBugz & Kiln Integration

I keep mentioning how our source code changes are linked to our web-based case management tool, FogBugz. This all happens as part of the built-in integration that FogBugz has with Kiln.


This is a case from FogBugz that has 2 changesets associated with it. You can see this via the Kiln Changesets menu. From here I can drill down even further and easily explore the exact source code changes that were made. Since we track bugs, features and other development tasks for various projects in FogBugz, this simplifies my work when I’m taking support calls or doing investigative work and trying to track down what changes we made as part of a bug fix or major feature release.

What’s Next…

That’s it for this post. I want to keep them short and concise. In the next post, I’ll do an end-to-end simple change to a NAV object and commit that change to our repository.

MSDN Article: How to Compile a Database Twice as Fast (or faster)

This is an older post but nevertheless a good one for those looking at perhaps creating a build server for NAV development. Ours is currently using only a single instance to run since we are only doing nightly builds right now. Going forward we want to start doing them a little more frequently so this is in the cards for us.

At first glance, some intelligence needs to be built in to handle dependencies between objects, so you need to be smart about how you divide things up.
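The naive version of dividing things up is to deal object IDs round-robin across N compile sessions; anything smarter has to account for the dependency problem just mentioned. A rough Python sketch of the naive split (the IDs are made up):

```python
def partition(object_ids, workers):
    """Deal sorted object IDs round-robin across parallel compile sessions.

    Note: this deliberately ignores dependencies between objects -- in
    practice, dependent objects may need to land in the same session or
    be compiled in a second pass."""
    chunks = [[] for _ in range(workers)]
    for i, obj_id in enumerate(sorted(object_ids)):
        chunks[i % workers].append(obj_id)
    return chunks

# Split ten hypothetical object IDs across two parallel sessions.
print(partition(range(50000, 50010), 2))
```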

Neat little workaround concept though, and I thought I’d share!

How to Compile a Database Twice as Fast (or faster) – Microsoft Dynamics NAV Team Blog – Site Home – MSDN Blogs.

Source Code Control in Dynamics NAV 2013 R2 – Part 1


I’ve promised for a long time that I’d get around to writing an article on how our team is leveraging distributed version control with Microsoft Dynamics NAV 2013 R2 and I’m finally getting the time to do this.

We’ve got a NAV development team of 5 individuals and we commonly work on multiple projects covering multiple areas of the system at the same time. We were a C# shop before we got into the NAV world, so we had become used to distributed version control systems like Git and Mercurial. When we started to learn more about how NAV worked, we realized that normally the only source control used by developers was the lock/modify mechanism built right into the NAV development environment. As a result, we decided that we’d figure out a way to use what we were familiar with, even if that meant having to develop some of our own custom tools along the way.

This will be a comprehensive multi-part series where I cover the reasons for us using source control with NAV development, our development model, basic usage of source control with NAV objects and setup data and finally some end-to-end examples using our custom built source control tool.

I’m going to assume you’ve got some background on source control systems. If not, stop over at HgInit and read through the excellent generic tutorial written by Joel Spolsky as a primer.

Last but not least, the commercially available tools I’m using in the various pictures and posts below include:

  • FogBugz for case management & issue tracking
  • Kiln for source code control
  • TortoiseHg workbench for working with repositories

First, Why Even Bother?

Well, the advantages of using source control are many. First of all, it gives us the ability to track and view the history of our objects and setup data. Yep, I said setup data. Since setup data can be dumped out as text using XMLports, we add that to source control as well. This means not only do we have a history of all changes made to NAV objects but we’ve also got a history of all changes to setup information in the system.
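One practical wrinkle worth noting: diffs of exported XML are only useful if the formatting is stable, so it can pay to normalize the XMLport output before committing. A hedged Python sketch (the element names here are invented):

```python
import xml.dom.minidom

def normalize(xml_text):
    """Re-indent exported setup XML so diffs stay line-based and stable."""
    pretty = xml.dom.minidom.parseString(xml_text).toprettyxml(indent="  ")
    # toprettyxml can emit blank lines; drop them for cleaner diffs.
    return "\n".join(line for line in pretty.splitlines() if line.strip())

print(normalize("<Setup><PostingGroup>DOMESTIC</PostingGroup></Setup>"))
```

With every export normalized the same way, a change to one setup value shows up as a one-line diff instead of a reformatted file.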


This can be especially valuable when doing troubleshooting for customers or when finding some anomalies with setups at a customer site and reverting them back to a known good configuration.

Making this even sweeter, our source control system is linked to our case management system and business requirements system. So we now have the source code changes that were made directly linked to the case logged for the bug or feature which in turn is linked to the business requirements for that particular feature or bug fix.

On top of being able to keep a history of things, we also are able to work on the same areas of the system at the same time more effectively. Nobody has to wait on someone else to modify an object.

What We Had To Build

Unfortunately, what we wanted to accomplish when we set out on the road to source control using NAV didn’t exist. Sure, you could manually export objects as text files using the NAV development environment but that would be incredibly tedious for any sort of large change across multiple objects.

As a result, we built a tool that intelligently interfaces between our source code control system and the NAV database to make our developers’ lives easier and automate many of the tedious tasks.


This tool, called SCM, monitors the NAV database for object changes. As objects are changed we use SCM to select and export the changed objects from the NAV database and dump them into the appropriate local repository on our system. We also use this tool to pull in changes from other developers if more than one team member is working on the same project and code base.
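We haven’t published SCM itself, but the core selection step is easy to picture: NAV’s Object table carries a Modified flag alongside the Type and ID fields, and the tool just has to pick out flagged rows for export. A simplified Python sketch using in-memory rows in place of the real database query:

```python
# Rows mimicking records from NAV's Object table; the field names
# (Type, ID, Modified) are real NAV fields, the data here is made up.
objects = [
    {"Type": "Codeunit", "ID": 50002, "Modified": True},
    {"Type": "Table",    "ID": 50000, "Modified": False},
    {"Type": "Page",     "ID": 50001, "Modified": True},
]

def changed_objects(rows):
    """Select the objects a tool like SCM would export to the local repository."""
    return [(row["Type"], row["ID"]) for row in rows if row["Modified"]]

print(changed_objects(objects))
```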

Our Development Model

Each developer works locally. When they start a new project or initiative they create a new database (from a backup of a master build database… more on that later) and create a new branch and local repository. The developer then works on the features and commits locally. At the end of each work session, they push their changes up to the remote repository on the source control server for safekeeping.

This diagram demonstrates how things are set up for a single developer working on a single project.


So How Does It All Work Together?

That’s a great question. I’m going to take you through our entire process in the next series of blog posts. We will start a new project and cover the repository structure and layout, make some changes to NAV objects, pull in other developers’ changes and merge them, and last but not least push our changes up for other developers to handle.

Stay tuned and thanks for making it this far!

Installing NAV 2013 and 2013 R2 Side by Side

This post is a summary of a couple different articles and forum posts I’ve read online. Essentially Microsoft Dynamics NAV 2013 and NAV 2013 R2 share some of the same files and thus you have to tweak the installation of 2013 R2 a bit in order to still be able to run 2013 without issues.

To start, you need to make sure that your 2013 R2 installation’s build number is higher than 35850 (Microsoft KB 2907588 was the original fix/update that allows both installations to co-exist).

Here’s the short list of the steps you need to take:

  1. Install NAV 2013 + SQL etc. (if you don’t already have it installed)
  2. Install NAV 2013 R2 (build level above 35850).
  3. Run the attached PowerShell script (NAVRegFix) as administrator.

This worked like a champ for me.


Using SQL Server 2012 for NAV 2013+ Backup & Restore

Our environment at the office is comprised of 5 developers working full time on NAV 2013 development. The developers all work locally and we use a combination of FogBugz, Kiln and an in-house tool we’ve built to manage the import/export of NAV objects into our source control tool as we work. I’ll write another article later on about how we are using Kiln with NAV for distributed source control but I can say that so far it’s been a huge success for us and allowed us to work on multiple features sometimes in the same area of the system without developers tromping all over each other.

Anyhow, we have a number of scenarios in the office that require us to quickly backup and restore various NAV databases running under SQL Server 2012. In earlier versions of NAV this would commonly be done via the Classic Client development environment using the NAV backup tools. We’ve found this to be slow and wanted something faster.

Note: As of NAV 2013 R2, the Classic Client method of NAV database backups isn’t even an option any longer!

Enter SQL Server 2012 Backup/Restore.

This is pretty standard for people used to working with SQL Server, and it works just as well for NAV databases, but I figured I’d share a step-by-step all the same.

Prerequisites: You’re going to need SQL Server Management Studio installed on the machine you’re using for this tutorial. In most cases it is installed along with SQL Server 2012, but if that option was missed during your initial setup, you’ll need to either go back to your installation and modify it to include this toolset or, if you’re using SQL Server 2012 SP1 or greater, simply download the tools directly from the SQL Server 2012 SP1 Express download page. You just need to select either of the following (depending on your platform):

  • SQLManagementStudio_x64_ENU.exe (64-bit)
  • SQLManagementStudio_x86_ENU.exe (32-bit)

SQL Server Database Backup

Step 1: Start up SQL Server Management Studio

In this example I’m logging in as the SQL Server Administrator account that was set up when I installed SQL Server 2012, since it’s on my development machine. You can also use a Windows account for the authentication method as long as that account has sufficient privileges in SQL Server to perform backup and restore operations on the databases you are working with.


Step 2: Select a Database for Backup

Expand the databases list and right-click on the database you wish to back up. Select Tasks > Back Up… from the list of options.


Step 3: Name Your Backup & Execute

Depending on what type of recovery model you are using, and if you are using a single backup set with multiple backups within it, you may need to adjust the default values on this screen. For this example, we are using the “Full” recovery method as opposed to a Transaction Log or Differential backup type.


Note: If you want to change the name or location of your backup file, you’ll need to remove the existing destination entry and then add one of your choosing as illustrated below.


Last but not least, simply click “OK” to start the backup process.



That’s it! You’ve done it.
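For anyone who prefers scripting the dialog, the same full backup can be expressed in T-SQL. This little Python sketch just builds the statement (the database name and path are examples; you’d run the result via SSMS or sqlcmd):

```python
def backup_sql(database, backup_path):
    """Build a T-SQL full backup statement mirroring the SSMS Back Up dialog."""
    return (
        "BACKUP DATABASE [{0}] TO DISK = N'{1}' "
        "WITH INIT, NAME = N'{0}-Full Database Backup'"
    ).format(database, backup_path)

print(backup_sql("Demo Database NAV (7-1)", r"C:\Backups\DemoNAV.bak"))
```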

SQL Server Database Restore

Step 1: Choose a Backup to Restore

You need to be logged in to SQL Server Management Studio as a user with sufficient privileges to perform the restore operation on SQL Server. For this example we are again using the administrative user I set up when installing SQL Server on my machine.


Step 2: Change Database Options (if required)

Sometimes when you are restoring a database, you may wish to name it something other than what it was originally called. Perhaps you are cloning a customer or development database for some testing purposes or to work on a new project separate from other work. In order to do this you need to adjust a few options on the database before you restore, otherwise you’ll run into conflicts. As a general rule, you should not restore a database on a SQL Server that already has a database with the same name.

Choose the Device option, then click the browse (“…”) button and navigate to where you’ve stored your SQL database backup.



Once you’ve selected a database backup to restore, now is the time to change the name of it if you need to. If you aren’t already running a database with the same name you can simply click “OK” here to start the restore process.

If you do need to change the name you can do this by changing the Database field under the Destination heading. We’ve used a new name of “Another Database NAV (7-1)” in this example.



Next, you’ll need to click on the Files page and change the names of both the Data and Log files so that they are different from what is listed in the backup. If you don’t change these names as well, you’ll get an error when trying to restore the database.

As a general rule I always name these data and log files with the same name as my database.



Now that you’ve changed the name and the respective data and log files, you can click “OK” and restore the database. The length of time required to restore the database will vary based on how large the original was and the speed of your machine.
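The rename-and-relocate steps above map directly onto a T-SQL RESTORE with MOVE clauses. A Python sketch that builds the statement (the logical file names and paths are examples; you can check your own with RESTORE FILELISTONLY):

```python
def restore_sql(new_name, backup_path, data_logical, log_logical, data_dir):
    """Build a T-SQL restore that renames the database and relocates its
    data and log files, matching the Destination and Files dialog steps."""
    return (
        "RESTORE DATABASE [{0}] FROM DISK = N'{1}' "
        "WITH MOVE N'{2}' TO N'{4}\\{0}.mdf', "
        "MOVE N'{3}' TO N'{4}\\{0}_log.ldf'"
    ).format(new_name, backup_path, data_logical, log_logical, data_dir)

print(restore_sql("Another Database NAV (7-1)", r"C:\Backups\DemoNAV.bak",
                  "Demo Database NAV (7-1)_Data", "Demo Database NAV (7-1)_Log",
                  r"C:\SQLData"))
```

Note how the data and log files take the new database name, just like the manual steps in the Files page.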

Anyways, hope this helps some folks out. Let me know if you’ve got any questions.

MSDN Source Links

For more thorough Microsoft based information on backup and restore of SQL Server databases, just hit up the MSDN article on this process here.



Improving NAS performance in NAV 2009 and NAV 2013

Found a good post we’ve implemented over on Greg Kaupp’s blog regarding improving the performance of the NAV Application Server and thus the responsiveness of the client machines. Simply put, adjusting the MetadataProviderCacheSize to a number exceeding the total number of objects within NAV (~5000 or so) will greatly enhance the performance of the NAS.

To update this for your instance of NAV you need to modify your CustomSettings.config file which should be located in the following spots:

NAV 2009 – C:\Program Files\Microsoft Dynamics NAV\60\Service\

NAV 2013 – C:\Program Files\Microsoft Dynamics NAV\70\Service\

Once you open the CustomSettings.config file, update the MetadataProviderCacheSize setting as depicted below.


Metadata Update 1


Metadata Update 2
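For reference, the entry being changed is a single add key inside CustomSettings.config; it looks roughly like this (a value of 5000 follows the exceed-your-object-count guideline above):

```xml
<add key="MetadataProviderCacheSize" value="5000" />
```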

And that’s it! Thanks again to Greg for posting this originally. His blog is loaded with tons of other useful tips and I encourage you to check it out at the source link below.

Read his full post here: Performance Tuning Microsoft Dynamics NAV 2009 and NAV 2013.

Embedded Systems Protection

Spotted this article on the Interwebs today and I’ve got to say the research being done by Ang Cui at Red Balloon Security is pretty impressive.

Something that’s been a growing concern for many over the past few years has been the ever increasing amount of embedded systems we use on a daily basis and the reality that when compromised these devices can both cripple infrastructure and divulge sensitive information.

Embedded systems? You may remember that thing called Stuxnet back in 2010 which was advertised as a “first of its kind” type of malware targeting industrial systems, utilizing a rootkit with an affinity for PLCs of the Siemens flavour. Right.

Fast forward 2 years. We’ve now got Cisco IP phones on our desks and portable computers sitting in our pockets. Take the trip into our homes and we’ve got everything from PVRs to Media Servers to iDevice docks and full home automation systems. We are using more embedded systems reliant on firmware than ever before and many of these systems are not just “black boxes” even though they may outwardly appear to be so.

Suffice it to say that it’s not enough for us to keep up with the latest service packs and updates on our computer systems. It’s not enough for us to have a dedicated IDS running on our networks and the latest and greatest security appliances of our choice combing through our bytes. Yes, I could keep going and going but I think you get the point. Embedded systems security and the potential gaps it can leave in our overall infrastructure security plan are holes that need to be closed, and this is one of the reasons I’m pretty excited to see interesting work on this front hitting the scene.

What Ang is doing at Red Balloon Security is quite impressive stuff. The Symbiote is a protection mechanism that he has developed to defend embedded systems firmware from exploitation through a number of unique and crafty technologies. Essentially it can be injected into the firmware of any embedded system and once there will thwart any attempts to massage or otherwise compromise the integrity of the device firmware. Furthermore, any attempts to modify or alter the Symbiote itself are mitigated through the use of randomization. Pretty neat stuff.

You can get the full low down at the source link below. Certainly worth the read and it’s fantastic seeing this type of work continue to come to fruition.

Source: Meet the Symbiote: The Ironclad, Adaptable Future of Antivirus Protection.

Pentesting with Backtrack – OSCP

As it turns out, I’m lucky enough that within the scope of my role at my current employer they want to further improve our professional certifications in the security field. Even better, they were quite supportive of my recommendations as to which particular program I’d like to take.

As such, I will be taking a course I’ve long followed but never tackled on my own, the Pentesting with Backtrack course from Offensive Security. This ultimately garners you the Offensive Security Certified Professional (OSCP) certification, should you complete all challenges in a satisfactory manner.

Though more “n00b” than simply learning it all on your own (and certainly – you can), this particular course interests me based on the real-world practicality of how they deliver the course and measure your progress. Also, companies like certifications and credentials, and they certainly don’t hurt your resume.

Getting back to the practical, real-world nature of this course, the exam at the end is actually a full-scale network that you are given 24 hours to break into as far as you possibly can. Your abilities are thus judged on how well you can problem solve and use the tools you’ve been given or can find/build yourself. It’s more important to learn how to think correctly in these situations and, as they emphasize regularly, “Try Harder”. Now _that_ I can do.

I’m looking forward to this program and though much of what I’ve been doing over the past few years has been based on knowledge I’ve gained on my own, through experience, this certainly seems to be a very progressive approach that I’m looking forward to taking part in.

I’ll be sure to share my experiences (not course material!) here as things progress.

If you are interested in some other reviews of these programs you can find a particularly good one up on g0tm1lk’s blog.


Who are you and what did you just search for?

Most of the major search engines used by millions of people daily store more information about you than you might at first think. In fact, some of the data retention policies (covering personally identifiable information like your IP address) extend to 18 months and beyond. If you use Gmail, this is nothing new; you’re already used to it.

I came across a great multi-part article today at a blog I discovered while browsing one of my commonly frequented security forums.

The article is a great exploration of the top 5 major search engines, what information they collect about you and some comparative alternatives you can use to circumvent this behaviour if you so choose.

Physicists May Have Evidence Universe Is A Computer Simulation

I was browsing the web and came across this article that states Physicists May Have Evidence Universe Is A Computer Simulation.

Wait, what? Like the Matrix?

Sort of.

Anyhow, just for fun I thought I’d throw this link up here.