Advantages of Tracking NAV Setup Data in Source Control


In this post I’ll be covering how you can use XMLPort objects, XML data and source control to track changes to your configuration and setup data in Microsoft Dynamics NAV 2009 R2+. In this particular example we will be using NAV 2013 R2 since that’s what I’ve got locally on my system at this time.

Why would I do this?

Good question. The simple answer is that you would do this if you wanted a traceable history of all changes made to the setup and configuration of a NAV DB over the course of various enhancements, customizations and upgrades. Another benefit to this approach is for those looking to do automated builds or automated unit testing. If all of your configuration and setup data is tracked by source control, then you can quite easily add this to a continuous integration process driven by a server such as Jenkins or Bamboo, and have new NAV databases spit out each night with the latest setup data and code baked right in. I'll get to how we are doing continuous integration in another post later on.

Step 1: Prepare Your Starting Point

First things first, you are going to need to do some homework and isolate the tables in your system that hold setup data you actually care about. This process can be tedious, but it's well worth it: once you've done it, you won't have to go back to this level of analysis again. You'll only need to review new areas as they are implemented, or as upgrades add new setup tables that previously weren't configured.

The strategy I recommend here is to divide up your setup/config data into 3 distinct categories.

  1. Data that is consistent across any implementation.
  2. Data that differs consistently between implementations.
  3. Data that differs with each implementation.

For example, if every site that was deployed used the SAME standardized Chart of Accounts, this would be a category 1 XMLPort. On the other hand, each site might have a different set of Locations that include a standardized site ID as a naming construct (200-STOCK, 200-RETURNS etc.), which would make this a category 2 XMLPort. Category 3 is data that is essential to the setup of a system but is truly unique to each and every customer site. The only reason you'd store this as an XMLPort is if you had a base template you wanted populated to save you some time on initial setup when you start configuring the system for that particular customer.

Why differentiate? So that later on, if you decide to do some fancy PowerShell work on automating database preparation for each of your next 100 customer deployments, you can run import/export processes on these XMLPorts in distinct groups very easily, purely by the object ranges you assign them. More importantly, you can perform some data transformation on the ones that may change consistently with each deployment. Remember our Location example? It’s easy to use PowerShell to do some find/replace work on the data files before importing them.

Example:

7000 ==> 7001

7000-RETURNS ==> 7001-RETURNS

Here’s an example of what this looks like within the data file.

[Image: setupdata1]
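If you want to automate that transformation, here's a minimal PowerShell sketch of the idea. The folder path and the 7000 ==> 7001 mapping are examples only, and a blind string replace like this will also hit any other occurrence of the old ID in a file, so you'd want to tighten the pattern for real data:

  # Sketch: rewrite site-specific IDs in exported XMLPort data files
  # before importing them into a new deployment. Path and IDs are examples.
  $dataDir   = 'C:\NAV\SetupData\Category2'
  $oldSiteId = '7000'
  $newSiteId = '7001'

  Get-ChildItem -Path $dataDir -Filter '*.xml' | ForEach-Object {
      (Get-Content -Path $_.FullName -Raw) -replace $oldSiteId, $newSiteId |
          Set-Content -Path $_.FullName -Encoding UTF8
  }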

Step 2: Generate the XMLPort Objects & Data Files

Now that you've identified the list of things you'd like to extract from the database, we can move on to the fun stuff. Building XMLPorts manually can be overly tedious, and we are lazy. Since we don't have an intern to give this task to, we wrote a CodeUnit that generates the XMLPort objects for you: simply point it at the table you are interested in and give it the object number to save the new port as. Let's be honest, if we had an intern they'd probably be smarter than us and would have written an even better way to do this! 😉

Special Note: By default XMLPorts export as UTF-16, or at least on our system they did. In order to get the data files to show up properly with default settings in Kiln, we had to make sure that the XMLPort objects created by XMLPortGen are encoded as UTF-8.

[Image: xmlportgen1]

Here’s a download link to it: C50099

Warning: Use at your own risk! Never test in production unless you are wearing a red cape!

Step 3: Generate XML Files

After you've got all of the XMLPorts created and imported into your database, you simply need to generate the XML data files by running each port individually.

[Image: xmlportexport1]

Note: Make sure that you save the XML data files into a folder that is being managed by your source control system.

[Image: xmlportexport2]

If you are a keener, it wouldn’t take much to write some code that would run all ports given a specific object range. We’ve got a CodeUnit that does this as well. If there is interest I can dig that up and post it here. Just let me know.

[Image: xmlportexport4]

So now we've got our XML data files exported and sitting in the right spot. Our next step is to commit them to our source code control system. In this example, using Kiln, you can simply right-click anywhere in the folder and select Hg Commit…

[Image: xmlportexport5]

This brings us to the TortoiseHg Commit window. This is a tool that allows you to work directly with source control. See my series on source control with Dynamics NAV if you want a more in-depth overview of the tools being shown here.

[Image: xmlportexport6]

In the window on the left, we see a list of all the XML files we just saved to this directory. On the upper right, we have a window where we enter a meaningful commit message so that when other developers pull this change down onto their systems, they can tell what was done without having to read through the files or view the file diff. Lastly, the window on the bottom right shows the contents of each file for quick reference. Since this is the first time these XML files have been put into source control, there is no difference analysis done.

On subsequent commits as changes are made to the data, you’ll see a concise view of exactly what was changed in each file. Hopefully you are starting to see why this would be exceptionally useful in tracking configuration changes as they are made.

[Image: xmlportexport9]

Above is an example of the commit window when making a change to an XML data file that’s in source control. As you can see, the original line and the newly updated line are both clearly shown. This is the type of history you can see over the entire lifetime of changes to any individual file being tracked, no different than source code.

If you were simply updating an existing XML data file you would do the following:

  1. Modify the setup/config data in NAV.
  2. Run the XMLPort object that is mapped to the table that stores the modified setup/config data.
  3. Save/overwrite the existing data file in your folder that is managed by source control.
  4. Commit the change to source control for the XML data file.
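If you find yourself repeating steps 3 and 4 over and over, they're easy to script. Here's a minimal PowerShell sketch, assuming the example file and repository paths below (and that the data file is already tracked by Mercurial):

  # Sketch: drop a freshly exported data file into the repo and commit it.
  # Paths and the commit message are examples; the file must already be tracked.
  $exported = 'C:\Temp\LocationSetup.xml'      # file just exported from NAV
  $repo     = 'C:\Repos\NAVSetupData'          # folder managed by Kiln/Mercurial
  Copy-Item -Path $exported -Destination (Join-Path $repo 'LocationSetup.xml') -Force
  hg commit -R $repo -m 'Case 1234: updated Location setup data'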

Additional Points to Consider

As you track your setup data with source control, you'll need to rely on each developer (and the team as a whole) remembering to think about setup data whenever they perform a customization. This is a manual process, so if you update setup and configuration in a table, nothing is going to remind you to dump the XML data file and update the repository accordingly!

To assist with gentle reminders to developers making setup/config changes, we added some intelligence to our in-house source control tool. Basically, if a developer makes a table schema change, the tool detects this and validates that the incoming commit includes the table object, the XMLPort object and the corresponding XML data file. If it doesn't, it warns the developer to double check and add these items to the commit if they are in fact required.

Dynamics NAV and Google Analytics – Part 1/2 | Ryan Erb

Great series from Ryan Erb on integrating Google Analytics with Microsoft Dynamics NAV. A nice way to use a proven and mature platform to do some page action usage reporting. Check it out.

Source: Dynamics NAV and Google Analytics – Part 1/2 | Ryan Erb

Home Hardware Stores Limited adopts Microsoft Dynamics NAV as a platform for in-store operations – MSDynamicsWorld.com

Jason Down, a colleague of mine, just had an interview published with MSDynamicsWorld on his work with C# and Dynamics NAV 2013 R2. Way to go Jason! If you haven't been there already and are interested in learning a ton about C# and NAV, check out his blog!

About four years ago, Home Hardware Stores Limited partnered with LS Retail to develop a Dynamics NAV solution that also gave them an integrated point of sale system. They worked together to create a customized system that Down’s team now manages and builds on, but that LS Retail still collaborates on.

Source: Home Hardware Stores Limited adopts Microsoft Dynamics NAV as a platform for in-store operations – MSDynamicsWorld.com

Source Code Control in Dynamics NAV 2013 R2 – Part 2

 

In part 2 of this series on source control I will be doing another overview covering the repository layout and the tools used to work with source control.

I'll be sticking to the basics in this post so we can focus on what this would look like right out of the box if you were to set up FogBugz and Kiln and immediately start using them for NAV development with no other tools. The focus will be on how we set things up and the look and feel of each tool. In later posts I will step through simple changes and processes at a more detailed level. There is a lot of background required to get up to speed with source control if you've never used it before. For those of you who have, this is likely a boring review with some hopefully interesting pictures that show how we married NAV and source control together effectively.

This post will skip over some initial setup steps, like creating a repository and checking it in for the first time, for the sake of brevity. Don't fret; these are all well documented as part of the Kiln product and/or the Mercurial tutorial article that I posted in the first part of this series.

Glossary

I will be using some language that is unfamiliar to those who haven't used source control systems, so very quickly, I'll go over some of the most common terms and what they mean.

Repository – This is a collection of files and folders that is being tracked by source control. Local repository refers to the repository that is on your machine. Remote repository refers to a repository sitting on a server somewhere.

Commit – This is what you do when you want to make your source control system aware of the changes you’ve made to a file or set of files.

Push – When you want to send changes from your local repository to a remote repository you do this.

Pull – When you want to bring changes made by other people down from the remote repository to your local repository you do this.

Changeset – A group of code changes to 1 or more files. This is basically a delta with some extra metadata attached (like a message explaining the change etc.).

Kiln vs. Mercurial – Mercurial is an open source distributed version control system. Kiln is a product that Fog Creek Software built on top of Mercurial to enhance its functionality and integrate it with FogBugz.
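For those who prefer the command line to the TortoiseHg tooling shown later, the plain Mercurial equivalents of these operations look like this (Kiln repositories are standard Mercurial repositories; the case-number message format is the convention we use, covered below):

  hg commit -m "Case 1234: describe the change"   # record your changes locally
  hg push                                         # send local changesets to the remote repository
  hg pull -u                                      # fetch remote changesets and update your working copy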

Repository Layout

First, we took the entire set of Dynamics NAV objects and dumped them out of the database into text format. As we all know, there are a ton of objects, so we needed to place them in some sort of organized structure. We created a folder for each type of object. You can store them and name them however you wish.

[Image: p2-1-overallrepo]

Note: The .hg folder and the corresponding .hgignore and .hgtags files are added as part of Kiln and are automatically generated when items are being "tracked" by source control. The overlay icons don't show up until you've added your items to source control, and they signify each item's "state": green checkmark = good; red exclamation mark = changes have been made that aren't committed yet.

Inside of each folder you can see that we’ve got a number of text objects.

[Image: p2-2-codeunits]

And if we open up one of these objects we can see NAV code.

[Image: p2-3-codeunits-3]

If you made a change to one of these text files and saved those changes, Kiln would pick that up right away and mark the file as being different. As you can see below, I added some text to the version tag of this object and saved my changes. Notice the icon displayed on the C50002.txt file. If I were to open the file again, remove the text I added and save my changes, the icon would go back to being a green check mark within a few seconds.

[Image: p2-2-codeunits-chg3]

The TortoiseHg Workbench

The workbench is the tool you use to work with Kiln. It allows you to commit your changes, as well as push to and pull from remote repositories.

[Image: p2-5-tortoise]

In the image above, what you see is a running history of all the changes we've made in this particular repository. Under the Description column we put a message describing each commit and use the special markup "Case 1234:", which ties the source code modifications right back to the business case that was logged for the change.

If you look in the lower right portion of the graphic you’ll see that there is NAV code listed. The lines that have been removed are red and the lines that have been added are green. Kiln keeps track of the changes to each and every single line of code over time.

FogBugz & Kiln Integration

I keep mentioning how our source code changes are linked to our web based case management tool, FogBugz. This all happens as part of the built in integration that FogBugz has with Kiln.

[Image: p2-6-fb1]

This is a case from FogBugz that has 2 changesets associated with it. You can see this via the Kiln Changesets menu. From here I can drill down even further and explore the exact source code changes that were made very easily. Since we track bugs, features and other development tasks for various projects in FogBugz this simplifies my work when I’m trying to track down what changes we made as part of a bug fix or major feature release when taking support calls or doing investigative work.

What’s Next…

That’s it for this post. I want to keep them short and concise. In the next post, I’ll do an end-to-end simple change to a NAV object and commit that change to our repository.

MSDN Article: How to Compile a Database Twice as Fast (or faster)

This is an older post but nevertheless a good one for those looking at perhaps creating a build server for NAV development. Ours currently runs a single instance since we are only doing nightly builds right now. Going forward we want to start doing them a little more frequently, so this is in the cards for us.

At first glance, some intelligence needs to be built in to handle dependencies between objects, so you need to be smart about how you divide things up.

Neat little workaround concept though, and I thought I'd share!

How to Compile a Database Twice as Fast (or faster) – Microsoft Dynamics NAV Team Blog – Site Home – MSDN Blogs.

Source Code Control in Dynamics NAV 2013 R2 – Part 1

 

I’ve promised for a long time that I’d get around to writing an article on how our team is leveraging distributed version control with Microsoft Dynamics NAV 2013 R2 and I’m finally getting the time to do this.

We've got a NAV development team of 5 individuals, and we commonly work on multiple projects covering multiple areas of the system at the same time. We were a C# shop before we got into the NAV world, so we had become used to using distributed version control like Git and Mercurial. When we started to learn more about how NAV worked, we realized that normally the only source control used by developers was the lock/modify mechanism built right into the NAV development environment. As a result, we decided we'd figure out a way to use what we were familiar with, even if that meant having to develop some of our own custom tools along the way.

This will be a comprehensive multi-part series where I cover the reasons for us using source control with NAV development, our development model, basic usage of source control with NAV objects and setup data and finally some end-to-end examples using our custom built source control tool.

I'm going to assume you've got some background on source control systems. If not, stop over at HgInit and read through the excellent generic tutorial put together by Joel Spolsky as a primer.

Last but not least, the commercially available tools I’m using in the various pictures and posts below include:

  • FogBugz for case management & issue tracking
  • Kiln for source code control
  • TortoiseHg workbench for working with repositories

First, Why Even Bother?

Well, the advantages of using source control are many. First of all, it gives us the ability to track and view the history of our objects and setup data. Yep, I said setup data. Since setup data is text, we dump that out using XML Ports and add it to source control as well. This means not only do we have a history of all changes made to NAV objects but we’ve also got a history of all changes to setup information in the system.

[Image: diagram4]

This can be especially valuable when doing troubleshooting for customers or when finding some anomalies with setups at a customer site and reverting them back to a known good configuration.

Making this even sweeter, our source control system is linked to our case management system and business requirements system. So we now have the source code changes that were made directly linked to the case logged for the bug or feature which in turn is linked to the business requirements for that particular feature or bug fix.

On top of being able to keep a history of things, we also are able to work on the same areas of the system at the same time more effectively. Nobody has to wait on someone else to modify an object.

What We Had To Build

Unfortunately, what we wanted to accomplish when we set out on the road to source control using NAV didn’t exist. Sure, you could manually export objects as text files using the NAV development environment but that would be incredibly tedious for any sort of large change across multiple objects.

As a result, we built a tool that intelligently interfaces between our source code control system and the NAV database to make our developers' lives easier and automate many of the tedious tasks.

[Image: diagram5]

This tool, called SCM, monitors the NAV database for object changes. As objects are changed, we use SCM to select and export the changed objects from the NAV database and dump them into the appropriate local repository on our system. We also use this tool to pull in changes from other developers if more than one team member is working on the same project and code base.

Our Development Model

Each developer works locally. When they start a new project or initiative, they create a new database (from a backup of a master build database… more on that later) along with a new branch and local repository. The developer then works on the features and commits locally. At the end of each work session, they push their changes up to the remote repository on the source control server for safekeeping.

This diagram demonstrates how things are set up for a single developer working on a single project.

[Image: diagram1]

So How Does It All Work Together?

That's a great question. I'm going to take you through our entire process in the next series of blog posts. We will start a new project and cover the repository structure and layout, make some changes to NAV objects, pull in other developers' changes and merge them, and last but not least, push our changes up for other developers to handle.

Stay tuned and thanks for making it this far!

Installing NAV 2013 and 2013 R2 Side by Side

This post is a summary of a couple different articles and forum posts I’ve read online. Essentially Microsoft Dynamics NAV 2013 and NAV 2013 R2 share some of the same files and thus you have to tweak the installation of 2013 R2 a bit in order to still be able to run 2013 without issues.

To start, you need to make sure that your 2013 R2 installation is higher than build number 35850 (Microsoft KB 2907588 was the original fix/update to allow both installations to co-exist).

Here’s the short list of the steps you need to take:

  1. Install NAV 2013 + SQL etc. (if you don’t already have it installed)
  2. Install NAV 2013 R2 (build level above 35850).
  3. Run the attached PowerShell script (NAVRegFix) as administrator.

This worked like a champ for me.

 

Using SQL Server 2012 for NAV 2013+ Backup & Restore

Our environment at the office is made up of 5 developers working full time on NAV 2013 development. The developers all work locally, and we use a combination of FogBugz, Kiln and an in-house tool we've built to manage the import/export of NAV objects into our source control tool as we work. I'll write another article later on about how we are using Kiln with NAV for distributed source control, but I can say that so far it's been a huge success for us and has allowed us to work on multiple features, sometimes in the same area of the system, without developers tromping all over each other.

Anyhow, we have a number of scenarios in the office that require us to quickly backup and restore various NAV databases running under SQL Server 2012. In earlier versions of NAV this would commonly be done via the Classic Client development environment using the NAV backup tools. We’ve found this to be slow and wanted something faster.

Note: As of NAV 2013 R2, the Classic Client method of NAV database backups isn’t even an option any longer!

Enter SQL Server 2012 Backup/Restore.

This is pretty standard stuff for people used to working with SQL Server, and it works just as well for NAV databases, but I figured I'd share a step-by-step all the same.

Prerequisites: You're going to need SQL Server Management Studio installed on the machine you're using for this tutorial. In most cases this will have been installed along with SQL Server 2012, but if that option was missed during your initial setup, you'll need to either go back to your installation and modify it to include this toolset or, if you're using SQL Server 2012 SP1 or greater, simply download the tools directly from the SQL Server 2012 SP1 Express download page. Select either of the following (depending on your platform):

  • SQLManagementStudio_x64_ENU.exe (64-bit)
  • SQLManagementStudio_x86_ENU.exe (32-bit)

SQL Server Database Backup

Step 1: Start up SQL Server Management Studio

In this example I'm logging in as the SQL Server administrator account that was set up when I installed SQL Server 2012, since it's on my development machine. You can also use a Windows account for the authentication method, as long as that account has sufficient privileges in SQL Server to perform backup and restore operations on the databases you are working with.

[Image: backup1]

Step 2: Select a Database for Backup

Expand the databases list and right-click on the database you wish to back up. Select Tasks > Back Up… from the list of options.

[Image: sc003]

Step 3: Name Your Backup & Execute

Depending on what type of recovery model you are using, and whether you are using a single backup set with multiple backups within it, you may need to adjust the default values on this screen. For this example, we are doing a "Full" backup, as opposed to a Transaction Log or Differential backup type.

[Image: backup2]

Note: If you want to change the name or location of your backup file, you’ll need to remove the existing destination entry and then add one of your choosing as illustrated below.

[Image: backup3]

Last but not least simply click “OK” to start the backup process.

[Image: sc004]

That’s it! You’ve done it.
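If you do this regularly, the same full backup can be scripted rather than clicked through. Here's a minimal sketch using the Invoke-Sqlcmd cmdlet from the SQL Server 2012 PowerShell module (SQLPS); the instance, database name and backup path are all examples, so adjust them to your environment:

  # Sketch: scripted full backup of a NAV database. Names/paths are examples.
  Import-Module SQLPS -DisableNameChecking

  $database   = 'Demo Database NAV (7-1)'
  $backupFile = 'C:\Backups\DemoDatabaseNAV71.bak'
  $sql = "BACKUP DATABASE [$database] TO DISK = N'$backupFile' " +
         "WITH INIT, NAME = N'$database - Full Backup';"

  # Bump -QueryTimeout for large databases.
  Invoke-Sqlcmd -ServerInstance 'localhost' -Query $sql -QueryTimeout 600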

SQL Server Database Restore

Step 1: Choose a Backup to Restore

You need to be logged in to SQL Server Management Studio as a user with sufficient privileges to perform the restore operation on SQL Server. For this example we are again using the administrative user I set up when installing SQL Server on my machine.

 

Step 2: Change Database Options (if required)

Sometimes when you are restoring a database, you may wish to name it something other than what it was originally called. Perhaps you are cloning a customer or development database for testing purposes, or to work on a new project separate from other work. In order to do this you need to adjust a few options on the database before you restore, or you'll run into conflicts. As a general rule, you should not restore a database onto a SQL Server instance that already has a database with the same name.

Choose the Device option, then click the browse ("…") button and navigate to where you've stored your SQL database backup.

[Image: sc006]

Once you’ve selected a database backup to restore, now is the time to change the name of it if you need to. If you aren’t already running a database with the same name you can simply click “OK” here to start the restore process.

If you do need to change the name you can do this by changing the Database field under the Destination heading. We’ve used a new name of “Another Database NAV (7-1)” in this example.

[Image: backup4]

Next, you'll need to click on the Files page and change the names of both the Data and Log files so that they are different from what is listed in the backup. If you don't change these names as well, you'll get an error when trying to restore the database.

As a general rule I always name these data and log files with the same name as my database.

[Image: backup5]

Now that you've changed the database name and the respective data and log files, you can click "OK" and restore the database. The length of time required will vary based on how large the original was and on the speed of your machine.
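As with the backup, repeat restores (cloning a database under a new name, say) can be scripted. Here's a minimal sketch, again via Invoke-Sqlcmd; every name and path here is an example, and the logical file names after MOVE must match what's actually inside your backup (check them with RESTORE FILELISTONLY):

  # Sketch: restore a backup under a new database name. All names are examples.
  Import-Module SQLPS -DisableNameChecking

  $backupFile = 'C:\Backups\DemoDatabaseNAV71.bak'
  $newName    = 'Another Database NAV (7-1)'

  # The logical names below are assumptions; verify with:
  #   RESTORE FILELISTONLY FROM DISK = N'C:\Backups\DemoDatabaseNAV71.bak'
  $sql = "RESTORE DATABASE [$newName] FROM DISK = N'$backupFile' " +
         "WITH MOVE N'Demo Database NAV (7-1)_Data' TO N'C:\Data\AnotherDatabaseNAV71.mdf', " +
         "MOVE N'Demo Database NAV (7-1)_Log' TO N'C:\Data\AnotherDatabaseNAV71.ldf';"

  Invoke-Sqlcmd -ServerInstance 'localhost' -Query $sql -QueryTimeout 600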

Anyways, hope this helps some folks out. Let me know if you’ve got any questions.

MSDN Source Links

For more thorough, Microsoft-based information on backup and restore of SQL Server databases, just hit up the MSDN article on this process here.

Improving NAS performance in NAV 2009 and NAV 2013

Found a good post, which we've implemented, over on Greg Kaupp's blog regarding improving the performance of the NAV Application Server (NAS) and thus the responsiveness of the client machines. Simply put, adjusting the MetadataProviderCacheSize to a number exceeding the total number of objects within NAV (~5000 or so) will greatly enhance the performance of the NAS.

To update this for your instance of NAV you need to modify your CustomSettings.config file which should be located in the following spots:

NAV 2009 – C:\Program Files\Microsoft Dynamics NAV\60\Service\

NAV 2013 – C:\Program Files\Microsoft Dynamics NAV\70\Service\

Once you open the CustomSettings.config file, update the MetadataProviderCacheSize setting as depicted below.

FROM:

[Image: Metadata Update 1]

TO:

[Image: Metadata Update 2]
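For reference, the screenshots showed the before/after config lines; the entry in CustomSettings.config looks roughly like this (the key name is the real NAV setting, the "before" value is whatever your installation shipped with, and ~5000 is the target suggested above):

  <!-- CustomSettings.config (sketch): raise the metadata cache above the
       total NAV object count so object metadata stays cached. -->
  <add key="MetadataProviderCacheSize" value="5000" />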

And that’s it! Thanks again to Greg for posting this originally. His blog is loaded with tons of other useful tips and I encourage you to check it out at the source link below.

Read his full post here: Performance Tuning Microsoft Dynamics NAV 2009 and NAV 2013.

Embedded Systems Protection

Spotted this article on the Interwebs today and I’ve got to say the research being done by Ang Cui at Red Balloon Security is pretty impressive.

Something that’s been a growing concern for many over the past few years has been the ever increasing amount of embedded systems we use on a daily basis and the reality that when compromised these devices can both cripple infrastructure and divulge sensitive information.

Embedded systems? You may remember that thing called Stuxnet back in 2010, which was advertised as a "first of its kind" type of malware targeting industrial systems, utilizing a rootkit with an affinity for PLCs of the Siemens flavour. Right.

Fast forward 2 years. We’ve now got Cisco IP phones on our desks and portable computers sitting in our pockets. Take the trip into our homes and we’ve got everything from PVRs to Media Servers to iDevice docks and full home automation systems. We are using more embedded systems reliant on firmware than ever before and many of these systems are not just “black boxes” even though they may outwardly appear to be so.

Suffice it to say that it's not enough for us to keep up with the latest service packs and updates on our computer systems. It's not enough for us to have a dedicated IDS running on our networks and the latest and greatest security appliances of our choice combing through our bytes. Yes, I could keep going and going, but I think you get the point. Embedded systems security, and the potential gaps it can leave in our overall infrastructure security plan, are holes that need to be closed, and this is one of the reasons I'm pretty excited to see interesting work on this front hitting the scene.

What Ang is doing at Red Balloon Security is quite impressive stuff. The Symbiote is a protection mechanism he has developed to defend embedded systems firmware from exploitation through a number of unique and crafty technologies. Essentially, it can be injected into the firmware of any embedded system and, once there, will thwart any attempts to massage or otherwise compromise the integrity of the device firmware. Furthermore, any attempts to modify or alter the Symbiote itself are mitigated through the use of randomization. Pretty neat stuff.

You can get the full low down at the source link below. Certainly worth the read and it’s fantastic seeing this type of work continue to come to fruition.

Source: Meet the Symbiote: The Ironclad, Adaptable Future of Antivirus Protection.