The rumours are true! Microsoft Dynamics NAV 2016 comes with an updated code editor for development that includes many new features:
- Syntax highlighting
- UNDO functionality
- IntelliSense-style code completion
- Line numbers
- Change indicators
- Syntax tool-tips
- Table definitions from the editor
So much good stuff. It’s been a long time coming and I’m excited to get my hands on a copy to start using it. You can find more details and additional pictures over on Vjeko’s blog where I first saw this.
I finally got the time to sign up for Codingame this week and plan to have some fun with a few challenges over the weekend. I've also started looking at the tool from the opposite end: as a way to evaluate talent when hiring, and as a fun system to promote within our team to foster some experience with new languages. I'll be sure to share my thoughts in a future post and review of the platform.
Being a new developer (or a seasoned veteran) you are always learning. At your job you are hopefully learning new things all the time.
Source: Always Learning – Ryan Erb
We’ve had a lot of fun stuff in the works over the past couple of months. Ryan Erb has been building out a Jenkins-based automated build server that is NAV-aware to help with our continuous integration processes. We are also starting down the road of incorporating a large battery of automated unit tests, which I will surely blog about as we progress through the trials and tribulations of that journey.
Long story short, we will be upgrading to NAV 2016 as soon as it’s released and I’m excited to share that they are making THE FULL BATTERY of automated test suites for the standard NAV functionality available to partners. YES!
Can’t wait to get my hands on these…
More information can be found on James Crowter’s blog: Part 8: Dynamics NAV 2016: Engineering – All things Dynamics | James Crowter
Jason Down shared with me a great article that he came across over on kauffmann.nl regarding reading NAV server settings directly from C/AL code. Very useful when you are doing things like generating and importing XMLPorts automatically and so forth. We are going to use this to update some of our practices where we were either hard coding or using other methods to determine this.
Check out the full article here: Read Server Settings from C/AL code
Ever watch G.I. Joe? Knowing is half the battle…
In this post I’ll be covering how you can use XMLPort objects, XML data and source control to track changes to your configuration and setup data in Microsoft Dynamics NAV 2009 R2+. In this particular example we will be using NAV 2013 R2 since that’s what I’ve got locally on my system at this time.
Why would I do this?
Good question. The simple answer is that you would do this if you wanted to have a traceable history of all changes made to the setup and configuration of a NAV DB over the course of various enhancements, customizations and upgrades. Another benefit to this approach is for those looking to do automated builds or automated unit testing. If all of your configuration and setup data is tracked by source control, then you can quite easily add this to a process based on continuous integration such as Jenkins or Bamboo and have new NAV databases spit out each night with the latest setup data and code baked right in. I’ll get to how we are doing continuous integration in another post later on.
Step 1: Prepare Your Starting Point
First things first, you are going to need to do some homework and isolate the tables in your system that hold setup data you actually care about. This process can be tedious, but it’s well worth it: once you’ve done it, you won’t have to go back to this level again and will only need to review new areas as they are implemented, or as upgrades add new setup tables that weren’t previously configured.
The strategy I recommend here is to divide up your setup/config data into 3 distinct categories.
- Data that is consistent across any implementation.
- Data that differs consistently between implementations.
- Data that differs with each implementation.
For example, if every site that was deployed used the SAME standardized Chart of Accounts this would be a category 1 XMLPort. On the other hand, each site might have a different set of Locations that they utilize that include a standardized site ID as a naming construct (200-STOCK, 200-RETURNS etc.), which would denote this as being a category 2 XMLPort. For category 3, it would be data that is essential to the setup of a system but is really unique for each and every customer site. The only reason you’d store this as an XMLPort is if you had a base template you wanted populated to save you some time on initial setup when you start configuring it for that particular customer.
Why differentiate? So that later on, if you decide to do some fancy PowerShell work on automating database preparation for each of your next 100 customer deployments, you can run import/export processes on these XMLPorts in distinct groups very easily, purely by the object ranges you assign them. More importantly, you can perform some data transformation on the ones that may change consistently with each deployment. Remember our Location example? It’s easy to use PowerShell to do some find/replace work on the data files before importing them.
7000 ==> 7001
7000-RETURNS ==> 7001-RETURNS
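The post mentions PowerShell for this find/replace work; as a rough sketch of the idea (shown here in Python purely for illustration, with a made-up fragment of data), the renumbering might look like this:

```python
import re

def renumber_site(xml_text: str, old_id: str, new_id: str) -> str:
    """Replace a site ID prefix (e.g. 7000 -> 7001) in an exported XML
    data file, matching whole tokens like 7000 or 7000-RETURNS so that
    unrelated numbers such as 17000 are left alone."""
    return re.sub(r"\b" + re.escape(old_id) + r"\b", new_id, xml_text)

# Example: transform a category 2 Location data file before import.
sample = "<Code>7000-RETURNS</Code><Code>7000</Code><Code>17000</Code>"
print(renumber_site(sample, "7000", "7001"))
# -> <Code>7001-RETURNS</Code><Code>7001</Code><Code>17000</Code>
```

The word-boundary match is the important part; a blind string replace would also mangle values that merely contain the old site ID.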
Here’s an example of what this looks like within the data file.
Step 2: Generate the XMLPort Objects & Data Files
Now that you’ve identified the list of things you’d like to extract from the database, we can move on to the fun stuff. Building XMLPorts manually can be overly tedious, and we are lazy. Since we don’t have an intern to give this task to, we wrote a CodeUnit that generates the XMLPort objects for you: simply point it at the table you are interested in and give it the object number to save the new port as. Let’s be honest, if we had an intern they’d probably be smarter than us and would have written an even better way to do this! 😉
Special Note: By default XMLPorts export as UTF-16, or at least on our system they did. In order to get the data files to show up properly with default settings in Kiln, we had to make sure that the XMLPort objects created by XMLPortGen are encoded as UTF-8.
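To illustrate the encoding issue (this is a hedged sketch of the general technique, not the actual XMLPortGen code): re-encoding an exported data file from UTF-16 to UTF-8, including its XML declaration, could be done along these lines:

```python
import re

def reencode_utf16_to_utf8(path: str) -> None:
    """Rewrite an XMLPort data file from UTF-16 to UTF-8 so that
    diff/review tools (e.g. Kiln's web viewer) render it as text."""
    with open(path, "r", encoding="utf-16") as f:
        text = f.read()
    # Fix the declaration so it matches the new byte encoding.
    text = re.sub(r'encoding="UTF-16"', 'encoding="UTF-8"', text,
                  flags=re.IGNORECASE)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
```

Reading with `encoding="utf-16"` handles the byte-order mark automatically, and writing back as UTF-8 produces files that diff cleanly line by line.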
Here’s a download link to it: C50099
Warning: Use at your own risk! Never test in production unless you are wearing a red cape!
Step 3: Generate XML Files
After you’ve got all of the XMLPorts created and imported into your database, you simply need to generate the XML data files by running each port individually.
Note: Make sure that you save the XML data files into a folder that is being managed by your source control system.
If you are a keener, it wouldn’t take much to write some code that would run all ports given a specific object range. We’ve got a CodeUnit that does this as well. If there is interest I can dig that up and post it here. Just let me know.
So now we’ve got our XML data files exported and sitting in the right spot. Our next step is to commit them to our source code control system. In this example, using Kiln, you can simply right click anywhere in the folder and select Hg Commit…
This brings us to the TortoiseHg Commit window. This is a tool that allows you to work directly with source control. See my series on source control with Dynamics NAV if you want a more in depth overview of the tools being shown here.
In the window on the left, we see a list of all the XML files we just saved to this directory. On the upper right, we have a window where we enter an intelligent message so that when other developers pull this change down onto their systems, they can tell what was done without having to read through the files or view the file diff. Lastly, the window on the bottom right provides the contents of each file for quick reference. Since this is the first time these XML files have been put into source control, there is no difference analysis done.
On subsequent commits as changes are made to the data, you’ll see a concise view of exactly what was changed in each file. Hopefully you are starting to see why this would be exceptionally useful in tracking configuration changes as they are made.
Above is an example of the commit window when making a change to an XML data file that’s in source control. As you can see, the original line and the newly updated line are both clearly shown. This is the type of history you can see over the entire lifetime of changes to any individual file being tracked, no different than source code.
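The red/green line view in the commit window is a standard unified diff; any diff tool produces the same picture. As a quick illustration (using Python's difflib here, nothing Kiln-specific):

```python
import difflib

# A one-line change to a hypothetical exported data file.
before = ["<LocationCode>7000</LocationCode>"]
after  = ["<LocationCode>7001</LocationCode>"]

# unified_diff marks removed lines with '-' and added lines with '+',
# which is what the commit window renders in red and green.
diff = difflib.unified_diff(before, after, fromfile="old", tofile="new",
                            lineterm="")
print("\n".join(diff))
```

Running this shows the original line prefixed with `-` and the updated line prefixed with `+`, exactly the kind of history described above.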
If you were simply updating an existing XML data file you would do the following:
- Modify the setup/config data in NAV.
- Run the XMLPort object that is mapped to the table that stores the modified setup/config data.
- Save/overwrite the existing data file in your folder that is managed by source control.
- Commit the change to source control for the XML data file.
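Steps 3 and 4 above can be partially automated. As a rough sketch (the folder layout here is hypothetical, and the actual commit would still be an `hg commit` in your tool of choice), a small helper could overwrite the tracked copy and tell you whether a commit is now needed:

```python
import shutil
from pathlib import Path

def update_tracked_file(exported: Path, repo_dir: Path) -> bool:
    """Copy a freshly exported XML data file over the copy in the
    source-controlled folder. Returns True if the content changed,
    i.e. a commit is now needed."""
    target = repo_dir / exported.name
    changed = (not target.exists()
               or target.read_bytes() != exported.read_bytes())
    if changed:
        shutil.copy2(exported, target)
    return changed
```

If the function returns False, the export matched what was already tracked and no commit is required.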
Additional Points to Consider
As you track your setup data with source control, you’ll need to rely on the fact that as an individual developer (or a team) you are thinking about setup data changes when you perform any customization. This is a manual process, so if you update setup and configuration in a table, nothing is going to remind you to dump the XML data file and update the repository accordingly!
To assist with gentle reminders to developers making setup/config changes, we added some intelligence to our in-house source control tool. Basically, if a developer makes a table schema change, it realizes this and validates that the incoming commit includes the table object, XMLPort object and corresponding XML data file. If it doesn’t it warns the developer through the tool that they should double check and make sure that they add these items to the commit if they are in fact required.
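The core of such a check is simple. As a hedged sketch (the filename conventions below, `TABnnnnn.txt` / `XMLnnnnn.txt` / `TABnnnnn.xml`, are illustrative assumptions, not our actual tool's rules), a commit gate might look like this:

```python
import re

def missing_companions(changed_files: list[str]) -> list[str]:
    """Given the filenames in an incoming commit, warn about table
    objects (TABnnnnn.txt) that are not accompanied by a matching
    XMLPort object (XMLnnnnn.txt) and XML data file (TABnnnnn.xml)."""
    names = set(changed_files)
    warnings = []
    for f in changed_files:
        m = re.fullmatch(r"TAB(\d+)\.txt", f)
        if m:
            # Companion names are illustrative, not a NAV standard.
            for companion in (f"XML{m.group(1)}.txt",
                              f"TAB{m.group(1)}.xml"):
                if companion not in names:
                    warnings.append(
                        f"{f}: expected {companion} in this commit")
    return warnings
```

The tool would surface these as warnings rather than hard failures, since not every table change actually requires a data export.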
Great series from Ryan Erb on integrating Google Analytics with Microsoft Dynamics NAV. A nice way to use a proven and mature platform to do some page-action usage reporting. Check it out.
Home Hardware Stores Limited adopts Microsoft Dynamics NAV as a platform for in-store operations – MSDynamicsWorld.com
Jason Down, a colleague of mine just had an interview published with MSDynamicsWorld on his work with C# and Dynamics NAV 2013 R2. Way to go Jason! If you haven’t been there already and are interested in learning a ton about C# and NAV, check out his blog!
About four years ago, Home Hardware Stores Limited partnered with LS Retail to develop a Dynamics NAV solution that also gave them an integrated point of sale system. They worked together to create a customized system that Down’s team now manages and builds on, but that LS Retail still collaborates on.
In part 2 of this series on source control I will be doing another overview covering the repository layout and the tools used to work with source control.
I’ll be sticking to the basics in this post so we can focus on what this would look like right out of the box if you were to set up FogBugz and Kiln and immediately start using them both for NAV development with no other tools. The focus will be on how we set things up and the look and feel of each tool. In later posts I will step through simple changes and processes at a more detailed level. There is a lot of background required to get up to speed with source control if you’ve never used it before. For those of you who have, this is likely a boring review with some hopefully interesting pictures that show how we married NAV and source control together effectively.
This post will skip over some initial setup steps like creating a repository and checking it in for the first time for the sake of brevity. Don’t fret as these are all well documented as part of the Kiln product and/or the Mercurial tutorial article that I posted in the first part of this series.
Since I will be using some terminology that is unfamiliar to those who haven’t used source control systems, I’ll very quickly go over some of the most common terms and what they mean.
Repository – This is a collection of files and folders that is being tracked by source control. Local repository refers to the repository that is on your machine. Remote repository refers to a repository sitting on a server somewhere.
Commit – This is what you do when you want to make your source control system aware of the changes you’ve made to a file or set of files.
Push – Sending changes from your local repository to a remote repository.
Pull – Retrieving changes made by other people from the remote repository into your local repository.
Changeset – A group of code changes to 1 or more files. This is basically a delta with some extra metadata attached (like a message explaining the change etc.).
Kiln vs. Mercurial – Mercurial is an open source distributed version control system. Kiln is a product that Fog Creek Software built on top of Mercurial to enhance its functionality and integrate it with FogBugz.
First, we took the entire set of Dynamics NAV objects and dumped them out of the database into text format. As we all know there are a ton of objects, so we needed to place them in some sort of organized structure. We created a folder for each type of object. You can store and name them however you wish.
Note: The .hg folder and corresponding .hgignore and .hgtags files are added as part of Kiln and are automatically generated when items are being “tracked” by source control. The overlay icons don’t show up until you’ve added your items to source control; they signify each item’s “state”: a green checkmark means good, and a red exclamation mark means changes have been made that aren’t committed yet.
Inside of each folder you can see that we’ve got a number of text objects.
And if we open up one of these objects we can see NAV code.
If you made a change to one of these text files and saved those changes, Kiln would pick that up right away and mark the file as being different. As you can see below, I added some text to the version tag of this object and saved my changes. Notice the icon displayed on the C50002.txt file. If I were to open the file again, remove the text I added, and save my changes, the icon would go back to being a green check mark within a few seconds.
The TortoiseHg Workbench
The workbench is the tool you use to work with Kiln. It allows you to commit your changes as well as push to and pull from remote repositories.
In the image above what you see is a running history of all the changes we’ve made in this particular repository. Under the Description column we put a message to describe each commit and use special markup “Case 1234:” which attaches the modifications to the source code right back to the business case that was logged for the change.
If you look in the lower right portion of the graphic you’ll see that there is NAV code listed. The lines that have been removed are red and the lines that have been added are green. Kiln keeps track of the changes to every single line of code over time.
FogBugz & Kiln Integration
I keep mentioning how our source code changes are linked to our web-based case management tool, FogBugz. This all happens as part of the built-in integration that FogBugz has with Kiln.
This is a case from FogBugz that has 2 changesets associated with it. You can see this via the Kiln Changesets menu. From here I can drill down even further and explore the exact source code changes that were made very easily. Since we track bugs, features and other development tasks for various projects in FogBugz this simplifies my work when I’m trying to track down what changes we made as part of a bug fix or major feature release when taking support calls or doing investigative work.
That’s it for this post. I want to keep them short and concise. In the next post, I’ll do an end-to-end simple change to a NAV object and commit that change to our repository.
This is an older post but nevertheless a good one for those looking at perhaps creating a build server for NAV development. Ours currently runs on only a single instance since we are only doing nightly builds right now. Going forward we want to start doing them a little more frequently, so this is in the cards for us.
At first glance, some intelligence needs to be built in to handle dependencies between objects, so you need to be smart about how you divide things up.
Neat little workaround concept though, and I thought I’d share!