Building smart and self-configurable Windows Services

Hi Community,

This post is about a question that many of us, architects and developers alike, tend to ask ourselves: how can we build reliable yet flexible software that adapts itself on the fly to configuration changes? There are a few approaches to this, but today I’d like to share a recent implementation I architected and largely built for one of my clients. My customer’s requirements comprised a web front-end built on ASP.NET MVC in conjunction with jQuery and JavaScript. Since my customer did not have any web services, and because this solution was delivered as an MVP, the orchestration and integration with the backend (database and Tableau) was the responsibility of a Windows Service (daemon) that can operate in two modes: 1) file operations manager and 2) “message” broker. At the same time, the daemon must be able to identify changes made to its configuration, stop itself, apply the changes and restart itself, all without any user interaction. Sounds cool, right?

To understand how we can accomplish this, it’s important to describe the foundational aspect that drives this behaviour: the daemon’s configuration.

<?xml version="1.0" encoding="utf-8" ?>
<ExecutionDaemon FileDelimiter="~" xmlns="">
    <BrokerRole LocalFolderPath="" />
    <FileManagerRole LocalFolderPath="" RemoteFolderPath="" FileExtensionToBeProcessed="*.csv;*.xml"/>
    <ExternalToolPostExecution ImagePath="" Arguments="" DeleteFileOnCompletion="true" />
</ExecutionDaemon>


To support this functionality, I created a few classes that inherit from CustomConfiguration and ConfigurationElement. Config files can be written to and read from at runtime, but I needed something in place to handle that event, and that’s where FileSystemWatcher comes in. The code responsible for initializing the daemon and for reloading and applying configuration changes is listed below.
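As a sketch of one of those configuration classes — the element and attribute names follow the XML above, while the class name and the plain ConfigurationElement plumbing are my own assumptions, not the client’s actual code:

```csharp
using System.Configuration;

// Hypothetical sketch: maps the <FileManagerRole .../> element of the daemon's configuration.
public class FileManagerRoleElement : ConfigurationElement {
    [ConfigurationProperty("LocalFolderPath", DefaultValue = "")]
    public string LocalFolderPath => (string)this["LocalFolderPath"];

    [ConfigurationProperty("RemoteFolderPath", DefaultValue = "")]
    public string RemoteFolderPath => (string)this["RemoteFolderPath"];

    // Defaulting to the extensions shown in the sample config above
    [ConfigurationProperty("FileExtensionToBeProcessed", DefaultValue = "*.csv;*.xml")]
    public string FileExtensionToBeProcessed => (string)this["FileExtensionToBeProcessed"];
}
```

The attributed properties let System.Configuration deserialize the attributes of the element automatically, with defaults applied when an attribute is missing.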

/// <summary>
/// Initializes a new instance of the <see cref="ExecutionDaemon" /> class.
/// </summary>
/// <param name="args">The arguments.</param>
public ExecutionDaemon(string[] args) {
    _startArguments = args;
    ServiceName = Constants.ServiceName;
    _components = new System.ComponentModel.Container();
    Bootstrapper.Run(InitializeTypeContainer); // Take care of IoC plumbing and similar
    _configurationReader = TypeContainer.Resolve<IConfigurationReader>();
    var svcHomeDir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);

    _configFileMonitor = new FileSystemWatcher(svcHomeDir) {
        EnableRaisingEvents = true,
        Filter = Constants.ConfigFileExtension,
        NotifyFilter = NotifyFilters.FileName | NotifyFilters.Size
    };

    _configFileMonitor.Changed += (s, e) => ReloadAndApplyConfigChanges();
}



/// <summary>
/// Reloads and applies configuration changes.
/// </summary>
private void ReloadAndApplyConfigChanges() {
    // Body elided in the original listing: stops the service, re-reads the
    // configuration and calls Configure() again.
}








/// <summary>
/// Configures this instance.
/// </summary>
private void Configure() {
    try {
        _isFileManagerMode = _startArguments?.Contains(Constants.FileManagerMode);
        var config = (CustomConfigSectionReader)_configurationReader.Configuration;

        _mainWatcher = new FileSystemWatcher(_isFileManagerMode.HasValue && _isFileManagerMode.Value ?
                                             config.FileManagerRole.LocalFolderPath :
                                             config.BrokerRole.LocalFolderPath) {
            EnableRaisingEvents = true,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.Size
        };

        _mainWatcher.Changed += (s, e) => ProcessFileRequest(s, e);
    } catch (Exception ex) {
        // Exception handling elided in the original listing
    }
}






It’s important to note that I use the NotifyFilter property. The reason is that FileSystemWatcher might “misfire” the Changed event; to prevent this from happening, we instruct the FileSystemWatcher to fire events only when the file name or size (NotifyFilters.FileName | NotifyFilters.Size) has changed. If we make changes to the config file while the service is running, Event Viewer will report those changes (assuming the event log had been cleared in advance).
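Even with NotifyFilter narrowed down, FileSystemWatcher is known to raise Changed more than once for a single save (the content write and the metadata write can arrive as separate notifications). A hypothetical debounce sketch — the class and window value are mine, not part of the service above — that drops duplicates arriving within a short window:

```csharp
using System;
using System.Collections.Generic;

// Tracks the last time each file path was handled and suppresses
// duplicate notifications that arrive within the debounce window.
public class ChangeDebouncer {
    private readonly Dictionary<string, DateTime> _lastSeen =
        new Dictionary<string, DateTime>(StringComparer.OrdinalIgnoreCase);
    private readonly TimeSpan _window;

    public ChangeDebouncer(TimeSpan window) {
        _window = window;
    }

    // Returns true only for the first event seen for a path within the window.
    public bool ShouldProcess(string path, DateTime now) {
        if (_lastSeen.TryGetValue(path, out var last) && (now - last) < _window)
            return false;

        _lastSeen[path] = now;
        return true;
    }
}
```

Inside the Changed handler this would become `if (_debouncer.ShouldProcess(e.FullPath, DateTime.UtcNow)) ReloadAndApplyConfigChanges();`.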






As mentioned before, this Windows Service or daemon has a dual execution mode: it is the same executable, but depending on the arguments it behaves and operates differently. The values entered in the service property page (Start parameters) are not persisted, which makes me wonder why the field is in the UI at all if it doesn’t do anything.




But anyway, we can set these parameters and make them persistent if we add them to the registry under the following key – HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ExecutionDaemon
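One way to do this is to append the arguments to the service’s ImagePath value under that key. As a sketch from an elevated prompt — the executable path is a placeholder and “FileManager” stands for whichever start argument the daemon expects:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\ExecutionDaemon" /v ImagePath /t REG_EXPAND_SZ /d "\"C:\Services\ExecutionDaemon.exe\" FileManager" /f

Arguments embedded in ImagePath are handed to the service’s Main method every time the service starts, unlike the Start parameters field.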




And voila! We have our dual-mode Windows service deployed on two different servers, and customers don’t have to deploy or start/stop anything; they only make changes in the configuration file.





LightIoC – Another lightweight IoC library

Hi Community,

Today’s post is about an IoC library I wrote back in 2013. I called it “LightIoC” because it’s self-contained in a single assembly file. Over the years I have used a few IoC libraries; some are easy to use, others a bit bulkier or with more dependent assemblies. For this one I added a few custom features, as listed below:

  • Pre-jitting of assemblies to improve start-up performance. This is configuration driven
  • Discovery of assemblies at start-up
  • Configuration and registration of types via code or config files
  • Lifespan of objects — in other words, objects that are required to do something and are then self-disposed

<!-- Sample config file -->
<configuration>
  <configSections>
    <section name="LightIoC" allowLocation="false" allowDefinition="Everywhere"
             type="LightIoC.Configuration.ConfigReader, LightIoC, Version=, Culture=neutral, PublicKeyToken=null"/>
  </configSections>

  <LightIoC>
    <Pre-Jitting enabled="true"/>

    <register name="IDispatchMessageInspector" />
    <register name="ITestLib" type="WebAPI.Core.Interfaces.ITestLib" assemblyFQN="WebAPI.Core.Interfaces, Version=, Culture=neutral, PublicKeyToken=23e52edaf0ea7bd4" />
    <register name="IWebApiLogger" type="WebAPI.Core.Interfaces.IWebApiLogger" assemblyFQN="WebAPI.Core.Interfaces, Version=, Culture=neutral, PublicKeyToken=23e52edaf0ea7bd4" />
    <register name="IWebApiPerfCounter" type="WebAPI.Core.Interfaces.IWebApiPerfCounter" assemblyFQN="WebAPI.Core.Interfaces, Version=, Culture=neutral, PublicKeyToken=23e52edaf0ea7bd4" />

    <map abstraction="IWebApiLogger" to="DefaultLogger" instanceRequired="true" />
    <map abstraction="IWebApiPerfCounter" to="WebApiPerfCounter" instanceRequired="true" />
    <map abstraction="IDispatchMessageInspector" to="MessageInspector" instanceRequired="true" />
    <map abstraction="ITestLib" to="ThisIsADestroyableClass" instanceRequired="true" lifeSpan="300" />
  </LightIoC>
</configuration>




Alternatively, I could have registered my types in code, as shown below. Type registration accepts either a Type or the type’s name as a string; in the latter case, validation occurs when the container is initialized.

public void InitializeTypeContainer<T>() where T : System.ComponentModel.Component {
    if (!TypeContainerInitialized) {
        TypeContainer.Current.RegisterType<ILogger>("Logger", true);
        TypeContainer.Current.RegisterType<IPerfCounters>("PerfCounters", true);
        TypeContainer.Current.RegisterType<IServiceInformation>("CustomServiceBase", true);
        TypeContainer.Current.RegisterType<IBaseService<T>>(typeof(T).Name, true);
    }
}





I hope you find this useful. Source code is available here





Binary Palindrome check in C#

Hi Community,

This post is about how to check whether the binary representation of a byte is a palindrome. I recently received an email from a fellow developer requesting some help with this, and after a bit of research I couldn’t find any good example written in C#, so I decided to write one and share it with you. As with any other programming task, and faithful to the phrase “There’s more than one way to skin a cat”, there are at least a couple of ways to do this: via string operations or via bit operations.



For the sake of clarity and completeness, I have written a code snippet that demonstrates both approaches, implemented as extension methods for the byte type, as shown below.


public static class Binary {
    /// <summary>
    /// Gets the significant bits of a byte as a string (most significant bit first).
    /// </summary>
    /// <param name="a">The byte to convert.</param>
    /// <returns>The bit string, or an empty string when <paramref name="a"/> is zero.</returns>
    public static string GetBits(this byte a) {
        var retval = string.Empty;

        if (a > 0) {
            var buffer = new StringBuilder();
            var bitArray = new BitArray(new[] { a });

            for (var index = bitArray.Count - 1; index >= 0; index--)
                buffer.Append(bitArray[index] ? "1" : "0");

            // Strip the leading zeros so only the significant bits remain
            var bits = buffer.ToString();
            retval = bits.Substring(bits.IndexOf("1", StringComparison.OrdinalIgnoreCase));
        }

        return retval;
    }

    /// <summary>
    /// Determines whether the byte's binary representation is a palindrome, using string operations.
    /// </summary>
    /// <param name="a">The byte to check.</param>
    /// <returns>true when the bit string reads the same in both directions.</returns>
    public static bool IsPalindromeWithStringOp(this byte a) {
        var retval = false;
        var bitStr = GetBits(a);

        if (!string.IsNullOrEmpty(bitStr)) {
            var reversed = new string(bitStr.Reverse().ToArray());
            retval = reversed == bitStr;
        }

        return retval;
    }

    /// <summary>
    /// Determines whether the byte's binary representation is a palindrome, using bit operations.
    /// </summary>
    /// <param name="a">The byte to check.</param>
    /// <returns>true when the reversed significant bits equal the original value.</returns>
    public static bool IsPalindromeWithBitOps(this byte a) {
        var retval = false;

        if (a > 0) {
            var leftShift = 0;
            var tempValue = (int)a;

            // Build the reversed bit pattern of the significant bits
            while (tempValue != 0) {
                leftShift = leftShift << 1;
                var bitCheck = tempValue & 1;
                tempValue = tempValue >> 1;
                leftShift = leftShift | bitCheck;
            }

            // XOR is zero only when the reversed pattern matches the original
            retval = (leftShift ^ a) == 0;
        }

        return retval;
    }
}



Both methods are invoked as shown below

var values = new byte[] { 9, 98, 17 };


Array.ForEach(values, b => Console.WriteLine($"Number {b} has the following bits {b.GetBits()} - IsPalindromeWithStringOp:{b.IsPalindromeWithStringOp()} |IsPalindromeWithBitOps:{b.IsPalindromeWithBitOps()} "));




to check that they return the same expected result (9 → 1001 and 17 → 10001 are palindromes; 98 → 1100010 is not).






Remove unwanted HTTP response headers and enable HSTS on IIS

Hi Community,

I don’t consider myself a security specialist (or a specialist at anything, for that matter; a generalist instead). I am currently architecting a web solution for one of my clients, and they came to me with a requirement to “review, assess and rectify security vulnerabilities” in an existing web application. It’s not what I usually do, but the new application will integrate with the existing one, so I agreed to help with this requirement; doing so would also help me understand the existing solution. The vulnerabilities I had to address were:

  • Remove common IIS/ASP.NET headers
  • Enable HTTP Strict Transport Security (HSTS)

To get started, I needed to download the “URL Rewrite” module for IIS and then create a few outbound rules. The resulting web.config file was then checked into TFS, ready to be used when deploying to a different environment (e.g. UAT). Below is the generated config file with these rules.




<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <outboundRules>
        <rule name="Add Strict-Transport-Security when HTTPS" enabled="true">
          <match serverVariable="RESPONSE_Strict_Transport_Security" pattern=".*" />
          <conditions>
            <add input="{HTTPS}" pattern="on" ignoreCase="true" />
          </conditions>
          <action type="Rewrite" value="max-age=31536000" />
        </rule>
        <rule name="Remove Server">
          <match serverVariable="RESPONSE_SERVER" pattern=".*" />
          <action type="Rewrite" value="IIS" />
        </rule>
        <rule name="Remove X-Powered-By">
          <match serverVariable="RESPONSE_X-POWERED-BY" pattern=".*" />
          <action type="Rewrite" />
        </rule>
        <rule name="Remove ASPNET Version">
          <match serverVariable="RESPONSE_X-ASPNET-VERSION" pattern=".*" />
          <action type="Rewrite" />
        </rule>
        <rule name="Remove ASPNET MVC Version">
          <match serverVariable="RESPONSE_X-ASPNETMVC-VERSION" pattern=".*" />
          <action type="Rewrite" />
        </rule>
      </outboundRules>
    </rewrite>
  </system.webServer>
</configuration>







An interesting article on the subject can be found here on MSDN





How to properly sign-out users when session times out on an MVC app using ADFS as authentication mechanism

Hi Community,

Today’s post is about a common issue faced by many web developers when building an MVC web application that uses ADFS as its authentication mechanism. The problem is that sessions may be abandoned by IIS when their time is up, but the MVC application might not even be aware of this fact; therefore, when requesting the same page or navigating to another page, IIS will re-create a session. This can represent a security flaw or risk, because users are not redirected to the login page to re-enter their credentials.

ASP.NET and all its features (Web Forms or MVC) are tightly coupled to IIS, and in most cases, before the “federation” era we are currently in, this was taken care of by leveraging form-based authentication (FBA). But as I mentioned, there is a new player in this picture: ADFS.


ADFS stands for Active Directory Federation Services; it’s a software component developed by Microsoft that can be installed on Windows Server operating systems to provide users with single sign-on access to systems and applications located across organizational boundaries. ADFS uses and relies on claims-based access (CBA) to enforce and maintain application security.




By implementing ADFS, the standard ASP.NET FBA is bypassed, its task delegated to ADFS. Everything else, such as session management (assumed here to be “InProc”), remains the same.

The security issue arises when the session times out but users are never prompted to re-enter their credentials. To make this solution work, we must store a tiny value in a session variable. Remember, MVC shares a lot of functionality with Web Forms, and since storing information in the Session object can cause more problems than it solves, it’s good practice to avoid storing much in it (regardless of whether it’s a Web Forms or MVC application).

We just store one very simple value in the newly created session: its expiration time.


/* Global.asax */

/// <summary>
/// Handles the Start event of the Session control.
/// </summary>
/// <param name="sender">The source of the event.</param>
/// <param name="e">The <see cref="EventArgs"/> instance containing the event data.</param>
protected void Session_Start(object sender, EventArgs e) {
    var currentTime = DateTime.Now;
    var timeOut = Session.Timeout;
    Session["_Expiration_"] = currentTime.AddMinutes(timeOut);
}



/// <summary>
/// Handles the End event of the Session control.
/// </summary>
/// <param name="sender">The source of the event.</param>
/// <param name="e">The <see cref="EventArgs"/> instance containing the event data.</param>
protected void Session_End(object sender, EventArgs e) {
    // No action required when the session ends
}




ASP.NET MVC provides a flexible yet powerful mechanism that allows developers to decorate their controllers and action methods. By implementing a custom action filter and decorating the “BaseController” (or any controller) with it, we can ensure that it executes before any action method within the controller.


/* SessionExpiration.cs */

/// <summary>
/// Filter responsible for signing out the user if the session has expired
/// </summary>
/// <seealso cref="System.Web.Mvc.ActionFilterAttribute" />
public class SessionExpiration : ActionFilterAttribute {
    /// <summary>
    /// Called when [action executing].
    /// </summary>
    /// <param name="filterContext">The filter context.</param>
    public override void OnActionExecuting(ActionExecutingContext filterContext) {
        var ctx = HttpContext.Current;
        var replyUrl = ConfigurationManager.AppSettings["SignOutReply"];
        var encodedReply = WebUtility.HtmlEncode(replyUrl);
        var signoutUrl = ConfigurationManager.AppSettings["FederatedSignOutUrl"];
        var signOut = $"{signoutUrl}?wa=wsignout1.0&wreply={encodedReply}";

        if (ctx.Session != null) {
            // Check whether a new session id was generated
            if (ctx.Session.IsNewSession) {
                // If it's a new session but the session cookie already exists, the previous
                // session must have timed out, so the user is redirected to the sign-out page
                var sessionCookie = ctx.Request.Headers["Cookie"];
                if (!string.IsNullOrEmpty(sessionCookie) &&
                    (sessionCookie.IndexOf("ASP.NET_SessionId", StringComparison.InvariantCultureIgnoreCase) >= 0))
                    ctx.Response.Redirect(signOut);
            }
        }
    }
}








To wire up our custom action filter, we must register it by adding it to the GlobalFilterCollection, otherwise it won’t run.


/* FilterConfig.cs */

public class FilterConfig {
    /// <summary>
    /// Registers the global filters.
    /// </summary>
    /// <param name="filters">The filters.</param>
    public static void RegisterGlobalFilters(GlobalFilterCollection filters) {
        filters.Add(new SessionExpiration());
    }
}




And that’s pretty much it. If the session times out and the user tries to refresh the page or navigate to any other page, they are taken back to the ADFS logon page to re-enter their credentials. We could have made something fancier by adding client-side code and accomplishing the same thing with AJAX, but that’s not the intent or scope of this post.





Internals of C# 6 new features

Hi community,

It has been a very busy start to the year for me, but here I am, as usual, sharing information that you might find useful. Today’s post is about the internals of a couple of new features available in C# 6. The new version of the language introduces the following features (more information can be found on the Roslyn site on CodePlex):

  • Compiler as a Service (Roslyn)
  • Import of static type members into namespace
  • Exception filters
  • Await in catch/finally blocks
  • Auto property initializers
  • Default values for getter-only properties
  • Expression-bodied members
  • Null propagator (Succinct null checking)
  • String interpolation
  • nameof operator
  • Dictionary initializer


In this post, however, I will talk about string interpolation and null propagation. Let’s get started with string interpolation, which can be defined as “the process of evaluating a string literal containing one or more placeholders, yielding a result in which the placeholders are replaced with their corresponding values”. This has been around for a long time in C/C++ and almost any other language, but always through explicitly invoking a function and passing in the expected parameters; a good example of this is the printf function. To illustrate, here is a code snippet in C++ and its output (something similar can be achieved with std::ostream’s operator<<, which is not in scope for this article).


#include "stdafx.h"
#include "stdio.h"
#include "conio.h"

int main() {
    for (auto start = 'a'; start < 123; start++)
        printf("Selected ASCII character %c in uppercase is %c (code: %i)\n", start, start - 32, start);

    return 0;
}


In C# we usually accomplish the same thing by calling the string.Format function, as shown below

Enumerable.Range(0, 7).Select(p => p * 1).ToList().ForEach(q => Trace.WriteLine(string.Format("{0} is {1}{2} day of the week",

              new object[] { (DayOfWeek)q, q + 1, (q == 0 ? "st" : q == 1 ? "nd" : q == 2 ? "rd" : "th") })));


And the output is




So what happens when we use string interpolation in our C# code? To illustrate this, we will refactor our previous code snippet to use “interpolation”


Enumerable.Range(0, 7).Select(p => p * 1).ToList()

    .ForEach(q => Trace.WriteLine($"{(DayOfWeek)q} is {q + 1}{(q == 0 ? "st" : q == 1 ? "nd" : q == 2 ? "rd" : "th")} day of the week"));


The output is the same, but you must be wondering why. The answer to that question can be found in the generated MSIL (depicted below): the same function is called. By prefixing a string with $ (dollar sign) we tell the C# compiler that the string needs to be interpolated, but in reality it is lowered to a call to the same function (string.Format).
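This is easy to verify with a tiny snippet of my own (the names below are mine, not from the example above): under the C# 6 compiler the interpolated form is lowered to a string.Format call, and on any compiler both expressions build the identical string.

```csharp
using System;

public static class InterpolationDemo {
    // The interpolated form: $"..." is compiler syntax, not a runtime feature
    public static string Interpolated(int q) => $"{(DayOfWeek)q} is day {q + 1}";

    // The explicit form the compiler lowers the interpolation to
    public static string Formatted(int q) => string.Format("{0} is day {1}", (DayOfWeek)q, q + 1);
}
```

For example, both `InterpolationDemo.Interpolated(3)` and `InterpolationDemo.Formatted(3)` yield "Wednesday is day 4".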




Now, let’s look at the second feature discussed in this article: null propagation. If you’ve been developing software with .NET (or with any other language that uses pointers or passes information by reference; yes, pretty much everything in .NET is a pointer), it’s very likely that you have seen the pesky NullReferenceException. Its native cousin (C0000005, an access violation) is nothing but memory that cannot be read or written (e.g. an object that has not yet been initialized). Before C# 6 we had to check for nullability explicitly (if the object is not null, then do something). Those days are in the past: with the new null-propagation operator “we’re covered”. Take the following code snippet, for example.
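First, a minimal, self-contained illustration of the operator (the names here are mine): combined with the null-coalescing operator it replaces the classic if-not-null pattern in a single expression.

```csharp
using System;

public static class NullPropagationDemo {
    // s?.Length evaluates s once: when s is null the whole expression is null,
    // otherwise it is s.Length; ?? then supplies the fallback value.
    public static int LengthOrDefault(string s) => s?.Length ?? -1;
}
```

The same shape applies to events: the C# 5 pattern of copying the delegate and checking it for null before invoking collapses to `SomeEvent?.Invoke(this, EventArgs.Empty)`.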


public partial class Form1 : Form {
    protected Invoice CurrentInvoice {
        get; set;
    }

    public Form1() {
        InitializeComponent();
        CurrentInvoice = new Invoice();
    }

    private void button3_Click(object sender, EventArgs e) {
        CurrentInvoice.AddItem(Guid.NewGuid().ToString(), 10);
    }

    private void Form1_Load(object sender, EventArgs e) {
        CurrentInvoice.OnReachedThreshold += (a, b) => listBox1.Items.Add(((Invoice)a).InvoiceId);
    }
}

public class Invoice {
    public string InvoiceId {
        get; set;
    }

    public int ItemCountThreshold {
        get; private set;
    }

    protected Dictionary<Guid, KeyValuePair<string, decimal>> Items {
        get;
        private set;
    }

    public decimal Total {
        get {
            return Items.Values.Sum(p => p.Value);
        }
    }

    public event EventHandler OnReachedThreshold;

    public void AddItem(string description, decimal value) {
        Items.Add(Guid.NewGuid(), new KeyValuePair<string, decimal>(description, value));
        if (Items.Count == 5)
            OnReachedThreshold?.Invoke(this, new EventArgs()); // Using null-propagation operator instead of checking for nullability myself
    }

    public Invoice() {
        InvoiceId = $"Invoice:{Guid.NewGuid()} - Created:{DateTime.Now}";
        Items = new Dictionary<Guid, KeyValuePair<string, decimal>>();
    }

    public Invoice(int threshold = 5) : this() {
        ItemCountThreshold = threshold;
    }
}




There’s an event that fires to notify the UI about a threshold being reached. If this event had no subscribers, it would definitely throw a NullReferenceException because the delegate hasn’t been set, but C# 6 once again makes it easy for developers and takes care of that, as shown in the following MSIL (you can find the code here).



The nullability check is taken care of by the compiler, but once again the generated MSIL is pretty much the same as if we had done the check ourselves.


Happy coding,



NDepend 6.0

NDepend 6.0 was released a few months ago, and due to my workload I have been unable to blog about it until now. This version contains a ton of new features. I have been working with this product since 2009, I think (sorry, I cannot remember the version back then), and I must say that with every new version the product gets better. It has been two and a bit years since version 5 came out, so many of you must be wondering: what’s new in this version?

The first new feature you’ll notice is right there on the welcome screen and the additional integrations they have added to the product. It used to be just “Install Visual Studio Extension” that has been available since version 3, but now it integrates into TFS, TeamCity, SonarQube and Reflector.




NDepend is a very powerful tool, and if you haven’t used it before, fear not. It provides a pseudo-wizard that lets developers select what to do with the source code to be analyzed.


NDepend beginner dialog


The resulting report (post-analysis) has improved a lot by adding “how to fix” information to rule failures.




They have also added support for async/await methods, especially in the code coverage analysis. The metrics view can now be resized, and a “heat map” colour style can be applied to it.



Rules can now be exported and shared, which I must say is a pretty handy feature when there are several developers doing code analysis. This feature provides the flexibility of creating a rule file outside the project file.




Once you have created a custom rules file, it can be referenced from your project. It is preferable to use relative paths so it works the same on your development machine and the build server.




In summary, it’s a great tool and a must-have for every .NET developer’s toolbox; it keeps getting better and better. If you need to analyze C++ code, they also have CppDepend, which is awesome as well. I must publicly thank Patrick for always providing me with a personal license.



Microsoft MVP Award recipient for 2016

Hi community,

It gives me great joy to announce that Microsoft has awarded me MVP recognition once again, this time in the “Visual Studio and Development Technologies” category. The MVP Program has been around for more than two decades, and Microsoft have recently made changes to it; therefore, I can now contribute to anything related to Visual Studio, not only Visual C++. This change is beneficial for everyone, and it reflects something I’ve been doing for a long time, posting about interop between managed and native code as well as different technologies. This is my eleventh consecutive year as an MVP, and during this time I’ve been able to see how Microsoft and their technologies have changed, for the better I would say. Their approach towards FOSS and cross-platform development has made them a more open organization than ever before.

MVP 2016


These days I spend my time mostly working on architecture design, building frameworks and solutions for customers, and R&D, which I find very appealing because I can do cool stuff like building native code that leverages the AWS C++ SDK with Visual Studio, CLion or Qt Creator, code that I can deploy to Windows, Linux or a Raspberry Pi without any issues. That’s one of C++’s main strengths: portability and efficiency.
None of my successes as a professional or an individual could have been possible without God and his son Jesus, my family (my wife and two daughters), Microsoft and you, because you all keep me motivated to learn and improve every day.

Thank-you very much all.


Integration between Qt application running on Linux and Microsoft SQL Server

I love computers, technology and software, that’s for sure. I fell in love with them at a very early age and I still have the passion.

To me, making one system talk to, communicate and interact with another is always fun and challenging, but more importantly it’s rewarding.

I have always been a Microsoft dude, but at the same time I have always been a supporter of FOSS and everything it has to offer. On a separate note, but in the same context (kind of), I am passionate about standards, patterns and languages.

Talking of languages… People always come to me asking about C++ and their misconceptions about the language. C++ allows me to do what other languages can’t, it’s that simple. I am afraid that the way it’s taught at universities differs big time from modern C++ features and best practices, but I’m not the best person to make that judgment and that’s not the intent of this post anyway.

From now onwards, I will start publishing articles on Qt, FOSS and how to integrate them into the Microsoft Eco-System. Today, I will start describing how to query a SQL Server Database from Ubuntu Linux via a Qt application.

The beauty of standards is that no matter who the consumer is, the expected outcome will always be the same (e.g.: Surfing the web from any browser regardless of the operating system, for instance).

Qt natively provides support for data aware applications, therefore developers can greatly benefit from this by implementing MVC based applications. MVC support was introduced in Qt 4. In this post, however I will not discuss any MVC related topic but a simple application running on Ubuntu that connects and pulls data from SQL Server. In order to do so, some artifacts and configuration steps are required in Linux.


On Linux we need the unixODBC driver, the FreeTDS libraries and, for debugging purposes, Wireshark, to check whether our requests are actually being sent over the network. We must also configure Wireshark to enable capturing on the network cards.

angel@ubuntu:~$ sudo apt-get install wireshark

angel@ubuntu:~$ sudo groupadd wireshark

angel@ubuntu:~$ sudo usermod -a -G wireshark $USER

angel@ubuntu:~$ sudo chgrp wireshark /usr/bin/dumpcap

angel@ubuntu:~$ sudo setcap cap_net_raw,cap_net_admin=eip /usr/bin/dumpcap

angel@ubuntu:~$ sudo apt-get -y install freetds-bin tdsodbc unixodbc





Once we have all the prerequisites installed, we have to make changes to odbc.ini, odbcinst.ini and freetds.conf.
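As a sketch of those changes — the section names, host address, driver path and database are placeholders, not my client’s actual values — the relevant entries look roughly like this:

# /etc/freetds/freetds.conf – placeholder server entry
[mssql]
    host = 192.168.0.10
    port = 1433
    tds version = 7.2
    client charset = UTF-8

# /etc/odbcinst.ini – registers the FreeTDS ODBC driver (path varies by distro)
[FreeTDS]
    Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so

# /etc/odbc.ini – DSN that points at the FreeTDS server entry above
[MSSQLDSN]
    Driver = FreeTDS
    Servername = mssql
    Database = AdventureWorks2012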










Qt Creator uses odbc.ini and odbcinst.ini files that are not the ones we have just changed, so we either create symbolic links to them or simply copy them to the expected folder.

angel@ubuntu:~$ sudo cp /etc/odbcinst.ini /usr/local/etc/odbcinst.ini

angel@ubuntu:~$ sudo cp /etc/odbc.ini /usr/local/etc/odbc.ini


To test that our Linux environment has been properly configured and that we can effectively connect to and query SQL Server, we can use the tsql utility that is part of FreeTDS. It’s important to specify Unicode (UTF-8) as the client charset in freetds.conf, otherwise the client won’t be able to understand some of the data returned by SQL Server.



The issue with UNICODE not being properly configured comes up in Wireshark as Unknown Packet Type.




Back to my introduction on standards: TDS is an application-layer protocol that was initially designed and developed by Sybase; it has since been enhanced and maintained by Microsoft for SQL Server. The protocol definition can be found here


We have described the prerequisites and configuration steps, so now let’s move on to the sample application. As mentioned earlier, we will not make this an MVC app where we can bind the model directly to the view (a native Qt widget); instead we’ll pull data with a query and populate a DTO, a vector (collection) of EmployeeDto objects.



std::vector<EmployeeDto> EmployeeDal::GetPeople() {
    std::vector<EmployeeDto> retval;
    auto db = Database_get();

    if (db.isOpen()) { // condition reconstructed: the original check was lost
        QString queryStr("Select distinct top 100 'ID'=A.BusinessEntityID, 'FirstName'=A.FirstName, 'LastName'=A.LastName ");
        queryStr.append("From [AdventureWorks2012].[Person].[Person] A ");
        queryStr.append("Order by LastName");
        QSqlQueryModel query;
        query.setQuery(queryStr, db);

        auto count = query.rowCount();

        for (auto r = 0; r < (count > 0 ? count : RowCount); r++) {
            auto record = query.record(r);

            if (!record.isEmpty()) {
                EmployeeDto newRecord;
                // Mapping of the record's fields into newRecord and the
                // push_back into retval elided in the original listing
            }
        }
    }

    return retval;
}


The way Qt handles events might resemble .NET's, but they're quite different: Qt has the concept of Signals and Slots, which makes it easy for developers to raise events that notify consumers something has occurred and needs to be handled. The code snippet below calls our DAL, which in turn pulls data from SQL Server. Qt builds with whatever C++ toolchain is available; on Windows it uses MSVC. On my Windows development machine I have Visual Studio, Qt Creator and CLion, while in my Linux environment I only have Qt Creator and CLion and mainly use GCC's C++ compiler. Once again, the beauty of standards is that a project I compile on Windows can also be compiled in a different environment, so I can write C++ with new features (e.g., lambdas) and benefit from whichever environment I'm using.

void MainWindow::on_btnRunQuery_clicked() {
    auto index = 0;
    EmployeeDal dalObj;
    auto results = dalObj.GetPeople();
    auto lstView = findChild<QTableWidget*>("lstQueryResult");

    if (results.size() > 0 && lstView != nullptr) {
        // Truncated in the original listing; the table widget presumably
        // gets sized before rows are inserted.
        lstView->setRowCount(static_cast<int>(results.size()));

        std::for_each(results.begin(), results.end(), [&](EmployeeDto& employee) {
            lstView->setItem(index, 0, new QTableWidgetItem(QString::number(employee.Id_get())));
            lstView->setItem(index, 1, new QTableWidgetItem(employee.FirstName_get()));
            lstView->setItem(index, 2, new QTableWidgetItem(employee.LastName_get()));
            index++;
        });
    }
}

void MainWindow::on_btnClose_clicked() {
    auto code = [&]() { close(); };
    Messenger("Are you sure you want to quit?", this, code);  // MainWindow derives from QMainWindow, so no cast is needed
}



Lambda expressions in C++ allow us to pass a functor as a parameter, or we can use the std::function class template, a general-purpose polymorphic function wrapper (the equivalent of Action or Func in C#)

void MainWindow::SomeFunction(std::function<bool(void)>& ptr) {
    if (ptr != nullptr)  // an empty std::function compares equal to nullptr
        ptr();           // body truncated in the original; invoking the wrapper is the natural completion
}

void MainWindow::on_Something_triggered() {
    std::function<bool(void)> ptr = std::bind(&MainWindow::DoSomething, this);
    SomeFunction(ptr);
}

In our example I have a template function called “Messenger” which displays a message box and executes the lambda passed as a parameter when the user confirms.

#include <QMessageBox>

template<class T>
void Messenger(const QString& text, const QMainWindow* window, T&& functor) {
    QMessageBox msgBox(window->parentWidget());
    msgBox.setText(text);
    msgBox.setStandardButtons(QMessageBox::Yes | QMessageBox::No);

    if (msgBox.exec() == QMessageBox::Yes)
        functor();  // truncated in the original; running the functor on Yes matches the described behaviour
}

The images depicted below correspond to the application running and the MessageBox displayed by invoking our template function.



If we start a trace in SQL Server Profiler we can see the request that has been received from the application




Sample demo source code here





Whatis Utility for Windows

Hi Community,

I’ve just come back to the office after rolling off an engagement, so this week on the bench I’ve been practicing and studying some Qt for my upcoming certification and doing a few things around Raspberry Pi + Kinect + OpenKinect on Raspbian. At the same time, I also built a utility that’s been missing in Windows but available in any other *nix operating system: the whatis command, which I find very useful.

whatis displays the information stored in manual page descriptions, so I thought I’d provide similar functionality. For storage I chose ESENT, which is built into Windows. ESENT is a NoSQL ISAM database primarily for C/C++ developers that can also be used from managed code (ESENT Managed Interop on CodePlex, for example).

My version of whatis stores, in an ESENT database, information about any binary (DLL or EXE) that contains a VERSIONINFO resource. This information is stored the first time the utility is asked about a file; subsequent calls retrieve it from the database. The solution contains a few classes, EsentCore being the most important because it acts as the DAL for ESENT. If the database is deleted, don’t worry: it gets created again.

I thought it’d be a great idea to dissect the solution by first describing the method that creates the table used by whatis

vector<ColumnInfo> EsentCore::GetColumnDefinition() {

    vector<ColumnInfo> retval;


    retval.push_back(ColumnInfo{0, wstring(Pk_Id_Column), JET_COLUMNDEF{

        sizeof(JET_COLUMNDEF), NULL, JET_coltypLong, NULL, NULL, NULL, NULL,

        GetLengthInBytes(4), JET_bitColumnAutoincrement}});


    retval.push_back(ColumnInfo{0, wstring(Name_Column), JET_COLUMNDEF{

        sizeof(JET_COLUMNDEF), NULL, JET_coltypText, NULL, NULL, NULL, NULL,

        GetLengthInBytes(50), JET_bitColumnFixed | JET_bitColumnNotNULL}});


    retval.push_back(ColumnInfo{0, wstring(Location_Column), JET_COLUMNDEF{

        sizeof(JET_COLUMNDEF), NULL, JET_coltypText, NULL, NULL, NULL, NULL,

        GetLengthInBytes(MAX_PATH), JET_bitColumnFixed | JET_bitColumnNotNULL}});


    retval.push_back(ColumnInfo{0, wstring(Description_Column), JET_COLUMNDEF{

        sizeof(JET_COLUMNDEF), NULL, JET_coltypLongText , NULL, NULL, NULL, NULL,

        GetLengthInBytes(MAX_PATH * 5), JET_bitColumnMaybeNull}});


    return retval;
}


It is a simple structure (an auto-increment primary key, file name, file location and file description). Data operations with ESENT have to be performed in the context of a transaction, and an update must be prepared before columns are set; a good example can be seen in the InsertRecord method shown below

bool EsentCore::InsertRecord(const vector<ColumnType>& columns) {
    auto colIndex = 0;
    auto retval = false;
    auto columnInfo = GetColumnIds();
    auto newColumns = make_unique<JET_SETCOLUMN[]>(modifiableColumnCount);

    if (columns.size() > 0) {
        if (SUCCEEDED(JetBeginTransaction(m_sessionId)) && SUCCEEDED(JetPrepareUpdate(m_sessionId, m_tableId, JET_prepInsert))) {
            ZeroMemory(newColumns.get(), sizeof(JET_SETCOLUMN) * modifiableColumnCount);

            for_each(columns.begin(), columns.end(), [&](const ColumnType& column) {
                if (column.ColumnName != Pk_Id_Column && column.ColumnName != CreatedOn_Column) {
                    // The lookup expression was truncated in the original;
                    // presumably the column definition is found by name.
                    auto colInfo = columnInfo[column.ColumnName];
                    newColumns[colIndex] = JET_SETCOLUMN{0};
                    newColumns[colIndex].columnid = colInfo.columnid;
                    auto ptrData = make_unique<char[]>(MAX_PATH * 5);

                    if (colInfo.coltyp == JET_coltypText || colInfo.coltyp == JET_coltypLongText) {
                        auto wstr = reinterpret_cast<wchar_t*>(const_cast<void*>(column.ColumnData.pvData));
                        auto size = wcslen(wstr);
                        wcstombs(ptrData.get(), wstr, size);
                        newColumns[colIndex].pvData = malloc(size);
                        memcpy(const_cast<void*>(newColumns[colIndex].pvData), ptrData.get(), size);
                        newColumns[colIndex].cbData = size;
                    }

                    newColumns[colIndex].err = JET_errSuccess;
                    colIndex++;
                }
            });

            retval = SUCCEEDED(JetSetColumns(m_sessionId, m_tableId, newColumns.get(), modifiableColumnCount)) &&
                SUCCEEDED(JetUpdate(m_sessionId, m_tableId, nullptr, NULL, NULL));

            // Free memory
            for (auto nIndex = 0; nIndex < colIndex; nIndex++)
                free(const_cast<void*>(newColumns[nIndex].pvData));

            // Commit or rollback transaction
            if (retval)
                JetCommitTransaction(m_sessionId, NULL);
            else
                JetRollback(m_sessionId, NULL);
        }
    }

    return retval;
}


I’m a generic-programming aficionado, which is why I enjoy using templates in C++ and generics in .NET (and no, they’re not the same, in case you’re wondering, as I had mentioned here). Speaking of which, there’s a template function (similar to a generic method in .NET) that behaves differently based on the input parameter yet returns the same data type

template <typename T>
WhatIsRecord EsentCore::GetRecord(const T* cols) {
    WhatIsRecord retval;
    vector<void*> values;
    JET_RETRIEVECOLUMN* colValues = nullptr;
    vector<ColumnType>* colDef = nullptr;
    strstream name, location, description;
    auto colType = typeid(T) == typeid(JET_RETRIEVECOLUMN);

    if (colType)
        colValues = (JET_RETRIEVECOLUMN*)cols;
    else {
        colDef = (vector<ColumnType>*)cols;

        for (auto index = 0; index < 3; index++) {
            auto colData = colDef->at(index + 1).ColumnData;
            auto ptrData = make_unique<char[]>(MAX_PATH * 5);
            auto wstr = reinterpret_cast<wchar_t*>(const_cast<void*>(colData.pvData));
            auto size = wcslen(wstr);
            wcstombs(ptrData.get(), wstr, size);

            // The destination argument was truncated in the original; the
            // narrowed strings are presumably buffered in the values vector,
            // which is freed at the end of the method.
            values.push_back(malloc(colData.cbData));
            ZeroMemory(values[index], colData.cbData);
            memcpy(values[index], ptrData.get(), size);
        }
    }

    // The second operand of each conditional was truncated in the original;
    // reading from the corresponding values entry is the natural completion.
    name << reinterpret_cast<char*>(const_cast<void*>(colType ? colValues[1].pvData : values[0])) << endl;
    retval.Name = Trim(name);
    location << reinterpret_cast<char*>(const_cast<void*>(colType ? colValues[2].pvData : values[1])) << endl;
    retval.Location = Trim(location);
    description << reinterpret_cast<char*>(const_cast<void*>(colType ? colValues[3].pvData : values[2])) << endl;
    retval.Description = Trim(description);

    for_each(values.begin(), values.end(), [&](void* block) { free(block); });

    return retval;
}


The method above uses the typeid operator, one of the language's mechanisms for providing RTTI. This is the closest we can get to the typeof operator in .NET.

Strings are one of the trickiest things to deal with in native code, something most managed-code developers take for granted because their runtime handles it for them. But fear not: in C++ we have the STL's strings and streams to make our lives easier, as shown in my implementation of the Trim method below

string EsentCore::Trim(strstream& text) {

    string retval;

    string newString(text.str());


    for (auto index = 0; isprint((unsigned char) newString[index]); index++) 

        retval.append(1, newString[index]);


    retval.erase(retval.find_last_not_of(' ') + 1);


    return retval;
}


I’ve also created a project on CodePlex in case you want to use the utility or extend it… and yes, I know it’s got a very funny URL

There’s one tool I use every time I have to work with an ESENT database; it’s free, awesome and lightweight: ESEDatabaseView by NirSoft. The image depicted below shows the objects in my whatis database


as well as some of the contents of WhatIsCatalogue (the table used by whatis), with information loaded by running the PowerShell script described next

The following script retrieves all the executable images in System32, loops through the collection, and with every iteration adds the file information to whatis.db

$files = Get-ChildItem "C:\Windows\System32" -Filter *.exe


$exe = "c:\windows\system32\whatis.exe"

for ($i = 0; $i -lt $files.Count; $i++) {
    & $exe $files[$i].FullName
}

As mentioned earlier, the images below correspond to whatis running on my three different development environments: Ubuntu Linux, OS X El Capitan, and our custom version of the utility running on Windows 10.



