Kendo UI for Angular: Customizing the order of Grouping in the Grid

We recently had a scenario where we wanted our data grouped in a Grid. However, we needed a special order applied to the grouping: alphabetical order did not work for us because of a business requirement. In addition, the dataset we were working with was guaranteed to be fewer than 30 records, so paging would be turned off.

For our example, we will be working with a list of products. Here is a sample model used in the list:

{
  ProductID: 1,
  ProductName: 'Chai Tea',
  SupplierID: 1,
  CategoryID: 1,
  QuantityPerUnit: '10 boxes x 20 bags',
  UnitPrice: 1.49,
  UnitsInStock: 39,
  UnitsOnOrder: 0,
  ReorderLevel: 10,
  Discontinued: false,
  Category: {
    CategoryID: 901,
    CategoryName: 'Teas',
    Description: 'Teas',
    Order: 1,
  },
}

Out of the box, Kendo UI allows you to easily group a dataset in a Kendo UI Grid. You can also sort the groups alphabetically, ascending or descending. Here is a sample of the Angular code.

// Imports for this sample; 'products' is assumed to come from a local sample-data file.
import { Component, OnInit } from '@angular/core';
import { DataResult, GroupDescriptor, process } from '@progress/kendo-data-query';
import { products } from './products';

// Component metadata shown for completeness; the selector and template path are illustrative.
@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
})
export class AppComponent implements OnInit {
  public groups: GroupDescriptor[] = [{ field: 'Category.CategoryName' }];

  public gridView: DataResult;

  public ngOnInit(): void {
    this.loadProducts();
  }

  private loadProducts(): void {
    this.gridView = process(products, { group: this.groups });
  }
}

Here is the Kendo template. Notice that setting groupable to false prevents the user from changing the grouping.

<kendo-grid
  [groupable]="false"
  [data]="gridView"
  [height]="400"
  [group]="groups"
>
  <kendo-grid-column field="ProductID" title="ID" [width]="80"></kendo-grid-column>
  <kendo-grid-column field="ProductName" title="Name" [width]="300"></kendo-grid-column>
  <kendo-grid-column field="UnitPrice" title="Unit Price" [width]="120"></kendo-grid-column>
  <kendo-grid-column field="Category.CategoryName" title="Category">
    <ng-template kendoGridGroupHeaderTemplate let-value="value">
      {{value}}
    </ng-template>
  </kendo-grid-column>
</kendo-grid>

The above code groups the data, and the Grid displays it as follows:

Notice that the order of the groups is alphabetical. Keep scrolling to the next section to see how to tweak the code.

How to change the order

The SME wanted the “Teas” group to show first, then “Coffee”, and finally “Soft Drinks”. You’re probably asking “Really???” But the order genuinely made sense for the business requirements. We also needed the data in a Kendo UI Grid, since this feature relied on several other Grid capabilities.

You can easily change the grouping to use the “Order” property in the Category Model.

public groups: GroupDescriptor[] = [{ field: 'Category.Order' }];

But this will display the following, which is not user friendly. The group descriptor uses the field value as the group header, but we want that value to be the category name.

So now we just need to tweak this a bit to get it to work as required. “loadProducts” needs to be changed to update the value that is displayed. The call to “process” will order the data based on the grouping. Next, we loop through the view data, get the category name, and put it in the group value, which is then displayed when bound.

private loadProducts(): void {
  // this.gridView = process(products, { group: this.groups });
  const view = process(products, { group: this.groups });

  // Replace each group's value (Category.Order) with the category name for display.
  view.data.forEach(x => { x.value = x.items[0].Category.CategoryName; });

  this.gridView = view;
}

We also need to tweak the group column in the template so the display is user friendly. The field is bound to Category.Order, which is the same field we are grouping by.

<kendo-grid-column field="Category.Order" title="Category"> 
      <ng-template kendoGridGroupHeaderTemplate let-value="value"> {{value}}          
      </ng-template> 
</kendo-grid-column>

Now take a look at the grid. The groups are ordered using the Order field, but the display shows the category name. Just how the SMEs wanted it to work! Turns out what seemed like a challenging requirement was pretty simple to implement. Thanks, Kendo UI! You saved the day again.

When you populate the “Order” property in your data, you can easily adapt the logic to whatever your business needs. For instance, you can order the groups by the number of items per group, as sketched below.
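Here is a minimal sketch of that variation, reusing the “loadProducts” method above. It assumes, as in the example, that the Grid renders the groups in the order they appear in the processed data:

private loadProducts(): void {
  // Group as before (by Category.Order in this example).
  const view = process(products, { group: this.groups });

  // Re-order the groups client-side: largest group first.
  // Flip the comparison for smallest first.
  view.data.sort((a, b) => b.items.length - a.items.length);

  // Still display the category name in the group header.
  view.data.forEach(x => { x.value = x.items[0].Category.CategoryName; });

  this.gridView = view;
}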

Note: The jQuery version of Kendo UI does allow a “compare” function on the group sort; see the Kendo UI for jQuery documentation for details.

It would be nice if Telerik were consistent between the two versions of the framework.
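For illustration only, a rough sketch of that jQuery approach might look like the following (check the Kendo UI for jQuery documentation for the exact group compare behavior; the comparer shown here is hypothetical):

var dataSource = new kendo.data.DataSource({
  data: products,
  group: {
    field: "Category.CategoryName",
    // Order the groups by the custom Order property instead of alphabetically.
    compare: function (a, b) {
      return a.items[0].Category.Order - b.items[0].Category.Order;
    }
  }
});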

Conclusion

Telerik’s Kendo UI components are rather powerful tools. They can easily speed up development once you get the hang of them.

Here is a working sample of the code:


Building CI/CD Pipelines with Azure Data Factory: Part 3

In Part 1, I delved into lessons learned, creating the Data Factory Resources and configuring Source Control. Here is a link to Part 1.

In Part 2, I covered setting up sample resources, creating your Data Factory Pipeline in a Dev Environment and Publishing it. Here is a link to Part 2.

Now it’s time to create our Release Pipeline in Azure DevOps. This will be the CI/CD pipeline that deploys our Azure Data Factories across our environments!

Note: The following resources should have already been created for your QA environment:

  • Data Factory
  • Key Vault
  • Storage Account

Configure Containers in QA Storage Account

Your QA Storage Account will need to mimic your DEV Storage Account. Make sure it is identical. For this sample I had to do the following in the QA Storage Account:

  • Created the “copyto” and “copyfrom” containers
  • Uploaded the products.csv file to the “copyfrom” container.

Add the Secret to your QA Key Vault

You need to set up your connection string in your QA Key Vault. First, grab your connection string from your QA Storage Account. This is under the “Access Keys” menu.

Just in case you are new to storage accounts, take a look at the “Rotate Key” button. Also note that there are two keys: key1 and key2. Why? Because you can use the connection string for either key1 or key2, you can constantly rotate your keys as needed for security purposes. For instance, you can have a two-week rotation cycle:

  • Week 1 ->
    • rotate key1
    • update Key Vault with the new connection string for key1
    • wait pre-determined time for key to be propagated to all dependent resources
    • rotate key2 since resources are no longer using it
  • Week 2 ->
    • rotate key2
    • update Key Vault with the new connection string for key2
    • wait pre-determined time for key to be propagated to all dependent resources
    • rotate key1 since resources are no longer using it
  • Repeat

If a key is compromised, it is only a matter of time before it no longer works. This rotation also allows you to quickly change the keys if your system is hacked. This entire process can be scripted and automated using PowerShell.

Now, let’s add the connection string to your QA Key Vault:

The name of the secret must be identical to the name in Dev.

Once you have created the secret, you are ready to move on to adding the access policy.

Add Access Policy in Key Vault

Your QA Data Factory needs to have access to your Key Vault. In Part 2, we set this up for Dev. Now, we need to do it for QA. Start by clicking the “Add Access Policy” button.

This is a repeat of what we did before. I am only setting the access to allow my QA Data Factory to “List” and “Get” secrets; it does not need any other access. Next, select the principal that needs access. This principal is your QA Data Factory. Then click “Add”.

Finally, click “Save” which will save the new access policy for your Key Vault.

Data Factory Pipeline Triggers

In most real-world applications, your Data Factory pipelines will run on scheduled triggers. When deploying, you need to make sure the triggers are stopped before deployment and started after deployment. Luckily, we don’t have to write that logic; Microsoft already did. You just need to add their PowerShell script to the “adf_publish” branch in your repo. The sample script is published in Microsoft’s documentation on continuous integration and delivery for Azure Data Factory.

This script actually does a lot more than stopping and starting triggers. Directly from that documentation:

The following sample script can be used to stop triggers before deployment and restart them afterward. The script also includes code to delete resources that have been removed. Save the script in an Azure DevOps git repository and reference it via an Azure PowerShell task using the latest Azure PowerShell version.

Now add the script to the “adf_publish” branch of your repo.

Finally we are ready to create our CI/CD Pipeline.

Create your CI/CD Pipeline

Now let’s go setup Azure DevOps to deploy. Start by navigating to “Pipelines” in your project in Azure DevOps. Then head over to “Releases” and click “New Pipeline”.

We need an empty job. This will allow us to configure the pipeline as needed. So click “Empty Job”.

We will need to configure the stage. I kept the defaults since this is just a sample, so I simply closed the dialog.

Now add the artifact to the stage. We are deploying what was published by the DEV Data Factory. Remember, in DEV, when the Data Factory publishes changes, it commits them to the “adf_publish” branch. We are not deploying/publishing an actual build like you would when deploying an App Service. Instead, we are just deploying the ARM templates that are in that repo branch, so we need to pick the Azure Repos project and source. Then select the “Default Branch”, which will be “adf_publish”. Finally, save the artifact.

Now the artifact is configured, so it is time to set up the actual job. Click the “1 Job, 0 Task” link.

Now let’s add three tasks to this job. Here are the following tasks we will create:

  • First Task: Azure PowerShell Script (this will stop the triggers)
  • Second Task: ARM Template Deployment (this will deploy the ARM templates)
  • Third Task: Azure PowerShell Script (this will start the triggers)

First Task

You will need to click the “+” to search for the task to add.

This first task will be configured to stop the triggers. You’ll need to add an “Azure PowerShell” task.

Once you have added the task, you will see it added under your job.

Now click on the task and edit it. I changed the “Display Name” to include “Pre-Deployment”. Then you’ll need to select your Azure Subscription. The next step is picking the location of the PowerShell script that you added to your repo. You can easily do this by clicking the ellipsis.

After that you need to fill in the script arguments. When running it for pre-deployment, the syntax for the arguments is the following:

-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $true -deleteDeployment $false

Your ARM template location is in the “adf_publish” branch of your repo. You need the full path, including the ARMTemplateForFactory.json file.

Here is the configuration. Also note that I selected the latest Azure PowerShell version.

Now, save the job and let’s add the next task.

Second Task

We’ll need to add an “ARM template deployment” task.

After clicking add, configure the “Azure Details”, including the subscription, resource group, and location. The action should be set to the default, which is “Create or update resource group”.

Now scroll to the template section. Using the ellipsis, select the location of the ARM template and the template parameters. Next, click the ellipsis for the “Override Parameters”. See the next screenshot.

Here is where the secret sauce happens. This is where everything comes together. You will need to adjust the parameters using the Override Parameters dialog. The factory name will be the resource name of your Data Factory in QA. The Key Vault will be the URL of the QA Key Vault. When it deploys, the ARM template will be deployed to the resource you set for factoryName and will point to the Key Vault you configured. Pretty sweet!
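For illustration only, the override string uses the usual -name value syntax. The factoryName parameter is standard in the ARM template that Data Factory generates, but the Key Vault URL parameter name depends on what you set up in Part 2, so both values and the second parameter name below are hypothetical:

-factoryName "your-qa-data-factory-name" -yourKeyVaultBaseUrlParameter "https://your-qa-key-vault.vault.azure.net/"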

Finally, “Save” the task.

Third Task

This is really just a clone of the first task with the boolean parameters changed. First, let’s clone the task.

Then move the ARM Template task between the “Pre-Deployment” task and the “Pre-Deployment copy” as shown below.

Now update the name to be “Post Deployment”. Then change the -predeployment flag to $false and the -deleteDeployment flag to $true.
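Putting that together, the post-deployment arguments use the same placeholders as the pre-deployment example above:

-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $false -deleteDeployment $true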

Save! Your three tasks should look like this:

We are finally ready to deploy to QA!

Deploy

Let’s deploy it. First, I opened up the Azure Data Factory Studio. As you can see there are no pipelines, datasets, etc…

Then I closed that tab. Now let’s go deploy! We need to create a release.

You will need to select the “Stage” and the “Version” of the “Artifact”. Basically, this is the git commit hash of what you want to deploy.

Once you click “Create”, it will queue the release.

Then I go into the “Release” and click “Deploy”.

After a few minutes, your deployment will complete if you did everything properly.

Now open Azure Data Factory Studio (if already open, refresh it) for your QA Data Factory. Notice that your pipeline and datasets were deployed.

Then go check out your Key Vault. Notice that it points to your QA environment so the parameters were properly deployed.

Pretty exciting! We are finally ready to test the Data Factory Pipeline.

Run your QA Data Factory Pipeline

Notice that nothing is in the “copyto” folder in the QA Storage Account.

Back in Azure Data Factory Studio, open the pipelines section and go to your pipeline. Then click “Trigger Now”.

Most pipelines will require parameters; however, our pipeline does not. It’s just a simple sample. To run it, just click “Ok”.

You can verify it ran correctly in the “Pipeline Runs” under “Monitor”. Notice that the pipeline ran just fine.

But the proof is seeing that the file now exists in the “copyto” folder in the QA Storage Account. Nothing like the sweet smell of success!!!

Finally, we have built out DEV and QA environments with CI/CD for your Data Factory. You’ll have to do this again for your production environment, but that should be a lot simpler now that you have QA working.

Conclusion

I really hope you have found this three-part series helpful for setting up CI/CD Release Pipelines for your Azure Data Factory. I know it has been a lot to read through and set up.


Building CI/CD Pipelines with Azure Data Factory: Part 2

In Part 1, I delved into lessons learned, creating the Data Factory Resources and configuring Source Control.

Part 2 covers setting up sample resources, creating your Data Factory Pipeline in a Dev Environment and Publishing it.

Our Sample

Our Data Factory pipeline is going to be pretty simple. We’re just going to move a file from one location to another in a storage account. It’s a simple and easy sample, but this article is more about DevOps than about building a really cool Data Factory.

Preparing for CI/CD

You need to think through all the resources your Data Factory will need access to, like connection strings, storage accounts, etc. You could create them as parameters and replace them in the CI/CD pipelines. That’s not a bad idea, but if your secrets change, you’ll have to redeploy.

I have seen a sample of Azure DevOps pipelines that takes CSV files of secrets for a Data Factory and uses those files to replace strings in the ARM templates. I really hate that idea. I can’t even believe someone is suggesting it. In my opinion, that solution stores connection strings and secrets in places you really don’t want them. Next thing you know, those CSV files are checked in to the repo and everyone on your team has passwords to prod.

Instead, I use Key Vault. It is a much simpler approach. It is rather easy to tell CI/CD which Key Vault to use instead of storing parameters in a CI/CD pipeline. It simplifies the deployment process, and when a connection string or secret changes, you can easily update it in the appropriate Key Vault. The next time the pipeline is triggered, it will get the new secret from Key Vault. Easy to manage!

Set Up a Storage Account

So let’s go ahead and set up a storage account. I am not going to go into a lot of detail on this. Since this is a sample, we’ll keep it simple. I picked the redundancy to be “Locally-redundant storage”, which is the cheapest. Then I clicked through the rest of the steps, accepting all the defaults.

For this sample I created a DEV and QA storage account.

Set Up a Key Vault

Again I am keeping this pretty simple. I’ll assume the reader knows how to create a Key Vault. I pretty much created the Key Vault with the default config.

For this sample I created a DEV and a QA Key Vault. Once you have the Key Vault resource set up for each environment, configure the secrets (using the same secret name) in each Key Vault, making sure each secret is configured appropriately for that environment.

We are ready to set up our Data Factory. The first step is configuring the “Linked Services”.

Configure “Linked Services” in ADF

Let’s set up a pipeline in our DEV Data Factory and publish it. As previously stated, we are not going to do anything exciting. We’re just going to create a pipeline that copies a CSV file from one location to another. I know you’re seriously disappointed, but this blog is already long enough.

First let’s add our “Linked Services”. We want to add a linked service for our Dev Key Vault resource. You’ll need to go to “Linked Services” under “Manage” on the left menu.

Next click “+ New” and search for “Key Vault”. Click the icon for “Azure Key Vault” in the search result.

This will open the settings that you will need to configure for your Azure Key Vault. Name is just the name of the linked service. I leave the “Authentication Method” set to “Managed Identity”. Then I select “Enter Manually” for the “Azure key vault selection method”. The “Base URL” is a parameter you set up below; under parameters, I set the default value for that parameter. The Azure DevOps pipeline will pass the Key Vault in as this parameter. I’ll show you this in Part 3. But don’t save yet! There is still one critical piece you need to do, so read the next section. We’ll come back and save this in a minute.

Now you need to open a new tab and head over to your Dev Key Vault to add an access policy to your Dev ADF. This is necessary for your Dev Data Factory to have access to your secrets. In your Key Vault, click “Access policies”.

The next step is adding an access policy. Click “Add Access Policy”. I only give the Data Factory access to get and list secrets. Your scenario may vary, but for the sample we are building, we only need to get the secret. Regarding the principal on the left, search for your Data Factory resource, click it, and then click the “Select” button. Now you are ready to click the “Add” button on the left.

Wait a second! There is yet another button to click. You MUST click the “Save” button. I forget to do this all the time. The previous page said “Add”, and to me “Add” means add and save. But not here! You still need to click that “Save” button.

Now go back to your Data Factory and test your connection. It should light up green. If so, then click the “Create” button.

Let’s add a Linked Service for the Storage Account. This one is a little easier to create based on how the connection string is being handled: we are linking it to the “Secret” we created in the Key Vault, via the Azure Key Vault “Linked Service” we just created. This is not the most secure way to connect to a storage account, but for the purposes of this sample it works well.

Test your connection. It should light up green. If so, then click the “Create” button.

Our linked services are ready! They should look similar to the screenshot below.

Create the Data Factory Pipeline

Let’s start by creating a simple pipeline using the “Copy Data Tool”.

First you have to setup the Source. Notice how we select the linked service for the Storage Account. You’ll need to configure the “File or Folder” location to copy from.

Next we have to pick the Target. We’ll use the same storage account. In addition, we will enter the “Folder path” to copy to.

The next step requires setting up the file format for the Target.

Next come the settings, where we set the Task Name.

Now review the summary…

Then click “Next”, which kicks off creating the copy pipeline.

Once the deployment is complete, the pipeline is created. You can click the “Edit Pipeline” button to view the new pipeline.

Here is the critical piece: publishing the pipeline. This commits the pipeline to the publish branch (“adf_publish”) in source control. This branch is used by Azure DevOps for deploying to the QA and Prod environments.

After clicking “OK”, the pipeline and datasets are published along with the linked services.

Check out your repo and you’ll see the commit for the publish you just completed.

Now let’s trigger the pipeline to run which will copy the file.

After running the trigger, the file is copied into the “copyto” folder. So amazing, right? Ok, still just a quick sample…

Conclusion

Now we have a working pipeline in Dev. Part 3 will demonstrate how to set up Azure DevOps and deploy to your QA environment.