11/06/2025

How to find usage information on GitHub Copilot


As most of you already know, GitHub Copilot is a very nice addition to VS Code and the Visual Studio IDE. For the past couple of months, it has been a very good coding assistant to me across all my coding projects, especially in Visual Studio 2022.

I had been using the Free plan for GitHub Copilot ever since I started using it. The limits on the free plan were enough for the projects I worked on in the past. However, in the last couple of weeks my development work has increased, so I was wondering whether I was hitting the free plan limits on GitHub Copilot.

When you hover over the Copilot icon in Visual Studio 2022, it gives you the following options:


If you click on the "Copilot Free Status" menu item, you get something like the below:


It just says when the free limits will reset (monthly). So how do you find out how much you have already used?

This is when ChatGPT with its search tool came in handy. The following procedure describes how to find your usage against the free limit. It is not very user friendly, but for developers it is not a complicated task.

Step 1: Log in to your GitHub account and go to the following URL (preferably in Chrome or Edge):

https://github.com/settings/copilot


Step 2: Open Developer Tools

  • Chrome/Edge: Press Ctrl+Shift+I (Windows/Linux) or Cmd+Option+I (macOS)

  • Firefox: Ctrl+Shift+K (Windows/Linux) or Cmd+Option+K (macOS)


Step 3: Switch to the Network Tab

In Developer Tools, click on the Network tab and ensure “Preserve log” is enabled to keep track of activity when the page reloads.


Step 4: Reload the Page

Hit F5 or reload the browser page. This captures all network requests, including the entitlement API call.



Step 5: Filter Requests

In the Network filter bar, type entitlement to locate the key request:



Step 6: Inspect the Payload
  • Click the request to open the Headers / Response pane.

  • Go to the Response tab — it should display a JSON object detailing your usage quotas and how much remains.


As you can see in the above screenshot, the JSON response shows the remaining entitlement. To be clear, in the above example the account has not used any of its entitlement.
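
If the screenshot is not visible, the response is shaped roughly like the sketch below. Treat this as illustrative only: the entitlement endpoint is an undocumented internal API, so the field names below are assumptions and may change at any time.

{
  "access_type_sku": "free_limited_copilot",
  "chat_enabled": true,
  "limited_user_quotas": {
    "chat": 50,
    "completions": 2000
  },
  "limited_user_reset_date": "2025-07-01"
}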







31/05/2025

Introduction to SQL Server Transactions (Transaction Isolation Part 1)

What is a Transaction


First, let's take a look at what a transaction in SQL Server is.

Imagine you need to perform several related database operations that must either all succeed or all fail together. For example, transferring money from one bank account to another. This involves two main steps:

1. Deduct the amount from the source account.

2. Add the amount to the destination account.

If step 1 succeeds but step 2 fails (maybe due to a network error or server crash), you'd end up with money disappearing from the source account but not appearing in the destination account! That's a disaster.

A SQL Server transaction groups these multiple steps into a single, logical unit of work. You tell SQL Server "Start a transaction here (BEGIN TRAN)", perform all the steps, and then say "Okay, everything worked, make it permanent (COMMIT TRAN)". If something goes wrong during the steps, you say "Cancel everything since I started (ROLLBACK TRAN)", and SQL Server undoes any changes made within that transaction.
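
Here is a minimal sketch of that pattern in T-SQL, assuming a hypothetical Accounts table with AccountId and Balance columns:

BEGIN TRY
    BEGIN TRAN;

    -- Step 1: deduct the amount from the source account
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 'A';

    -- Step 2: add the amount to the destination account
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 'B';

    -- Everything worked, make it permanent
    COMMIT TRAN;
END TRY
BEGIN CATCH
    -- Something went wrong, undo everything since BEGIN TRAN
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;
END CATCH;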

You might already have heard about the ACID properties of a transaction. The ACID properties are the four pillars of a transaction, which allow SQL Server (and other relational databases) to implement the feature correctly.

So what are these ACID properties? ACID stands for:

  • A - Atomicity
  • C - Consistency
  • I - Isolation
  • D - Durability

Atomicity

Think of it: "All or Nothing."

What it means: An atomic transaction is treated as a single, indivisible unit. Either all the operations within the transaction complete successfully and are permanently recorded in the database, or none of them are. If any part of the transaction fails, the entire transaction is cancelled, and the database is returned to the state it was in before the transaction started. This is called rolling back the transaction.

Why it's important: Prevents partial updates that could leave your data in an inconsistent or incorrect state.

SQL Server: SQL Server uses the transaction log to keep track of all changes made within a transaction. If a transaction needs to be rolled back, SQL Server uses the log to undo those changes. If an error occurs during a transaction, SQL Server will often automatically initiate a rollback, or you can explicitly issue a ROLLBACK TRAN command.

Example (Bank Transfer):
○ You start the transfer transaction (BEGIN TRAN).
○ You successfully deduct $100 from Account A.
○ BUT before you can add $100 to Account B, the database server crashes.
Because of Atomicity, when the server restarts and recovers, it sees that the transaction wasn't completed (it wasn't COMMITted). It will use the transaction log to undo the deduction from Account A. Account A's balance will be back to what it was before the transaction. It's as if the transaction never happened.


Consistency

Think of it: "Valid State to Valid State."
What it means: A transaction brings the database from one valid state to another valid state. This doesn't mean the transaction logic itself is perfect (you could accidentally transfer the wrong amount!), but it ensures that the transaction adheres to all defined database rules and constraints. These rules include things like:
○ Constraints: PRIMARY KEY, FOREIGN KEY, UNIQUE, CHECK constraints (e.g., Balance must be >= 0).
○ Data Types: Ensuring you don't try to put text into a number column.
○ Triggers: Database logic that automatically runs on inserts, updates, or deletes.
If a transaction would violate any of these rules, it cannot be committed, and it will be rolled back.
Why it's important: Maintains the integrity of your data and enforces the structure and business rules defined within the database schema.
SQL Server: SQL Server automatically checks for constraint violations and executes triggers as part of the transaction. If a violation occurs, the transaction fails.
Example (Bank Transfer):
○ Let's say you have a CHECK constraint on the Balance column of your Accounts table that says Balance >= 0.
○ You try to transfer $200 from Account A, which only has $150.
○ You start the transaction (BEGIN TRAN).
○ You try to deduct $200 from Account A. The UPDATE would take Account A's balance to -$50.
○ SQL Server checks constraints as each statement executes, so the UPDATE fails immediately with a CHECK constraint violation error.
○ Because of Consistency (and the constraint), the change cannot be committed; the failed statement is undone, and your error handling can roll back the whole transaction. Account A's balance stays at $150, and the database remains in a valid state where no account has a negative balance.
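
For reference, the constraint in this example could be defined as below (using the same hypothetical Accounts table as before):

ALTER TABLE Accounts
ADD CONSTRAINT CK_Accounts_Balance CHECK (Balance >= 0);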


Isolation

Think of it: "Transactions Don't Step on Each Other's Toes."
What it means: Multiple transactions running at the same time should not interfere with each other. From the perspective of one transaction, it should appear as if it is the only transaction running on the database. This prevents various concurrency problems (like one transaction reading data that another transaction is changing but hasn't committed yet, or two transactions trying to update the same data simultaneously in a conflicting way).
Why it's important: Allows multiple users or applications to access and modify the database concurrently without causing errors or returning incorrect results.
SQL Server: SQL Server provides different Isolation Levels (like READ COMMITTED, SNAPSHOT, etc.) which control how much isolation you get. Higher isolation levels provide stronger guarantees but can sometimes impact performance because SQL Server might need to use more locks or versioning to keep transactions separate. The default level in SQL Server is READ COMMITTED, which prevents reading data that another transaction has modified but not yet committed (known as "Dirty Reads").
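
As a quick sketch, this is how a session can inspect and change its own isolation level:

-- Show the current session options, including the isolation level
DBCC USEROPTIONS;

-- Change the isolation level for the current session
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
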
Example (Bank Transfer):
○ Transaction 1 starts to transfer $100 from Account A to Account B. (BEGIN TRAN, deducts from A, prepares to add to B).
○ At the exact same time, Transaction 2 starts to read the balances of Account A and Account B to generate a report.
○ Because of Isolation, Transaction 2 will typically not see the temporary state where Account A has been debited but Account B hasn't been credited yet (especially with the default READ COMMITTED isolation level). Transaction 2 will likely see the balances as they were before Transaction 1 started, or it might wait until Transaction 1 is fully committed before reading. This ensures Transaction 2 gets a consistent view of the data, even though it's running concurrently with a transaction that's modifying that data.


Durability

Think of it: "Changes are Permanent, Even After a Crash."
What it means: Once a transaction has been successfully committed, its changes are permanent and will survive even if the database server crashes, loses power, or restarts immediately after the commit. The committed data is stored in a way that guarantees it won't be lost.
Why it's important: Ensures that once a user or application gets confirmation that a transaction is complete (e.g., "Your transfer is successful"), they can trust that the changes have truly been saved and won't disappear.
SQL Server: When you COMMIT TRAN, SQL Server ensures that the record of the transaction's changes is written to the transaction log on disk. Writing to the log is typically faster than writing the actual data pages to disk. Even if the server crashes after the log record is safely written but before the changes are written to the main data files, SQL Server can use the transaction log during startup recovery to redo the committed transaction and ensure the data files reflect the committed state.
Example (Bank Transfer):
○ You successfully complete the transfer: $100 is deducted from Account A, and $100 is added to Account B.
○ You issue COMMIT TRAN. SQL Server confirms the commit back to your application.
○ Immediately after receiving the confirmation, the power goes out, and the server shuts down.
○ Because of Durability, when the server is restarted, SQL Server performs a recovery process. It reads the transaction log, sees that your transfer transaction was committed, and ensures that the changes (deducting from A and adding to B) are fully applied to the actual data files on disk. When you next check the balances, they will correctly reflect the transfer.


Next


ACID properties are the backbone of reliable database systems like SQL Server.

Out of these four properties, Isolation is the one we can mostly configure in SQL Server.

Atomicity and Durability are automatically implemented in the SQL Server core code, to make sure all transactions adhere to them (otherwise there would be no proper transaction). And although we define the rules for Consistency ourselves (business rules via constraints and triggers), enforcing them, once defined and enabled, is automatic; no intervention is required from us.

The Isolation property, however, is much more configurable. This is because there is a trade-off between high isolation and performance. Therefore, SQL Server allows users to choose between several pre-defined isolation levels based on their performance requirements.

So let's explore more on Isolation levels in our next module.


13/05/2025

Issue Opening OneDrive Vault - Solution

Recently I had an issue where my OneDrive "Vault" was not opening.

This is the error I was getting:


It says "We couldn't unlock your Personal Vault", "We encountered an unexpected error and could not unlock your Personal Vault. Please try again. You can also access you Personal Vault on OneDrive.com".

As per the error message, I couldn't open (unlock) my OneDrive Vault on the desktop (Windows desktop app). The rest of OneDrive was working fine; I could sync all other folders through the desktop app without any issue. Online access was also fine, and the vault was opening in the online app. So it was clear something was not OK in the desktop app.

Unfortunately there were no error logs in Event Viewer or any of the other usual places.

I googled for a solution, but couldn't find anything directly related to this issue. Most reported issues were related to the OneDrive desktop app not being able to sync.

Therefore, I tried a few things mentioned in those articles.


Unlink and Re-Link

The first thing I tried was to "Unlink" the account from the desktop app and re-add it. You can do this in the app's Settings, in the Account section (see below). However, note that this will re-sync all your files. So if you have a lot of files in your OneDrive, prepare for a long sync time.


Un-install and Re-Install

The second option was to uninstall the OneDrive desktop app and re-install it.

You can uninstall the OneDrive desktop app the usual way you uninstall any other app in Windows.

Go to Settings > Apps > Installed Apps and uninstall the app. Once it is completely uninstalled, restart the machine and install it again by downloading the setup file (here is a link to OneDrive releases).


Un-block Network

Next, try unblocking your network connection, in case something is blocking the authentication to the Vault. For example, try disabling the firewall and test (make sure to re-enable it afterwards if it didn't help; if it did help, you will have to find which port/application is getting blocked by the firewall).

Another suggestion is to disable your anti-virus software and test. Again, make sure to re-enable it afterwards.

Also make sure any proxies or VPNs you are connected to are not blocking OneDrive.

For me, none of these worked. So my final option was to contact Microsoft Support (I'm a paying Office 365 customer, so I'm able to get support from them).

The first few replies from Microsoft Support were not very useful, as I had already done the basic things covered above.


Clear Credential Cache

Next, Microsoft suggested the following:

  1. Press the Windows key  + R to open a "Run" dialog.
  2. Enter the path %localappdata%\Microsoft\OneDrive\settings and select OK.
  3. Delete the PreSignInSettingsConfig.json file.
  4. Restart the machine
They said this would clear the credential cache.
But for me it didn't really work.


Bypass the Issue

Ultimately they acknowledged that the problem I was facing was part of an ongoing issue, which their engineering team is trying to resolve.

Until the fix is implemented, they suggested the steps below:

  1. Exit the OneDrive desktop app.
  2. Open a command prompt via the Start menu and run the command: REG DELETE HKCU\Software\Microsoft\OneDrive\Accounts\Personal /v VaultBackupGuid
    1. It's OK if this returns "ERROR: The system was unable to find the specified registry key or value."
  3. Open Credential Manager via the Start menu.
  4. Select the Windows Credentials tab.
  5. Find the credential named “Microsoft OneDrive Generic Data- Personal vault VHD info” and click Remove.
  6. Restart OneDrive and then try to unlock the vault. 

This actually fixed my issue.

Hope this will help someone going through the same issue.





01/05/2025

DBeaver Series - Part 2 - Connecting to Database

In my last blog post in the DBeaver series - Part 1, I was confused about why my custom database was not shown in DBeaver, even though I had connected to the localhost server.

After trying a few things, I realized that connections in DBeaver are per database.

So here is how you create a connection to a new database.

In DBeaver, in the toolbar, click on the new connection icon (or you can use the File > New menu to do the same).


You will be presented with the new connection dialog box:


Select the type of database you want to connect to. I selected PostgreSQL. Then click Next.

Connection settings dialog box opens:


Note that this is a very complex dialog box with hundreds of configuration settings. However, you only need to configure a very few of them for a default connection.

The settings dialog has 5 tabs:

  • Main
  • PostgreSQL
  • Driver Settings
  • SSH
  • SSL
We only consider "Main" tab in this blog, in future date we will discover what other settings are.

In "Main" tab, first thing you need to specify is "Server", i.e. which server you are going to connect to. You can do this via two way:
  • Host - specify host name and port as separate settings
  • Url - specify the connection as Url
Interestingly there is "Show All Databases" tick box in there, we'll tick this and see what will happen.

With the Host option, you need to specify the host name, database name and port number.

In the Authentication section, we select Database Native mode. Other options are outside the scope of this blog post.

As the authentication parameters, you need to specify the user name and password. You have the option to save the password.

Once you are done with all your settings, click the "Finish" button.

If all settings are correct you will be connected to the database:


Note that since we ticked the "Show All Databases" box, the connection shows all the databases on the server, but you can only connect to the database you specified.

09/04/2025

DBeaver Series - Part 1 - Installing and Connecting to Postgres

One of the common complaints when you move to an open source database is not having a proper database management tool like SSMS for SQL Server.

These days I'm trying a few tools with Postgres databases. One of them is DBeaver.

DBeaver is an open source universal database management tool (as they call it). It can handle other databases such as MySQL and MariaDB, and even commercial products like SQL Server and Oracle.

It has two versions: a free Community version and a Pro version. The Pro version has a lot of enterprise-level features, such as better security and AI-related features, and it also provides you with technical support. However, for the Pro edition you have to pay a fee of around $25 per month, or around $500 per year if you choose the Ultimate edition.

You can compare editions here -> https://dbeaver.com/edition/

DBeaver is a desktop application, but it also has a web version, called CloudBeaver.

CloudBeaver is a web server which you need to install and configure using the source code they provide. A demo can be found here -> https://demo.cloudbeaver.io/#/

DBeaver is currently on version 25.0.2.

DBeaver can be downloaded from here -> https://dbeaver.io/download/

For my practice, I'm OK with the free Community version.

On the installation page, you have the option of choosing a Windows installer, a zip file, or installing via the Microsoft Store. Though the installation file is only 121 MB, I had a tough time downloading it; it was showing over an hour of download time. So I opted to go with the Microsoft Store option, which seemed much faster: it downloaded and installed within 2 minutes.

Once installed, you are presented with a UI like below:


The first question it asked me was whether I wanted to create a simple database to explore the features of DBeaver. Why not? So I said yes.

When I pressed Yes, I wasn't sure where or what type of database it created. I was then presented with the choose connection dialog box.


I clicked on PostgreSQL and then "Next".

I kept all the default values in the next dialog box (I had already installed the Postgres database on this server with all default values):


Then clicked on "Finished"

After clicking on "Finished" I could see 2 database connections on the "Navigator" pane.


That's when I realized that the sample database was created using SQLite. When I clicked on the "postgres" node for the first time, it asked me to download driver files.


I have clicked on "Download" and it has started downloading and installing drivers for postgres. It took about 12 minutes to complete the driver download and installation.

After installation, I was presented with the following error:


That's when I realized I hadn't input the password when creating the connection (with all defaults). So I edited the connection (right click -> "Edit Connection") and entered the password for the postgres server in the box shown below:



Then it was all OK. I could see my postgres database, but I could not see the custom database I had already created on that server. I wasn't quite sure why; I will have to investigate this.


Just like the postgres node, if you click on the SQLite node, it will also ask you to download the required driver files (first time only).

I will continue to explore this and will blog about it if there is anything interesting.

27/03/2025

Introduction to SQL Server Indexes

What are SQL Server Indexes

Indexes in SQL Server serve a similar purpose to the index in a book we read.

In a large book, let's say a 300-page book on technology, suppose we want to read the pages written about "Databases". If the book doesn't have an index, we would have to read the entire book to find those pages. But if the book contains an index, we can simply look "Databases" up in the index, find the page numbers, and turn directly to them.

In a similar way, indexes in SQL Server provide a way to retrieve data much faster and serve your query more quickly. For example, consider a scenario where SQL Server is looking for all the employees whose names start with the letter "D" in the Employee table. If the Employee table doesn't have an index like our book had, SQL Server has to read the entire table to find them. But if it has an index on the employee name, it can just read the index, jump straight to the employees it requires, and present the result set quickly.
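
For illustration, assuming an index keyed on the Name column (like the non-clustered index we create later in this module), a query like the one below can be served with an index seek instead of a full table scan:

-- With an index on Name, SQL Server can seek straight to the 'D' names
SELECT EmpId, Name
FROM Employee
WHERE Name LIKE 'D%';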

An index is a data structure which is linked to a table. In a way, you can call it a small copy of the table, with the data arranged in a different way than in the main table. You can have multiple indexes on a single table; each index arranges its copy of the table data in a different way, so different indexes can satisfy different queries.


Types of Indexes

There are several types of indexes.

You can have tables without indexes. When a table doesn't have an index, it is called a "heap": there is no order to the data, and rows are stored as they come.

Clustered Index

You can create a "Clustered" index to a table. When you create a clustered index for a table, table (or the heap) is re-arranged in the order of the index. So basically table become the clustered index, because data in table physically re-arranged to match the index. 

For example, say we have employee data stored in a table without an index (i.e. a heap). Then we create a clustered index on that table, let's say on the employee Id column (EmpId).

CREATE CLUSTERED INDEX IX_EMPLOYEE_ID ON Employee(EmpId)

Then we get a table which is physically sorted on employee Id. This is a clustered index. You can only have one clustered index per table, because the table data can only be physically arranged in one order.
When you specify a primary key for a table, SQL Server by default automatically creates a clustered index on that column (or columns).

Non-Clustered Index

Let's say we have a lot of queries based on employee age, so we need to access age very frequently. For example, we need the employees who are younger than 25 years. But as it stands now, we need to read the entire table to find the young employees.
To ease the pain, we can create an index on the Age column but leave the physical structure of the table as it is. To do this we use a non-clustered index.

CREATE NONCLUSTERED INDEX IX_EMPLOYEE_AGE ON Employee(Age)

This creates another structure linked to the table; a kind of small copy of the table, arranged in a different manner. Though we only specify the "Age" column in the index, SQL Server always includes the clustering key (here the primary key, EmpId) in the index, so that it can relate each index row back to the row in the original table.

You can have as many non-clustered indexes as you like on a table. However, the more indexes you have, the more data SQL Server has to add/update/delete when you add/update/delete data in the table, because SQL Server has to maintain the non-clustered indexes as well as the table data (whenever they are affected by the change).

Unique Index

A unique index is a special type of index; it tells SQL Server that the specified column(s) cannot have duplicate values. You can have both clustered and non-clustered unique indexes. By default, a primary key constraint creates a unique clustered index, and a unique constraint creates a unique non-clustered index.

CREATE UNIQUE NONCLUSTERED INDEX IX_EMPLOYEE_EMAIL ON Employee(Email)

This tells the query optimizer that the data in the Email column is unique (i.e. has no duplicates). The optimizer uses this information when it comes up with execution plans.

Filtered Index

A filtered index is another special type of non-clustered index, where we only store the rows that queries frequently require.
For example, if we have a "Location" column on the Employee table and most queries are for Location = 'UK' employees, we can create a non-clustered index filtered to only the employees who are in the UK.

CREATE NONCLUSTERED INDEX IX_EMPLOYEE_LOC_FILTERED ON Employee(Location) WHERE Location = 'UK'

A filtered index is efficient because you don't need to store the entire value collection for that column in the index. This makes the index smaller and also cheaper for the optimizer to use.


Other Types of Indexes

There are many other types of indexes in SQL Server. For example:
• Full Text Index
• XML Index
• Spatial Index
• Column Store Index
• Hash Index (in In-Memory OLTP)
We are not going to talk about these in this module.


Index Storage and How They Work

As mentioned before, we are mainly focused on rowstore indexes. Rowstore indexes are organized into a data structure called a B+ tree.

As we learned in other modules, data in SQL Server is organized into 8 KB pages.
Let's first look at the organisation of a clustered index.
Each page in the B+ tree structure is called a node.
The top node of the index is called the "root node".
The very bottom nodes of the index are called "leaf nodes".
There can be multiple levels of nodes between the root node and the leaf nodes; these are called intermediate levels, and the nodes in these levels are called "intermediate nodes".
Nodes at the same level are doubly linked, which means we can go back and forward from one node to another within the level.

Clustered Index Example:

Let's consider an example index on the following Employee table. Let's also assume EmpId is the primary key of the table. When you define a primary key on a table, SQL Server automatically creates a clustered index on the table.

Or you can create one like below:

CREATE CLUSTERED INDEX IX_EMPLOYEE_ID ON Employee(EmpId)


Let's assume that in one 8 KB page we can only fit 5 records. So at the leaf level, the nodes are arranged in the following way.


Note that because the nodes are doubly linked (indicated by the arrows), you can traverse (read) from one node to another both forwards and backwards.

Then we add the intermediate level of nodes:

Note that each record at the intermediate level is a pointer to a leaf node (data page), and intermediate pages are also doubly linked, making traversal easier.
In a complex data table you will have multiple levels of intermediate nodes.

Finally, we add the root level to complete the picture:
Note that the root node has pointers to the intermediate level.


Index Seek Operation:

Let's say you want to seek to record 26 - Ethan Hunt.

SELECT EmpId, Name FROM Employee WHERE EmpId = 26

The following picture shows the seek path:

As you can see, we first come to the root, look for 26 and find the pointer to the first intermediate node; we go to the first intermediate node and find the pointer to the second leaf page; then we go to the second leaf page and read record 26.


Range scan operation:

If we want to get a range of records, for example records from 20 to 40:

SELECT EmpId, Name FROM Employee WHERE EmpId BETWEEN 20 AND 40

We traverse the index in the following way:

We start from the root and find that the 20s are under the first intermediate-node pointer; we go to the first intermediate node and find that the 20s are under the second leaf-node pointer; we go to the second leaf node and find record 21 (because there is no 20); we start reading from there until the end of the page; then, using the link to the next leaf page, we move to the next leaf node and read on until record 38.

Non-clustered Index

A non-clustered index uses the same B+ tree structure, but the actual data rows stay where they are, in the heap or the clustered index; the leaf nodes of the non-clustered index hold the index key values plus pointers back to those rows. Let's consider a scenario where we want to create a non-clustered index on the same Employee table as above. We will create a non-clustered index on the "Name" field.

CREATE NONCLUSTERED INDEX IX_EMPLOYEE_NAME ON Employee(Name)

The leaf level of the non-clustered index will look like below:


Note that the index rows are now sorted on the Name column and each carries a pointer to the clustered index key (i.e. EmpId). In addition to that, a non-clustered index can have included columns. For example, you can include the "Age" column in the index; if you do, the value of Age will be stored in the non-clustered index. Since the non-clustered index carries fewer columns than the table, one page can hold more rows than a clustered index page. This is illustrated in the picture above by showing 6 rows in a single page.

The diagram above shows how the first two rows of the non-clustered index point to their related rows in the clustered index (the table). For clarity we have only drawn the first two, but every row in the non-clustered index points to its related row in the clustered index.

A non-clustered index also has intermediate and root nodes.


Index Seek Operation with Non-clustered Index

For example, let's say we want to retrieve the record for "Oscar White". Without an index on the Name column, we would have to scan the entire table. But if we use the non-clustered index on the "Name" column, we can go directly to Oscar White, as shown in the picture below:

To seek "Oscar White", SQL server start from root node and from root node, it figure out records starting with O are in second intermediate node. Therefore it traverse to second intermediate node. From the second intermediate node, SQL server finds out Os are third leaf node, therefore it goes there and fetch the record. If query required only name and Id, non-clustered index itself will be able to satisfy the query and operation completes. However if query require additional fields which are not in non-clustered index, SQL server can use the pointer in the non-clustered index to go to clustered index to fetch additional columns (this is called key lookup operation).


Covering Index

A covering index is an index which has all the columns required to satisfy a query without referring to the base table. Because a covering index satisfies the query by itself, the IO required to serve the query is lower. If all the fields needed to satisfy the query are not in the non-clustered index, SQL Server has to do key lookups into the clustered index to pick up the missing fields.

For example let's consider a query like below:

SELECT EmpId, Name, Age 
FROM Employee
WHERE Name = 'Hanna Lee'

This query looks for the employee named "Hanna Lee" and needs to return her age as well.

If we have a non-clustered index like the one shown in the Non-Clustered Index section above, we can find the record easily by seeking to "Hanna Lee" using the index. But since the index doesn't have the "Age" field, SQL Server needs to do a key lookup (following the pointer from the non-clustered index into the clustered index) to get the "Age" value.

What if we include the Age column in the index like below:

CREATE NONCLUSTERED INDEX IX_EMPLOYEE_NAME ON Employee(Name)
INCLUDE (Age)

This is a non-clustered index on the Name column, but additionally it includes the "Age" column. Note that "Age" is not part of the index key, so the index will not be sorted on the Age column; it is just carried in the index leaf/data rows.

When the query runs, SQL Server will use our IX_EMPLOYEE_NAME index. It will do a seek to Hanna Lee's record as shown above. The query needs the Age value as well; luckily, "Age" is also in the non-clustered index, so SQL Server can return the result without even touching the clustered index.

This module only teaches you the very basics of indexes. We hope to look into more advanced index-related topics in the next modules.


09/03/2025

Restoring Encrypted SQL Server DB Backup on different Server

Last week, when I was tasked with moving an old server to a new server, I was faced with an error I wasn't familiar with. I got it while trying to restore a database backup from the old SQL Server onto the new SQL Server. The error message told me that the database was encrypted and therefore could not be restored on the new server.

Further analysis showed me that on the old server the database was encrypted using TDE (Transparent Data Encryption).

As a side note, TDE is a security feature in SQL Server that encrypts the database at rest (when it is saved on disk). It protects the data files by encrypting them on disk, ensuring that even if someone gains access to the database files, they cannot read the data without the proper decryption keys. When a database uses TDE, its backup files are also encrypted.



The above illustration, from the Microsoft Learn website, shows the architecture of TDE.

Therefore, the first thing we need when restoring this encrypted database on the new server is a Database Master Key (DMK).

You can create one by running the following T-SQL on the new server:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YourStrongPassword';

Next, we need certificates. However, creating a new certificate on the new server won't allow us to decrypt the backup from the old server; we need the certificate from the old server. To be precise, you need a backup of the certificate that encrypted the Database Encryption Key (DEK) of the database, so you can restore it on the new server.

That is why it is vital to back up your certificates to a safe place (e.g. an online data store or vault) right after you create them. If your server crashes, you will never be able to restore TDE-encrypted databases, even if you have backups of them, without the certificate.

Assume you don't have the certificate backed up (or you don't know where the backups are, because someone else made them and that knowledge is not available to you now). In that case, if you still have access to the old server (as in my case above), you can still generate the certificate backup.

First, you need to know the name of the certificate. To get it, you can run the following T-SQL:

SELECT db_name(database_id) AS DatabaseName, c.name AS CertificateName
FROM sys.dm_database_encryption_keys dek
JOIN sys.certificates c ON dek.encryptor_thumbprint = c.thumbprint;

Now you know the name, you can back it up:

USE master;
BACKUP CERTIFICATE MyBackupCert 
TO FILE = 'C:\Backup\MyBackupCert.cer'  
WITH PRIVATE KEY (
    FILE = 'C:\Backup\MyBackupCert.pvk',
    ENCRYPTION BY PASSWORD = 'StrongPassword123'
);

Note that this password and the password you used to create the master key don't need to match.

Above query will generate two files:
  • Certificate file (with cer extension)
  • Private Key file (with pvk extension)
Move them to the new server, and also make sure you have noted down the password you used.

Then, create the certificate on the new server:

USE master;
CREATE CERTIFICATE MyBackupCert
FROM FILE = 'C:\Backup\MyBackupCert.cer'
WITH PRIVATE KEY (
    FILE = 'C:\Backup\MyBackupCert.pvk',
    DECRYPTION BY PASSWORD = 'StrongPassword123'
);

Note that you need to use the same password you used to back up the certificate and private key.

When you create the certificate with the above T-SQL, its private key gets encrypted with the database master key of the new server, thereby completing the chain shown in the TDE architecture diagram.

Now you are ready to restore your database onto the new server.
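
The restore itself is then a normal RESTORE DATABASE. Here is a minimal sketch, with hypothetical database, backup and data file names:

RESTORE DATABASE MyDatabase
FROM DISK = 'C:\Backup\MyDatabase.bak'
WITH MOVE 'MyDatabase' TO 'C:\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_log' TO 'C:\Data\MyDatabase_log.ldf';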





28/02/2025

Question on C# version on VS Projects

I recently inherited a Visual Studio 2022 project; it is a C# console application on .NET Framework 4.8. So I thought: not a very old project, what can go wrong?

Well, only a few hours into working on the project, I faced the following error:

Feature 'nullable reference types' is not available in C# 7.3. Please use language version 8.0 or greater.

I got this error because I tried to use a nullable reference type in the new code I was developing. But I was wondering: why would this project be using C# 7.3? At the time of writing, the latest version of C# was version 13, which was released with .NET 9.0 in 2024. So how come a relatively new project was stuck on version 7.3?
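
For illustration, this is the kind of line that triggered the error for me (a hypothetical example):

// Fails to compile under C# 7.3: 'nullable reference types'
// requires language version 8.0 or greater
string? name = null;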

To find the answer I turned to ChatGPT and the web. This is what I found:

The default version of C# depends on the target framework of your project. Here are the default versions for .NET Framework 4.6.2 and 4.8:

.NET Framework Version | Default C# Version | Maximum Supported Version
.NET Framework 4.6.2   | C# 7.0             | C# 7.3
.NET Framework 4.8     | C# 7.3             | Latest version as of now


That explained a lot. This project was created on .NET Framework 4.8, so it defaulted to C# 7.3.

The next question that popped up was: how can I upgrade the C# version on this project to a later version, so that I can use new language features in my code?

The answer is that you need to manually edit the .csproj file (i.e. the Visual Studio project file) and insert something similar to the below:

<PropertyGroup>
  <LangVersion>8.0</LangVersion>
</PropertyGroup>

If you always want to use the latest version of the language, you can do the following:
<PropertyGroup>
  <LangVersion>latest</LangVersion>
</PropertyGroup>

However, one thing to consider is that although .NET Framework 4.8 lets you set the latest language version, some features require runtime support that is only available in .NET Core / .NET 5+. For example, async streams and default interface methods need that newer runtime.

Unfortunately, there is no setting on a Visual Studio project that tells you which version of the C# language is in use. So if your .csproj file doesn't have the above property, you have to infer that your project uses the default language version of its target framework.

If you are using more modern frameworks, here are the default C# versions that go with them:
  • .NET 8 => default to C# 12
  • .NET 7 => default to C# 11
  • .NET 6 => default to C# 10

IIS Application Pool Recycling Settings

Recently I had a chance to look into IIS's capability to auto-recycle application pools, because one of our web apps was hogging the whole server, claiming all the memory it could get when working at its peak.

When hosting applications on IIS (Internet Information Services), managing application pool recycling is crucial for ensuring stability and performance, not only for the target app but for the whole server. Recycling refreshes the application environment by periodically restarting the worker process, which helps prevent memory leaks, release unwanted memory, and avoid resource locking and other unexpected issues.

Recycling Settings in IIS App Pools

Recycling settings are per app pool, therefore you can configure each pool differently. These settings can be found in the Advanced Settings dialog: right click on the desired application pool and go to "Advanced Settings".

There are lot of "Advanced Settings" therefore, you might have to scroll down to see these settings, they are at the right below the dialog.


"Disable overlapping recycle", "Disabling Recycling for Configuration Changes" (blog done for this), "Generate Recycle Event Log Entry" are settings which are relate to recycling but not something allow us to control recycling times. So we skip them for bit.


1. Regular Time Interval (Fixed Interval Recycling)

  • Setting Name: Regular Time Interval (in Minutes)

  • Default: 1740 minutes (29 hours). A value of 0 means the application pool will not recycle at a regular time interval.

  • Description: Automatically recycles the worker process after a specified time interval.

  • Use Case: Useful for applications that need periodic refreshing to prevent performance degradation. However, setting this too frequently can disrupt user sessions. For example, you can set your application pool to restart every 12 hours (720 minutes), when you know the restarts will fall outside its peak times.


2. Specific Time(s) (Scheduled Recycling)

  • Setting Name: Specific Times

  • Description: Allows recycling at predefined times during the day.


  • Use Case: Ideal for scheduled maintenance windows. For example, in the above screenshot we recycle the application pool at 00:05 (just after midnight), 3:05 AM (probably after a maintenance job), 6:30 AM (just before users start to access it), 12:45 PM (midday, when traffic is low while everyone is at lunch), 5:30 PM (when users are logging off) and 7:00 PM (before the evening maintenance tasks kick off).

3. Memory-Based Recycling

a) Private Memory Limit

  • Setting Name: Private Memory Limit (KB)

  • Description: Recycles the worker process when its private memory consumption exceeds a specified threshold.

  • Use Case: Helps prevent excessive memory usage due to memory leaks in applications. For example, if an app is expected to use a maximum of 2 GB, setting a limit slightly above that (e.g., 2.5 GB, which is 2,621,440 KB) ensures it gets recycled before causing server slowdowns.

b) Virtual Memory Limit

  • Setting Name: Virtual Memory Limit (KB)

  • Description: Recycles the worker process when its virtual memory usage exceeds a specified limit.

  • Use Case: Less commonly used, but it can help control applications with excessive virtual memory allocations. Similar to the private memory limit, but it includes shared memory space as well.


4. Request-Based Recycling

a) Request Limit Recycling

  • Setting Name: Request Limit

  • Description: Recycles the worker process after processing a specified number of requests.

  • Use Case: Useful for high-traffic applications where a certain number of requests may cause the application to degrade. For example, if an API handles millions of requests daily, recycling it after every 500,000 requests can help maintain performance.
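
If you prefer to script these settings rather than clicking through the GUI, they can also be set with IIS's appcmd tool. This is a sketch under the assumption of a pool named "MyAppPool"; the property names follow the recycling.periodicRestart section of the applicationHost.config schema:

REM Recycle every 12 hours (Regular Time Interval)
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.time:12:00:00

REM Recycle when private memory exceeds 2.5 GB (the value is in KB)
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.privateMemory:2621440

REM Add a scheduled recycle at 3:05 AM
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /+recycling.periodicRestart.schedule.[value='03:05:00']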


There are other settings, such as "Idle Time-out" (causes the app pool to terminate or suspend when there are no requests) and "CPU Limit", which can also affect application pool recycling/termination behaviour, but we will discuss those in a separate blog.



Best Practices for Application Pool Recycling

Although the settings above help you recycle the pool and claim back unused or over-used memory, this needs to be done with care, because frequent recycling can disrupt user sessions and degrade performance.

Always try to recycle during off-peak hours. This will minimize disruption to user sessions.

There are settings in "Generate Recycle Event Log Entry" section which allows your to control what recycle action should be logged in Windows Event log. Use this to your advantage and track when is the actual recycling is happening by monitoring the logs for period of time. This will give more insight and use this insight to adjust the recycling settings.





