24/09/2025

Introduction to SQL Server Transactions (Transaction Isolation Part 3)

This is the third part of the "Introduction to SQL Server Transactions" series. You can find the previous parts below:

Part 1

Part 2

In this part, we discuss how SQL Server implements concurrency control.

Locking and Versioning

SQL Server uses the following two techniques to implement concurrency control:

  1. Locking
  2. Versioning

Locking

Locking is the traditional mechanism SQL Server uses to isolate transactions.

When a transaction accesses data, SQL Server places locks on that data to prevent other transactions from making conflicting changes. The lock type and granularity determine the effect and scope of the lock.

Locking Types

There are different types of locks SQL Server can place. Each lock type imposes some level of restriction on other transactions. Here is a summary of the lock types and what each one blocks:


We will discuss lock types and locking in detail in future blogs.

Locking Granularity: Locks can be applied at row level, page level, table level, or even database level. Granularity allows SQL Server to avoid locking more objects than required.

When a lock is placed at row level, only that row is restricted from access by other transactions. Other rows remain free for read and write operations from other transactions. This reduces blocking.

Page-level locks, on the other hand, lock all rows in that page against reads or modifications (depending on the lock type). The same goes for table- and database-level locks: they lock more rows, hence more data, and are prone to more blocking issues.

Locking is primarily used in Read Committed, Repeatable Read, and Serializable isolation levels.
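As a minimal sketch of seeing lock granularity in action (the table dbo.Accounts, its columns, and the values here are hypothetical), you can inspect the locks your own session holds via sys.dm_tran_locks:

```sql
BEGIN TRAN;

-- Typically takes an exclusive (X) lock at the row (KEY) level,
-- plus intent locks on the page and table above it
UPDATE dbo.Accounts
SET Balance = Balance - 100
WHERE AccountId = 1;

-- resource_type shows the granularity:
-- KEY = row, PAGE = page, OBJECT = table, DATABASE = database
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

ROLLBACK TRAN;
```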

Versioning

Versioning uses a multi-version concurrency control (MVCC) approach. Instead of locking data for readers, SQL Server maintains row versions (multiple versions of a row) in tempdb, allowing readers to see a consistent snapshot of the data.

When a row is modified, SQL Server keeps an older version in tempdb so that readers can still access the version valid at the start of their transaction or query.

This is a more recent technique. The isolation levels that use versioning are Read Committed Snapshot Isolation (RCSI) and Snapshot Isolation (SI).

Versioning helps reduce blocking and deadlocks, improving concurrency.
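Both versioning-based levels are enabled per database. A minimal sketch (the database name MyDb and the table dbo.Accounts are hypothetical):

```sql
-- RCSI: the default READ COMMITTED behaviour switches to row versioning
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- SI: allows sessions to opt in to SNAPSHOT isolation explicitly
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- With SI enabled, a session opts in per transaction:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT * FROM dbo.Accounts;  -- reads the version as of transaction start
COMMIT TRAN;
```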


17/09/2025

Recycling IIS App Pools via Scheduled Task

Don't ask me why (it's complicated), but we had a situation where we wanted to recycle several of our production IIS App Pools on one of our web servers.

We wanted to do this as a Windows Scheduled Task, because it had to run in the very early morning (a daily job, executed once).

Our initial plan was to write a PowerShell script and run it through Windows Task Scheduler.

So we asked ChatGPT for a script (BTW: who writes PowerShell scripts manually these days?).

This is what was provided:

<#
.SYNOPSIS
    Recycles a specific IIS Application Pool.

.DESCRIPTION
    This script recycles a given IIS Application Pool using the WebAdministration module.
    You must run it with Administrator privileges.

.PARAMETER AppPoolName
    The name of the IIS Application Pool to recycle.

.EXAMPLE
    .\Recycle-AppPool.ps1 -AppPoolName "MyAppPool"
#>

param (
    [Parameter(Mandatory=$true)]
    [string]$AppPoolName
)

# Import IIS module
Import-Module WebAdministration -ErrorAction Stop

# Check if the app pool exists
$appPool = Get-ChildItem IIS:\AppPools | Where-Object { $_.Name -eq $AppPoolName }

if ($null -eq $appPool) {
    Write-Host "Application Pool '$AppPoolName' not found." -ForegroundColor Red
    exit 1
}

try {
    Write-Host "Recycling Application Pool: $AppPoolName ..." -ForegroundColor Cyan
    Restart-WebAppPool -Name $AppPoolName
    Write-Host "Successfully recycled Application Pool: $AppPoolName" -ForegroundColor Green
}
catch {
    Write-Host "Failed to recycle Application Pool '$AppPoolName'. Error: $_" -ForegroundColor Red
    exit 1
}


We could run it in the following manner:
.\Recycle-AppPool.ps1 -AppPoolName "MyAppPool"

One line for each app pool we want to recycle.

But then, when we asked ChatGPT for the command line to run this in Windows Task Scheduler, it provided this:
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Recycle-AppPool.ps1" -AppPoolName "MyAppPool"

However, it proactively provided a much simpler approach as well. That was the surprise here.

ChatGPT suggested using appcmd.exe.

I had never heard of it before. Then I realized it is a command-line utility that gets installed along with the IIS stack.

It was much easier to use (though it has less error handling compared to the PowerShell script).

%windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"MyAppPool"

In the end, we chose to use appcmd.exe.
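Since we had several pools, the scheduled task ended up pointing at a small batch file with one appcmd line per pool. A sketch (the pool names and the path C:\Scripts\RecyclePools.cmd are made up for illustration):

```
REM C:\Scripts\RecyclePools.cmd - recycle each production app pool in turn
%windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"AppPool1"
%windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"AppPool2"
```

Registering it as a daily early-morning task can then be done from an elevated prompt, for example: schtasks /Create /TN "RecycleAppPools" /TR "C:\Scripts\RecyclePools.cmd" /SC DAILY /ST 04:00 /RU SYSTEM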

That's something we learned this week. Thanks, ChatGPT.



29/08/2025

Using Own API keys in Various IDEs

AI hype is so high these days that everyone wants the best AI models for the least cost. Though subscriptions don't cost much individually, when you add up the cost of each one, it comes to a substantial amount if you are very active in the AI world.

Therefore, I was wondering what kind of support the main AI-agent-integrated IDEs provide for bringing in your own key (BYOK), that is, your own AI model API key.

Cursor IDE

Cursor allows you to bring your own key as of now (Aug 2025). However, there are some limitations. As per their website, "Custom API keys only work with standard chat models. Features requiring specialized models (like Tab Completion) will continue using Cursor’s built-in models."

They do support all major model providers (e.g. OpenAI, Google, Claude, Azure, Amazon).

Github Copilot

GitHub Copilot only supports API keys for organisational clients. You need to buy an organisational membership to enable the use of your own API keys.

Here is the link.

Windsurf

Windsurf supports BYOK; however, it only supports Claude models under the BYOK settings.

Here is the link.


26/08/2025

How to See Which Certificate Was Used in an Existing Backup


Recently I encountered an interesting scenario related to SQL Server backups.

In our environment, a few SQL Servers are running. The servers are backed up, and the databases are backed up too. Everything was running smoothly. Until it wasn't: one of our servers crashed.

Well, no one was worried, because we had backups and there was not much data loss.

So after we rebuilt the server (we built it from scratch rather than from backups, because we needed to refresh the OS anyway) and installed SQL Server, we tried restoring the databases.

Only then did everyone realize that the backups were encrypted. I know, it's our bad; we should have tested restores periodically, but in a small business like ours, that never happens.

So how do we restore the backups? We needed the certificate with which those backups were encrypted.

Luckily, we found a set of certificate backups that had been used to encrypt the database backups.

Everyone was happy.

However, how do we know which certificate to use on this particular server? The names didn't really give us a clue.

So we had to Google and chat with AI a bit.

That's when we came up with the following approach.

First, you need to read just the header of the backup:

RESTORE HEADERONLY FROM DISK = 'D:\Backups\MyEncryptedBackup.bak';

This will show a result set similar to the one below:


This result set has the following columns (56 of them): BackupName, BackupDescription, BackupType, ExpirationDate, Compressed, Position, DeviceType, UserName, ServerName, DatabaseName, DatabaseVersion, DatabaseCreationDate, BackupSize, FirstLSN, LastLSN, CheckpointLSN, DatabaseBackupLSN, BackupStartDate, BackupFinishDate, SortOrder, CodePage, UnicodeLocaleId, UnicodeComparisonStyle, CompatibilityLevel, SoftwareVendorId, SoftwareVersionMajor, SoftwareVersionMinor, SoftwareVersionBuild, MachineName, Flags, BindingID, RecoveryForkID, Collation, FamilyGUID, HasBulkLoggedData, IsSnapshot, IsReadOnly, IsSingleUser, HasBackupChecksums, IsDamaged, BeginsLogChain, HasIncompleteMetaData, IsForceOffline, IsCopyOnly, FirstRecoveryForkID, ForkPointLSN, RecoveryModel, DifferentialBaseLSN, DifferentialBaseGUID, BackupTypeDescription, BackupSetGUID, CompressedBackupSize, Containment, KeyAlgorithm, EncryptorThumbprint, EncryptorType.


The last two columns, EncryptorThumbprint and EncryptorType, will tell you which certificate was used.
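To match that thumbprint against the certificates you have restored (or are about to restore) into master, you can compare it with sys.certificates. A sketch, with a hypothetical backup path:

```sql
-- Read the encryptor details from the backup header
RESTORE HEADERONLY FROM DISK = 'D:\Backups\MyEncryptedBackup.bak';

-- List the certificates in master with their thumbprints;
-- the one matching EncryptorThumbprint is the certificate you need
SELECT name, thumbprint, subject
FROM master.sys.certificates;
```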

Something I didn't know before.

30/07/2025

Introduction to SQL Server Transactions (Transaction Isolation Part 2)

This is the second part of the "Introduction to SQL Server Transactions" series. You can see the previous part here.

In the previous module, we learned the basics of SQL Server transactions and their properties.

In there, we discussed that among the four ACID properties of a transaction, Isolation is the one that can be modified in SQL Server. In this module, we will delve deeper into the Isolation property to understand its significance.

Why Do We Need Different Isolation Levels? The Concurrency Conundrum



When multiple transactions run simultaneously, they can interfere with each other in undesirable ways. Here are a few of the scenarios you might encounter:


1. Dirty Reads: 


Transaction B reads data that Transaction A has changed, but Transaction A hasn't committed (saved) yet. If Transaction A then rolls back (undoes its changes), Transaction B has read data that technically never existed (i.e. dirty data).

Analogy: Let us look at the bank money transfer example again. Transaction A is doing a money transfer between two accounts. But before it commits (saves) its changes, Transaction B reads the account balances for a report that a manager wants.
  • If there is no isolation between the transactions, and Transaction A fails before committing its changes, Transaction B has read wrong data for the report. This is called a dirty read, and it leads to incorrect report output.
  • If there is isolation between the transactions, Transaction B (the report) will have to wait until Transaction A completes before reading the data. But that means the manager will need to wait a bit longer for the report.
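The dirty-read scenario can be sketched in T-SQL with two separate sessions (the table dbo.Accounts and the values are hypothetical):

```sql
-- Session 1: change a balance but do not commit yet
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

-- Session 2: READ UNCOMMITTED permits reading the uncommitted value
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- dirty read

-- Session 1: roll back - Session 2 has read data that never existed
ROLLBACK TRAN;
```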

2. Non-Repeatable Read: 



Transaction A reads some data. Transaction B then updates or deletes that specific data and commits its changes. If Transaction A reads the same data again, it gets a different value (because the values have been updated) or finds the data missing (because that particular row has been deleted).

Analogy: In our banking example, Transaction A reads a person's balance and does some calculation to make a decision (for example, to check eligibility for bonus interest). Transaction A makes its decision based on the values it read; if it decides this person is eligible for bonus interest, it will re-read the balance to add the interest. But before Transaction A re-reads, Transaction B deducts from the same person's balance and commits (saves) the values to the database. The new balance might no longer be eligible for the bonus. This is called a non-repeatable read, because Transaction A couldn't repeat the read it made earlier.
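A sketch of how the REPEATABLE READ level prevents this, using two sessions (names and values are hypothetical):

```sql
-- Session 1: shared locks are held until the transaction ends
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

-- Session 2: this UPDATE now blocks until Session 1 finishes
UPDATE dbo.Accounts SET Balance = Balance - 500 WHERE AccountId = 1;

-- Session 1: the re-read returns the same value as before
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;
COMMIT TRAN;
```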


3. Phantom Read: 


Transaction A reads a set of rows based on some condition (where clause). Transaction B then inserts a new row that meets that same condition (where clause) and commits to the database. If Transaction A runs the same query again, it sees a new "phantom" row that wasn't there before.

Analogy: In our banking example, Transaction A reads accounts with high balances (e.g. higher than 100,000) for a report. Then Transaction B updates an account that was not in Transaction A's list, increasing its balance to over 100,000. Now this account also matches the condition. If Transaction A re-reads the accounts with the same condition (say, for a sub-section of the report), it finds a new account that was not there before (which might lead to confusing results in the report).
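A sketch of how the SERIALIZABLE level prevents phantoms, again with two sessions (names and values are hypothetical):

```sql
-- Session 1: SERIALIZABLE takes key-range locks covering the condition,
-- so no new matching rows can appear mid-transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
SELECT AccountId, Balance FROM dbo.Accounts WHERE Balance > 100000;

-- Session 2: blocked until Session 1 commits, because committing this
-- would create a phantom row for Session 1's query
UPDATE dbo.Accounts SET Balance = 150000 WHERE AccountId = 42;

-- Session 1: the same query returns the same rows - no phantoms
SELECT AccountId, Balance FROM dbo.Accounts WHERE Balance > 100000;
COMMIT TRAN;
```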

Isolation levels are SQL Server's way of letting you decide which of these phenomena you are willing to tolerate in exchange for better performance and concurrency. Stricter levels prevent more phenomena but can cause more blocking (transactions waiting for each other). So it is a trade-off between concurrency and data integrity.

In the next part of this series, let us take a look at how isolation is implemented in SQL Server (and what techniques are used).




16/07/2025

PostgreSQL: Basic Operations using DBeaver - Part 1

Once you restore a database and create a connection to it, next thing you want to do is have a look at the table structure and data inside those tables.

I'm going to use DBeaver for my database access and operations, because it is a very sophisticated tool, similar to SSMS for SQL Server, but I think it has more options.

Let's take a look at how these basic operations are carried out with the help of DBeaver.


In the "Database Navigator" panel of DBeaver, expand the database you want to access. Under the database node, you will find the "Schemas" node (see picture above). Inside this node you will find all available schemas; in most cases your objects will be under the public schema.
Under the schema, you will find the usual database objects such as Tables, Views, Functions, etc.

If you want to see the data in a table, you can double-click on it, or you can right-click and select "View Table" from the context menu.

This will open the table in the right-hand pane.


By default, this shows the first 200 rows of the table, with all columns, in a grid view. If you want a text view (in case you need to copy records somewhere), you can switch to the text view.


If you click on the arrow icon on a column (in grid view), you get sorting and filtering options for that column.


The filter bar at the top shows the current filters:


You can clear them all by clicking the eraser-like icon on the right side of the filter bar. You can further configure your filters by clicking the "filter" icon on the right side.

The bottom bar shows very helpful buttons for interacting with the table data.



There are buttons to add/delete/edit records in the table. You can export the table's data by pressing the "Export data" button. Next, it shows the number of records currently in the grid, followed by the total number of records. However, when you initially load the table, it just shows 200+, because it has not counted all the rows. If you want to know the total record count, you can click on the button between the two counts.

As you can see, DBeaver provides a rich set of GUI features for interacting with the data in your database. Of course, you can do all of this using plain SQL as well.
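For reference, the GUI actions above map to plain SQL along these lines (the table and column names are hypothetical):

```sql
-- First 200 rows, as the grid view loads by default
SELECT * FROM public.users LIMIT 200;

-- Sorting and filtering, as the column arrow menu offers
SELECT * FROM public.users
WHERE display_name LIKE 'A%'
ORDER BY creation_date DESC
LIMIT 200;

-- Total row count, as the count button computes
SELECT count(*) FROM public.users;
```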

We will look at further features in another article.







30/06/2025

PostgreSQL - Restoring a database

In recent days I have been following Brent Ozar's PostgreSQL site, Smart Postgres.

In his articles and classes he uses a copy of the StackOverflow database (Postgres version). Therefore, I wanted to restore it on my test Postgres server.

Here is how I did it.

Download the Dump

First, I downloaded the StackOverflow data dump (which was created by Brent) using the links on his site. See this page for links to the data dump torrent and for instructions on restoring and configuring it.

I chose the small version of the data dump (which expands to 6 GB, but the torrent is about 1 GB).

Create a Database

In this scenario, I used DBeaver to help me with the database. In DBeaver, I created a new database connection.


See the above screenshot for the settings I used. I kept most settings at their defaults, but made sure to tick the "Show all databases" tick box; this allows you to see all databases in addition to the one you specified in the connection.

Once the connection is created, right-click on the database node and select the "Create New Database" menu item.


The Create database dialog appears; enter the name "stackoverflow" in the database name box. Keep all other settings at their defaults and press OK.

Your new database will appear under the database node:


Restore

Right-click on the newly created database and select Tools > Restore.


This will pop up the restore dialog. In the restore dialog, browse to the downloaded data dump (.sql) file and make sure to tick the "Discard object owners" tick box.

This makes sure certain errors are bypassed. Due to the way the dump was created, there are some mismatches in object owners. This is explained on Brent's page, where he advises ignoring them; ticking the above box skips those errors.


Then press the "Start" button.

Confirm your request:


Progress will appear in the dialog box, and depending on the power of your machine, the restore will take about 2-10 minutes.
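If you prefer the command line over DBeaver's GUI, the same restore can be sketched with the standard Postgres client tools (the host, user, and dump file name here are examples):

```
# Create the target database
createdb -h localhost -U postgres stackoverflow

# Restore the plain-SQL dump; ownership errors can be ignored,
# as Brent's instructions advise
psql -h localhost -U postgres -d stackoverflow -f stackoverflow.sql
```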


Once it has finished, press Cancel to close the dialog.

Now you will be able to see the StackOverflow tables in the database:




11/06/2025

How to find usage information on Github Copilot


As most of you already know, GitHub Copilot is a very nice addition to VS Code and Visual Studio. Over the past couple of months, it has been a very good coding assistant to me in all my coding projects, especially in Visual Studio 2022.

I had been using the free plan for GitHub Copilot ever since I started using it. The limits on the free plan were enough for the projects I worked on in the past. However, in the last couple of weeks my development work has increased, so I was wondering whether I am hitting my free plan limits on GitHub Copilot.

When you hover over the Copilot icon in Visual Studio 2022, it gives the following options:


If you click on the "Copilot Free Status" menu, you get something like below:


It just says when the free limits will reset (monthly). So how do you find out how much you have already used?

This is when ChatGPT with its search tool came in handy. The following procedure describes how to find your usage against the free limit. It is not very user friendly, but for developers this is not a complicated task.

Step 1: Log in to your GitHub account and go to the following URL (preferably in Chrome or Edge):

https://github.com/settings/copilot


Step 2: Open Developer Tools

  • Chrome/Edge: Press Ctrl+Shift+I (Windows/Linux) or Cmd+Option+I (macOS)

  • Firefox: Ctrl+Shift+K (Windows/Linux) or Cmd+Option+K (macOS)


Step 3: 
Switch to the Network Tab

In Developer Tools, click on the Network tab and ensure “Preserve log” is enabled to keep track of activity when the page reloads.


Step 4: Reload the Page

Hit F5 or reload the browser page. This captures all network requests, including the entitlement API call.



Step 5: Filter Requests

In the Network filter bar, type entitlement to locate the key request:



Step 6: Inspect the Payload
  • Click the request to open the Headers / Response pane.

  • Go to the Response tab — it should display a JSON object detailing your usage quotas and how much remains.


As you can see in the above screenshot, the JSON response shows the remaining entitlement. To be clear, in the above example the account has not used any of its entitlement.







31/05/2025

Introduction to SQL Server Transactions (Transaction Isolation Part 1)

What is a Transaction


First, let's take a look at what a transaction in SQL Server is.

Imagine you need to perform several related database operations that must either all succeed or all fail together. For example, transferring money from one bank account to another. This involves two main steps:

1. Deduct the amount from the source account.

2. Add the amount to the destination account.

If step 1 succeeds but step 2 fails (maybe due to a network error or server crash), you'd end up with money disappearing from the source account but not appearing in the destination account! That's a disaster.

A SQL Server transaction groups these multiple steps into a single, logical unit of work. You tell SQL Server "Start a transaction here (BEGIN TRAN)", perform all the steps, and then say "Okay, everything worked, make it permanent (COMMIT TRAN)". If something goes wrong during the steps, you say "Cancel everything since I started (ROLLBACK TRAN)", and SQL Server undoes any changes made within that transaction.
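The bank-transfer flow described above can be sketched in T-SQL like this (the table dbo.Accounts and the account IDs are hypothetical):

```sql
BEGIN TRAN;

BEGIN TRY
    -- Step 1: deduct the amount from the source account
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

    -- Step 2: add the amount to the destination account
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;

    -- Both steps worked: make the changes permanent
    COMMIT TRAN;
END TRY
BEGIN CATCH
    -- Something went wrong: undo everything since BEGIN TRAN
    ROLLBACK TRAN;
END CATCH;
```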

You might have already heard about the ACID properties of a transaction. The ACID properties are the four pillars of a transaction, which allow SQL Server (and other relational databases) to implement transactions correctly.

So what are these ACID properties? ACID stands for:

  • A - Atomicity
  • C - Consistency
  • I - Isolation
  • D - Durability

Atomicity

Think of it: "All or Nothing."

What it means: An atomic transaction is treated as a single, indivisible unit. Either all the operations within the transaction complete successfully and are permanently recorded in the database, or none of them are. If any part of the transaction fails, the entire transaction is cancelled, and the database is returned to the state it was in before the transaction started. This is called rolling back the transaction.

Why it's important: Prevents partial updates that could leave your data in an inconsistent or incorrect state.

SQL Server: SQL Server uses the transaction log to keep track of all changes made within a transaction. If a transaction needs to be rolled back, SQL Server uses the log to undo those changes. If an error occurs during a transaction, SQL Server will often automatically initiate a rollback, or you can explicitly issue a ROLLBACK TRAN command.

Example (Bank Transfer):
○ You start the transfer transaction (BEGIN TRAN).
○ You successfully deduct $100 from Account A.
○ BUT before you can add $100 to Account B, the database server crashes.
Because of Atomicity, when the server restarts and recovers, it sees that the transaction wasn't completed (it wasn't COMMITted). It will use the transaction log to undo the deduction from Account A. Account A's balance will be back to what it was before the transaction. It's as if the transaction never happened.


Consistency

Think of it: "Valid State to Valid State."
What it means: A transaction brings the database from one valid state to another valid state. This doesn't mean the transaction logic itself is perfect (you could accidentally transfer the wrong amount!), but it ensures that the transaction adheres to all defined database rules and constraints. These rules include things like:
○ Constraints: PRIMARY KEY, FOREIGN KEY, UNIQUE, CHECK constraints (e.g., Balance must be >= 0).
○ Data Types: Ensuring you don't try to put text into a number column.
○ Triggers: Database logic that automatically runs on inserts, updates, or deletes.
If a transaction would violate any of these rules, it cannot be committed, and it will be rolled back.
Why it's important: Maintains the integrity of your data and enforces the structure and business rules defined within the database schema.
SQL Server: SQL Server automatically checks for constraint violations and executes triggers as part of the transaction. If a violation occurs, the transaction fails.
Example (Bank Transfer):
○ Let's say you have a CHECK constraint on the Balance column of your Accounts table that says Balance >= 0.
○ You try to transfer $200 from Account A, which only has $150.
○ You start the transaction (BEGIN TRAN).
○ You try to deduct $200 from Account A, which would take Account A's balance to -$50.
○ SQL Server checks the CHECK constraint as the UPDATE statement executes (in SQL Server, constraint checks are not deferred to COMMIT). It sees that Account A's balance would become negative, which violates the constraint.
○ Because of Consistency (and the constraint), the statement fails and the transaction is rolled back. Account A's balance remains $150, and the database stays in a valid state where no account has a negative balance.
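The constraint from this example could be defined as follows. A sketch (the table name and types are hypothetical):

```sql
CREATE TABLE dbo.Accounts (
    AccountId INT PRIMARY KEY,
    Balance   MONEY NOT NULL,
    CONSTRAINT CK_Accounts_Balance CHECK (Balance >= 0)
);

-- With only $150 in the account, this statement fails with a
-- CHECK constraint violation as it executes, and any enclosing
-- transaction can then be rolled back
UPDATE dbo.Accounts SET Balance = Balance - 200 WHERE AccountId = 1;
```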


Isolation

Think of it: "Transactions Don't Step on Each Other's Toes."
What it means: Multiple transactions running at the same time should not interfere with each other. From the perspective of one transaction, it should appear as if it is the only transaction running on the database. This prevents various concurrency problems (like one transaction reading data that another transaction is changing but hasn't committed yet, or two transactions trying to update the same data simultaneously in a conflicting way).
Why it's important: Allows multiple users or applications to access and modify the database concurrently without causing errors or returning incorrect results.
SQL Server: SQL Server provides different Isolation Levels (like READ COMMITTED, SNAPSHOT, etc.) which control how much isolation you get. Higher isolation levels provide stronger guarantees but can sometimes impact performance because SQL Server might need to use more locks or versioning to keep transactions separate. The default level in SQL Server is READ COMMITTED, which prevents reading data that another transaction has modified but not yet committed (known as "Dirty Reads").
Example (Bank Transfer):
○ Transaction 1 starts to transfer $100 from Account A to Account B. (BEGIN TRAN, deducts from A, prepares to add to B).
○ At the exact same time, Transaction 2 starts to read the balances of Account A and Account B to generate a report.
○ Because of Isolation, Transaction 2 will typically not see the temporary state where Account A has been debited but Account B hasn't been credited yet (especially with the default READ COMMITTED isolation level). Transaction 2 will likely see the balances as they were before Transaction 1 started, or it might wait until Transaction 1 is fully committed before reading. This ensures Transaction 2 gets a consistent view of the data, even though it's running concurrently with a transaction that's modifying that data.


Durability

Think of it: "Changes are Permanent, Even After a Crash."
What it means: Once a transaction has been successfully committed, its changes are permanent and will survive even if the database server crashes, loses power, or restarts immediately after the commit. The committed data is stored in a way that guarantees it won't be lost.
Why it's important: Ensures that once a user or application gets confirmation that a transaction is complete (e.g., "Your transfer is successful"), they can trust that the changes have truly been saved and won't disappear.
SQL Server: When you COMMIT TRAN, SQL Server ensures that the record of the transaction's changes is written to the transaction log on disk. Writing to the log is typically faster than writing the actual data pages to disk. Even if the server crashes after the log record is safely written but before the changes are written to the main data files, SQL Server can use the transaction log during startup recovery to redo the committed transaction and ensure the data files reflect the committed state.
Example (Bank Transfer):
○ You successfully complete the transfer: $100 is deducted from Account A, and $100 is added to Account B.
○ You issue COMMIT TRAN. SQL Server confirms the commit back to your application.
○ Immediately after receiving the confirmation, the power goes out, and the server shuts down.
○ Because of Durability, when the server is restarted, SQL Server performs a recovery process. It reads the transaction log, sees that your transfer transaction was committed, and ensures that the changes (deducting from A and adding to B) are fully applied to the actual data files on disk. When you next check the balances, they will correctly reflect the transfer.


Next


ACID properties are the backbone of reliable database systems like SQL Server.

Of these four properties, Isolation is the one we can configure the most in SQL Server.

Atomicity and Durability are implemented automatically in the SQL Server core code to make sure all transactions adhere to them (otherwise there would be no proper transactions). Although we define the rules for Consistency (business rules via constraints and triggers), enforcing them (once defined and enabled) is automatic; no intervention is required from us.

However, the Isolation property is much more configurable. This is because there is a trade-off between high isolation and performance. Therefore, SQL Server allows the user to choose from several pre-defined isolation levels based on performance requirements.

So let's explore more on Isolation levels in our next module.


13/05/2025

Issue Opening OneDrive Vault - Solution

Recently I had an issue where my OneDrive "Personal Vault" would not open.

This is the error I was getting:


It says: "We couldn't unlock your Personal Vault", "We encountered an unexpected error and could not unlock your Personal Vault. Please try again. You can also access your Personal Vault on OneDrive.com".

As per the error message, I couldn't open (unlock) my OneDrive Vault in the Windows desktop app. The rest of OneDrive was working fine; I could sync the other folders through the desktop app without any issue. Online access was also fine, and the Vault was opening in the online app. So clearly, something was not OK in the desktop app.

Unfortunately, there were no error logs in Event Viewer or any of the other usual places.

I Googled for a solution but couldn't find anything directly related to this issue. Most reported issues related to the OneDrive desktop app not being able to sync.

Therefore, I tried a few of the things mentioned in those articles.


Unlink and Re-Link

The first thing I tried was to "Unlink" the account from the desktop app and re-add it. You can do this in the Account section of the app's Settings (see below). However, note that this will re-sync all your files, so if you have a lot of files in your OneDrive, prepare for a long sync time.


Un-install and Re-Install

The second option was to uninstall the OneDrive desktop app and re-install it.

You can uninstall the OneDrive desktop app the usual way you uninstall any other app in Windows.

Go to Settings > Apps > Installed Apps and uninstall the app. Once it is completely uninstalled, restart the machine and install it again by downloading the setup file (here is a link to OneDrive releases).


Un-block Network

Then you can try unblocking your network connection, in case something is blocking authentication to the Vault. For example, try disabling the firewall and test again (make sure to re-enable it if that doesn't help; if it does, you will have to find which port/application is being blocked by the firewall).

Another suggestion is to try disabling anti-virus software and testing. Again, make sure to re-enable it afterwards.

Also make sure any proxies or VPNs you are connected to are not blocking OneDrive.

For me none of these worked. So my final option was to contact Microsoft Support (I'm a paid Office 365 customer, so I'm able to get support from them).

The first few replies from Microsoft Support were not very useful, as I had already done the basic things covered above.


Clear Credential Cache

Next, Microsoft suggested the following:

  1. Press the Windows key  + R to open a "Run" dialog.
  2. Enter the path %localappdata%\Microsoft\OneDrive\settings and select OK.
  3. Delete the PreSignInSettingsConfig.json file.
  4. Restart the machine
They said this would clear the credential cache.
But for me it didn't really work.


Bypass the Issue

Ultimately they have acknowledge that the problem I'm facing is part of ongoing issue, which their Engineering team is trying to resolve.

Until fix is implemented they have suggested below steps:

  1. Exit the OneDrive desktop app.
  2. Open a command line via the Start menu and run the command: REG DELETE HKCU\Software\Microsoft\OneDrive\Accounts\Personal /v VaultBackupGuid
    1. It's OK if it returns: Error: The system was unable to find the specified registry key or value.
  3. Open Credential Manager via the Start menu.
  4. Select the Windows Credentials tab.
  5. Find the credential named “Microsoft OneDrive Generic Data- Personal vault VHD info” and click Remove.
  6. Restart OneDrive and then try to unlock the vault. 

This actually fixed my issue.

Hope this will help someone going through the same issue.





01/05/2025

DBeaver Series - Part 2 - Connecting to Database

In my last blog post in the DBeaver series, Part 1, I was confused about why my custom database was not shown in DBeaver, even though I had connected to the localhost server.

After trying a few things, I realized that connections in DBeaver are per database.

So here is how you create a connection to a new database.

In DBeaver, in the toolbar, click on the new connection icon (or you can use the File > New menu to do the same).


You will be presented with new connection dialog box:


Select the type of database you want to connect to. I selected PostgreSQL. Then click Next.

The connection settings dialog box opens:


Note that this is a very complex connection settings dialog with hundreds of configuration settings. However, you only need to configure a few things for a default connection.

The settings dialog has 5 tabs:

  • Main
  • PostgreSQL
  • Driver Settings
  • SSH
  • SSL
We only consider the "Main" tab in this blog; at a future date we will explore what the other settings are.

In the "Main" tab, the first thing you need to specify is the "Server", i.e. which server you are going to connect to. You can do this in one of two ways:
  • Host - specify host name and port as separate settings
  • Url - specify the connection as Url
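If you choose the Url option, a typical PostgreSQL JDBC URL looks like this (the host, port, and database name are examples):

```
jdbc:postgresql://localhost:5432/stackoverflow
```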
Interestingly, there is a "Show All Databases" tick box there; we'll tick it and see what happens.

With the Host option, you need to specify the host name, database name, and port number.

In the Authentication section, we select Database Native mode. The other options are outside the scope of this blog post.

As the authentication parameters, you need to specify the user name and password. You have the option to save the password.

Once you are done with all your settings, click the "Finish" button.

If all the settings are correct, you will be connected to the database:


Note that since we ticked the "Show All Databases" box, the connection shows all databases, but you are only connected to the database you specified.
