30/07/2025

Introduction to SQL Server Transactions (Transaction Isolation Part 2)

This is the second part of the "Introduction to SQL Server Transactions" series. You can see the previous part here.

In the previous module we learned the basics of SQL Server transactions and the properties of a transaction.

There, we discussed that among the four ACID properties of a transaction, Isolation is the one that can be adjusted in SQL Server. In this module, we will delve deeper into the Isolation property to understand its significance.
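Before we look at the different levels, it helps to know how to check what your current session is using. A small sketch (output formats may vary slightly between SQL Server versions):

    -- Show the current session options, including the isolation level.
    DBCC USEROPTIONS;

    -- Or query it directly (1 = READ UNCOMMITTED, 2 = READ COMMITTED,
    -- 3 = REPEATABLE READ, 4 = SERIALIZABLE, 5 = SNAPSHOT).
    SELECT transaction_isolation_level
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID;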

Why Do We Need Different Isolation Levels? The Concurrency Conundrum



When multiple transactions run simultaneously, they can interfere with each other in undesirable ways. Here are a few of the scenarios you might encounter:


1. Dirty Reads: 


Transaction B reads data that Transaction A has changed, but Transaction A hasn't committed (saved) yet. If Transaction A then rolls back (undoes its changes), Transaction B has read data that technically never existed (i.e. dirty data).

Analogy: Let us take a look at the bank money transfer example again. Transaction A is transferring money between two accounts. Before it commits (saves) its changes, Transaction B reads the account balances for a report that the manager wants.
  • If there is no isolation between the transactions, and Transaction A fails before it commits its changes, Transaction B has read wrong data for the report. This is called a dirty read, and it leads to wrong report output.
  • If there is isolation between the transactions, Transaction B (the report) has to wait until Transaction A completes and only then read the data. But that means the manager needs to wait a bit longer to get his report. (See the sketch after this list for a concrete T-SQL version.)
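To make the dirty read concrete, here is a minimal T-SQL sketch. The Accounts table and its columns are made up purely for illustration; session 1 plays Transaction A and session 2 plays Transaction B:

    -- Session 1 (Transaction A): transfer money, but do not commit yet.
    BEGIN TRANSACTION;
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
    -- ... no COMMIT yet ...

    -- Session 2 (Transaction B): read the balances without waiting for session 1.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT AccountId, Balance FROM Accounts;   -- sees the uncommitted balances

    -- Session 1: something goes wrong, so undo the transfer.
    ROLLBACK TRANSACTION;
    -- Session 2 has already reported balances that never officially existed: a dirty read.

Note that under the default READ COMMITTED level, session 2's SELECT would simply wait for session 1 to finish instead of reading the uncommitted values.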

2. Non-Repeatable Read: 



Transaction A reads some data. Transaction B then updates or deletes that specific data and commits its changes. If Transaction A reads the same data again, it gets a different value (because the values were updated) or finds the data missing (because the row was deleted).

Analogy: In our banking example, Transaction A reads a person's balance and does some calculation to make a decision (for example, to check eligibility for bonus interest). Transaction A makes its decision based on the value it read; if it decides the person is eligible for bonus interest, it re-reads the balance to add the interest. But before Transaction A re-reads, Transaction B deducts from the same person's balance and commits (saves) the new value to the database. The new balance might no longer be eligible for the bonus. This is called a non-repeatable read, because Transaction A could not repeat its earlier read and get the same value.
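The same situation as a minimal T-SQL sketch, again with the illustrative Accounts table. The default READ COMMITTED level does not protect Transaction A here; REPEATABLE READ would make Transaction B wait until A finishes:

    -- Session 1 (Transaction A): read the balance to check eligibility.
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- the default
    BEGIN TRANSACTION;
    SELECT Balance FROM Accounts WHERE AccountId = 1;   -- e.g. 150000, eligible

    -- Session 2 (Transaction B): deducts money and commits in between A's two reads.
    UPDATE Accounts SET Balance = Balance - 100000 WHERE AccountId = 1;

    -- Session 1 (Transaction A): re-reads the same row and gets a different value.
    SELECT Balance FROM Accounts WHERE AccountId = 1;   -- now 50000
    COMMIT TRANSACTION;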


3. Phantom Read: 


Transaction A reads a set of rows based on some condition (a WHERE clause). Transaction B then inserts a new row that meets the same condition and commits it to the database. If Transaction A runs the same query again, it sees a new "phantom" row that wasn't there before.

Analogy: In our banking example, Transaction A reads accounts with high balances (e.g. higher than 100,000) for a report. Then Transaction B updates an account that was not in Transaction A's list and increases its balance to over 100,000, so this account now also matches the condition. If Transaction A re-runs the same query (say, for a sub-section of the report it is producing), it finds a new account that was not there before, which can lead to confusing results in the report.
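Here is the textbook phantom read as a minimal T-SQL sketch, using an INSERT (as in the definition above) against the same illustrative Accounts table. REPEATABLE READ does not prevent it, because it only locks the rows that were actually read, not the whole range the WHERE clause covers; SERIALIZABLE would lock the range as well:

    -- Session 1 (Transaction A): read all high-value accounts.
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRANSACTION;
    SELECT AccountId, Balance FROM Accounts WHERE Balance > 100000;   -- e.g. 3 rows

    -- Session 2 (Transaction B): adds a new account that matches the condition.
    INSERT INTO Accounts (AccountId, Balance) VALUES (99, 250000);

    -- Session 1 (Transaction A): the same query now returns a "phantom" row.
    SELECT AccountId, Balance FROM Accounts WHERE Balance > 100000;   -- now 4 rows
    COMMIT TRANSACTION;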

Isolation levels are SQL Server's way of letting you decide which of these phenomena you are willing to tolerate in exchange for better performance and concurrency. Stricter levels prevent more phenomena but can cause more blocking (transactions waiting for each other). So it is a trade-off between concurrency and data integrity.
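Picking a level is a single statement that applies to the current session. A small sketch of the syntax, again using the illustrative Accounts table:

    -- The default is READ COMMITTED.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;    -- allows dirty reads
    -- SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- prevents dirty reads
    -- SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- also prevents non-repeatable reads
    -- SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;     -- also prevents phantom reads

    BEGIN TRANSACTION;
    SELECT AccountId, Balance FROM Accounts WHERE AccountId = 1;
    COMMIT TRANSACTION;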

In the next part of this series, we will take a look at how isolation is implemented in SQL Server (what techniques are used).




16/07/2025

PostgreSQL: Basic Operations using DBeaver - Part 1

Once you restore a database and create a connection to it, the next thing you want to do is have a look at the table structure and the data inside those tables.

I'm going to use DBeaver for my database access and operations, because it is a very sophisticated tool, similar to SSMS for SQL Server, but in my opinion with more options.

Let's take a look at how these basic operations are carried out with the help of DBeaver.


In the "Database Navigator" panel of DBeaver, expand the database you want to access. Under database node, you will find "Schema" node (see picture above). Inside this node you will find all available schemas, in most cases it will be under public schema.
Under the schema, you will find usual database objects such as Tables, Views, Functions and etc.
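If you prefer queries over the tree view, the same information is available through the standard information_schema views. A small sketch (assuming your objects live in the public schema):

    -- List all schemas in the connected database.
    SELECT schema_name FROM information_schema.schemata;

    -- List tables and views in the public schema.
    SELECT table_name, table_type
    FROM information_schema.tables
    WHERE table_schema = 'public'
    ORDER BY table_name;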

If you want to see the data in a table, you can double-click on it, or you can right-click and select "View Table" from the context menu.

This will open the table in the right-hand pane.


By default this shows the first 200 rows of the table, with all columns, in a grid view. If you want a text view (in case you need to copy records somewhere), you can switch to the text view.


If you click on the arrow icon on a column (in grid view), you get sorting and filtering options for that column.


The filter bar at the top shows the current filters:


You can clear them all by clicking the eraser-like icon on the right side of the filter bar. You can further configure your filters by clicking the "filter" icon on the right side.

The bottom bar shows very helpful buttons for interacting with the table data.



There are buttons to add/delete/edit records in the table. You can also export the data in the table by pressing the "Export data" button. Next, it shows the number of records currently in the grid, followed by the total number of records. However, when you initially load the table, it just shows 200+, because it has not counted all the rows yet. If you want to know the total count of records, you can click the button between the two counts.

As you can see, DBeaver provides a rich set of GUI features for interacting with the data in your database. Of course, you can do all of this using plain SQL as well.
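For reference, here are rough SQL equivalents of the grid actions above, written against a hypothetical public.accounts table (these are not necessarily the exact statements DBeaver runs):

    -- Roughly what the default grid view shows: the first 200 rows.
    SELECT * FROM public.accounts LIMIT 200;

    -- Sorting and filtering on a column, like the column menu does.
    SELECT * FROM public.accounts
    WHERE balance > 100000
    ORDER BY balance DESC
    LIMIT 200;

    -- The total row count behind the count button.
    SELECT count(*) FROM public.accounts;

    -- Adding, editing and deleting records.
    INSERT INTO public.accounts (id, balance) VALUES (1, 500);
    UPDATE public.accounts SET balance = 600 WHERE id = 1;
    DELETE FROM public.accounts WHERE id = 1;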

We will look at further features in another article.






