
10/01/2025

Introduction to SQL Server Statistics - Tutorial

Wishing you all, my loving readers, Happy New Year 2025!

This is the first blog post for 2025. As a new initiative, I'm going to (try to) write some tutorials on selected features of SQL Server. These are my own take on the features, written using knowledge I have acquired over the years. I hope they will give something back to the community. I'm also hoping to create a YouTube video to go with this; when it's done, I will update the page with a link.

Introduction to SQL Server Statistics

What are SQL Server Statistics

SQL Server statistics are mainly used by the Query Optimizer, so before we talk about statistics, we need to know a little about how the Query Optimizer works.

When running a query, there can be many ways to execute it (different ways to get the data, different ways to join it, etc.). Therefore SQL Server needs a plan before executing a query. We call it a query plan.

Since there can be many combinations of ways to execute a query, there can be a lot of plans the optimizer could come up with for one query.

Therefore, it needs to find the optimal plan, the best plan (or rather a good enough plan …). The optimizer only has limited time to come up with a plan; it has to finish and present the plan as quickly as possible. So the optimizer tries to find a good enough plan as quickly as possible. Hence, it needs all the help it can get.


In order to create the optimal plan, the Query Optimizer analyses the query, and then it needs metadata about the data the query is looking for.

For example, a query could be looking for the total number of orders one particular customer placed during a date range. To create a plan for this query, the optimizer needs to look into the orders table and the customer table.

It needs to know roughly how many orders were placed during the given time period, because depending on the number of rows we might need different strategies to aggregate the data. For example, if we find 100 rows, we can aggregate one way; if we find 1M rows, we might not be able to use the same approach.

It also needs to allocate memory for the query to run. In order to estimate the amount of memory to allocate, it needs to estimate how many rows it is dealing with.

It cannot simply count the rows, because that would take a lot of time. The optimizer needs to report back to SQL Server as quickly as possible.

So it needs to find out how the data is distributed in the orders table. We call this the cardinality estimate. This is where statistics come into play: statistics describe how data is distributed in a table/column.
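You can see this estimate in action in any execution plan. Here is a minimal sketch, assuming a hypothetical dbo.Orders table: run it in SSMS with "Include Actual Execution Plan" and compare "Estimated Number of Rows" with "Actual Number of Rows" on the scan/seek operator; the estimate comes from statistics, not from counting.

SELECT COUNT(*) AS OrderCount
FROM dbo.Orders
WHERE OrderDate >= '2024-01-01' AND OrderDate < '2024-02-01';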

Types and Structure of SQL Server Statistics

Statistics are created on indexes and columns. They can be on a single column or on a combination of columns.
There are two types of statistics:
• Statistics created for an index (can cover multiple columns)
• Statistics created for a column (only created on a single column)

Statistics have 3 components:
• Header -> meta information about the statistics
• Density -> a mathematical measure of the selectivity of a column or combination of columns
• Histogram -> shows how the data is distributed

You can see statistics information by running the following command:

DBCC SHOW_STATISTICS(MyTable, MyIndex);
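If you only want one section at a time, DBCC SHOW_STATISTICS also accepts the STAT_HEADER, DENSITY_VECTOR and HISTOGRAM options:

DBCC SHOW_STATISTICS(MyTable, MyIndex) WITH STAT_HEADER;
DBCC SHOW_STATISTICS(MyTable, MyIndex) WITH DENSITY_VECTOR;
DBCC SHOW_STATISTICS(MyTable, MyIndex) WITH HISTOGRAM;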


Let's create "Employee" table, which have Employee Id, Age, Salary and Gender as an example:
CREATE TABLE Employee (
    EmpId INT PRIMARY KEY, -- Primary Key column
    Age INT,    -- Integer column for age
    Salary NUMERIC(18,2),  -- Numeric column for salary with precision 18 and scale 2
    Gender VARCHAR(10),-- Gender of the employee
);

We will populate the table with some random data:
-- Insert 1000 random rows into the Employee table with salary rounded to the nearest 100
SET NOCOUNT ON;

DECLARE @i INT = 1;
DECLARE @r FLOAT;

WHILE @i <= 1000
BEGIN
    SET @r = RAND(CHECKSUM(NEWID()));  -- One random value per row, so the CASE buckets below keep the intended probabilities

    INSERT INTO Employee (EmpId, Age, Salary, Gender)
    VALUES (
        @i,  -- EmpId (using sequential numbers for simplicity)
        FLOOR(RAND(CHECKSUM(NEWID())) * (70 - 18 + 1)) + 18,  -- Age: random number between 18 and 70
        ROUND(CAST((RAND(CHECKSUM(NEWID())) * (100000 - 15000) + 15000) AS NUMERIC(18,2)), -2),  -- Salary: rounded to the nearest 100
        CASE 
            WHEN @r < 0.4 THEN 'MALE'      -- 40% chance for MALE
            WHEN @r < 0.8 THEN 'FEMALE'    -- 40% chance for FEMALE
            WHEN @r < 0.9 THEN 'DUAL'      -- 10% chance for DUAL
            ELSE 'UNKNOWN'                 -- 10% chance for UNKNOWN
        END -- Gender
    );

    SET @i = @i + 1;
END;

Create indexes:
CREATE INDEX IX_Employee_Age ON Employee(Age);

-- Create an index on the 'Salary' column
CREATE INDEX IX_Employee_Salary ON Employee(Salary);

-- Create an index on the 'Gender' column
CREATE INDEX IX_Employee_Gender ON Employee(Gender);

Let's check out the stats now:
DBCC SHOW_STATISTICS(Employee, IX_Employee_Age);



Let's deep dive into each section of the statistics data structure:

Header
The header contains the name of the statistics, when it was last updated, and the row count AT THE TIME the statistics were created/updated:
• Name -> name of the index
• Updated -> when the statistics were last updated
• Rows -> number of rows in the table at the time of the statistics creation/update
• Rows Sampled -> number of rows sampled to build the statistics

Density
Density measures how much variety a column has (selectivity). A unique index has very high selectivity.
Density is a measure that provides insight into the selectivity of a column or combination of columns in an index:

Density = 1 / Number of Distinct Values

Let's have a look at the density data for the Age index:



There are 53 distinct age values in the Employee table (with my example data; this may vary slightly on yours). So Density = 1/53 => 0.0188679245283019.
But if you add the primary key EmpId to that column, Density = 1/1000 (because EmpId is unique in that table and we have 1000 rows).
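You can verify the density figure yourself. A quick check (the exact count will vary with your random data):

SELECT 1.0 / COUNT(DISTINCT Age) AS AgeDensity FROM dbo.Employee;   -- ~0.0188679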

If we look at the density data for the Gender index (where there are only 4 possible values):



So high density means low selectivity; the optimizer uses this data to choose the right indexes.
The "Average Length" column in this vector shows the average total length of the columns in consideration. E.g. the Gender values above average about 5 characters in length.

Histogram

Let's take a look at the histogram for the Salary index:

DBCC SHOW_STATISTICS(Employee, IX_Employee_Salary);

SELECT * FROM Employee WHERE Salary BETWEEN 80600.00 AND 81500.00
ORDER BY Salary

Let's take a look at the data for the 81500 bucket from the table:



The histogram is the most used piece of information in statistics.
A histogram in SQL Server statistics is a data structure that describes the distribution of data in a column or index:
• The histogram's main structure contains up to 200 steps, called buckets
• If NULL values are allowed, there is a special bucket for NULLs, so 201 buckets
• If the column has fewer than 200 distinct values, there will be fewer than 200 steps

• RANGE_HI_KEY -> highest value in the range (81500)
• EQ_ROWS -> number of rows equal to the highest value (1 in the example above for the 81500 bucket, but 2 for the 80600 bucket)
• RANGE_ROWS -> number of rows with values between this high key and the previous high key (i.e. 80600), exclusive -> 7 in the example above, spread over the distinct values 80800, 80900, 81100, 81300 and 81400
• DISTINCT_RANGE_ROWS -> number of distinct values among the range rows (5 here)
• AVG_RANGE_ROWS -> RANGE_ROWS / DISTINCT_RANGE_ROWS (i.e. 7/5 = 1.4)

Note that each bucket is anchored on its high key: it covers the values above the previous bucket's high key, up to and including its own.
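These numbers drive the optimizer's estimates. A small sketch: check the "Estimated Number of Rows" on these two queries against the histogram values above (your bucket boundaries will differ with different random data):

-- Hits a RANGE_HI_KEY: the estimate comes from EQ_ROWS
SELECT * FROM dbo.Employee WHERE Salary = 81500.00;

-- Falls inside a range: the estimate comes from AVG_RANGE_ROWS
SELECT * FROM dbo.Employee WHERE Salary = 81000.00;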

Creation of Statistics

Indexed Columns (Rowstore):
Creation of statistics on indexed columns is automatic and cannot be turned off.

Non-Indexed Columns:
By default, SQL Server creates stats for non-indexed columns if they are used as filtering columns in queries.
But you can turn this off.

To see whether auto create stats is turned on or off:

SELECT DATABASEPROPERTYEX('<<YOUR DB>>', 'IsAutoCreateStatistics')

OR

SELECT is_auto_create_stats_on 
FROM sys.databases 
WHERE name = 'YourDatabaseName'

In order to enable or disable auto create stats:
ALTER DATABASE <<your database>> SET AUTO_CREATE_STATISTICS ON/OFF

These auto-created statistics are single-column only and are named in the format: _WA_Sys_columnname_hexadecimal

To see created stats:

SELECT * FROM sys.stats 
WHERE object_id = OBJECT_ID('<<your table name>>')

SELECT * FROM sys.stats 
WHERE object_id = OBJECT_ID('dbo.Employee')



In this screenshot "PK__Employee_..." is the stats for primary key index, which will be created automatically and cannot be turned off. Same with all IX stats, those are stats for non-clustered indexes.

_WA_Sys one is the one created automatically.
Why they are prefix with _WA? Well there is no official statement from Microsoft, so some believe, it is stands for Washington, where Microsoft headquarters are in.

Currently there are no user created stats in there.

If you want to create statistics on a column (which is not normally recommended):
CREATE STATISTICS <<statistics name>>
ON <<table name>> (<<column name>>)
WITH [Options];

CREATE STATISTICS _mpa_Employee_Age
ON dbo.Employee (Age)
WITH FULLSCAN;

If we check the stats for the Employee table now:



Note that the new manually created stats have the "user_created" field set to true.

Update Statistics

SQL Server updates stats automatically. However, there may be occasions where updating statistics manually gives the optimizer a better advantage.

You can update stats using following T-SQL:
UPDATE STATISTICS <<table name>> <<stats name>>
[WITH [FULLSCAN | SAMPLE n PERCENT | SAMPLE n ROWS]];

Examples:
This updates all statistics on a table:
UPDATE STATISTICS dbo.Employee
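You can also update a single statistics object, or control the sample size:

UPDATE STATISTICS dbo.Employee IX_Employee_Age WITH FULLSCAN;
UPDATE STATISTICS dbo.Employee IX_Employee_Age WITH SAMPLE 50 PERCENT;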

Alternatively, you can create a maintenance task to keep statistics up to date.



You can update statistics for all tables in the database using the following SQL:
EXEC sp_updatestats;

Remove Statistics

Though we don't usually want to remove stats, in case it is required:

DROP STATISTICS <<table name>>.<<statistics name>>;

DROP STATISTICS dbo.Employee._mpa_Employee_Age;

03/09/2024

Two Digit Year Cut-off Setting in SQL Server

I recently came across a SQL Server setting which controls how it interprets a year number specified with two digits.

Since a year number contains 4 digits (e.g. 2024), if a user specifies the year with two digits (e.g. 24), SQL Server needs a way to decide how to interpret the 2-digit year as a full 4-digit year. For example, 24 could be the year 1924 or 2024, or even 2124.

The default setting for this is 2049, which means two-digit years are interpreted as years from 1950 to 2049.

00 => Interpret as => 2000

20 => Interpret as => 2020

49 => Interpret as => 2049

50 => Interpret as => 1950

78 => Interpret as => 1978
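A quick way to see the cutoff in action (a sketch; the result also depends on your DATEFORMAT/language settings):

SELECT CONVERT(datetime, '01/01/49');  -- 2049-01-01 with the default cutoff
SELECT CONVERT(datetime, '01/01/50');  -- 1950-01-01 with the default cutoff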


This setting can be changed in Server Properties.

Right click on the SQL Server node and select Properties, then go to the "Advanced" section.



Or using T-SQL:

USE AdventureWorks2022;
GO
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'two digit year cutoff', 2030;
GO
RECONFIGURE;
GO


Note that the setting is in the advanced options section.

The code above sets the two digit year cutoff to 2030, which means years from 1931 to 2030.

It is always recommended to use 4-digit years in your code to avoid ambiguity.

Reference -> Microsoft Books Online


25/08/2024

SSMS Tips and Tricks - Part 01

I have been using SSMS for many years now, but here is a trick I learned only recently.

Let's say you are working in a query window (not the query designer), probably with an existing query, and you need to add a table with a very large number of columns and select all the columns by name (rather than using SELECT *).

What is the best way to do this? You could type the query by hand and write all those column names one by one, but that would be laborious/tedious.

Here is a better approach:

Type the query in SELECT * FROM <<table>> format.

Now select the whole line in the query window and press the Control + Shift + Q key combination.

Voila: the Query Designer window appears with your query in it.

Now, if you just want to add all the columns to the query window, just press OK on the Query Designer window.

The query window's query will be updated to have all the column names:




Or you can further edit the query in the Query Designer window (e.g. add another table), then press the OK button when you finish. Whatever query you design in the designer window will be written into your query window.

That was very convenient and saved me a lot of time last week.

Also, did you know you can right click on a table in the Query Designer and select all the columns in that table?





24/06/2024

APPLY Operator in T-SQL

The APPLY operator was introduced with SQL Server 2005 and is a great help for joining a record set with a table-valued function or table-valued expression.

There are two APPLY operators.

1. CROSS APPLY

2. OUTER APPLY

In simplest terms, CROSS APPLY behaves like an INNER JOIN and OUTER APPLY behaves like a LEFT OUTER JOIN.

Let's consider the following example table structure:

-- Employee Table
CREATE TABLE Employee (
    EmpId INT PRIMARY KEY,
    EmployeeName VARCHAR(100),
    Age INT,
    BranchId INT
);
-- Branch Table
CREATE TABLE Branch (
    BranchId INT PRIMARY KEY,
    BranchName VARCHAR(100)
);

Here is my test data:

Branch Table


Employee Table


I have a table-valued function which returns a branch id and the average age of the employees in that branch.
CREATE FUNCTION GetAverageEmployeeAgeByBranch
(
    @BranchId INT
)
RETURNS TABLE
AS
RETURN
(
    SELECT BranchId, AVG(Age) AS AverageAge
    FROM Employee
    WHERE BranchId = @BranchId
    GROUP BY BranchId
);

I need to show the branch id, branch name and average employee age for that branch in my query. First I used CROSS APPLY in my query:

SELECT b.BranchId, BranchName, e.AverageAge
FROM Branch b CROSS APPLY dbo.GetAverageEmployeeAgeByBranch(b.BranchId) AS e
ORDER BY b.BranchId

Result as follows:

Then I used OUTER APPLY:

SELECT b.BranchId, BranchName, e.AverageAge
FROM Branch b OUTER APPLY dbo.GetAverageEmployeeAgeByBranch(b.BranchId) AS e
ORDER BY b.BranchId

Result as follows:

As you can see, in the first result set SQL Server only returns branches that have employees in them (i.e. like an inner join), but in the second result set it has returned all branches, putting NULL where it cannot find an employee age (like a LEFT OUTER join).

You can also use the APPLY operator with table-valued expressions (i.e. subqueries).

SELECT b.BranchId, b.BranchName, e.EmployeeName
FROM Branch b CROSS APPLY 
(
    SELECT e.BranchId, 
           STRING_AGG(EmployeeName, ';') WITHIN GROUP (ORDER BY e.EmployeeName) AS EmployeeName 
    FROM Employee e 
    WHERE e.BranchId = b.BranchId 
    GROUP BY e.BranchId
) AS e

The above query returns the concatenated employee names for each branch:





03/06/2024

Check() Constraint in SQL Server

The CHECK() constraint in SQL Server is a data validation technique. Using CHECK() constraints, we can minimise, at the database level, data issues that could be caused by bad application code or by incorrect user queries.

Consider a table like below:

CREATE TABLE CourseSchedule (
    courseId int NOT NULL,
    course_year int NOT NULL,
    course_start_date date,
    course_end_date date,
    PRIMARY KEY (courseId, course_year)
)


Let's say we need users (or applications) to insert the year as a 4-digit year, and the year should be a value between 2000 and 3000. Then we can define a check constraint like below:


CREATE TABLE CourseSchedule (
    courseId int NOT NULL,
    course_year int NOT NULL,
    course_start_date date,
    course_end_date date,
    PRIMARY KEY (courseId, course_year),
    CHECK(course_year >= 2000 AND course_year <= 3000)
)


You can give a name to the constraint. This is really helpful when debugging issues, so you know in which table and which column the violation happened.

E.g.

CREATE TABLE CourseSchedule (
    courseId int NOT NULL,
    course_year int NOT NULL,
    course_start_date date,
    course_end_date date,
    PRIMARY KEY (courseId, course_year),
    CONSTRAINT my_constraint1 CHECK(course_year >= 2000 AND course_year <= 3000)
)
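To see the benefit of the name, here is a sketch of what happens when an insert violates the constraint (the exact error wording varies by version):

INSERT INTO CourseSchedule (courseId, course_year, course_start_date, course_end_date)
VALUES (1, 1999, '2024-01-01', '2024-06-30');
-- Msg 547: The INSERT statement conflicted with the CHECK constraint "my_constraint1" ...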


A check constraint can be declared inline with the column if it applies only to that column.

E.g.

CREATE TABLE CourseSchedule (
    courseId int NOT NULL,
    course_year int NOT NULL CHECK(course_year >= 2000 AND course_year <= 3000),
    course_start_date date,
    course_end_date date,
    PRIMARY KEY (courseId, course_year)
)


But if it references multiple columns, it has to be declared on its own line (as a table-level constraint).

E.g.

CREATE TABLE CourseSchedule (
    courseId int NOT NULL,
    course_year int NOT NULL,
    course_start_date date,
    course_end_date date,
    PRIMARY KEY (courseId, course_year),
    CONSTRAINT my_constraint2 CHECK(course_start_date <= course_end_date)
)


If you want to drop a check constraint:

ALTER TABLE CourseSchedule 
DROP CONSTRAINT my_constraint2


02/06/2024

Temporary Stored Procedures

I have worked with stored procedures in SQL Server for more than a decade now, but only now have I learned of the existence of temporary stored procedures. Weird!

Temporary stored procedures work exactly the same way as temporary tables in SQL Server.

So their names start with # for local temporary stored procedures and ## for global temporary stored procedures.

E.g. #thisIsMyLocalTempSp

        ##thisIsMyGlobalTempSp

As with temporary tables, a local temporary stored procedure is only available in the current session, and a global temporary stored procedure is available to all sessions but is dropped when the session that created it closes.

That made me think: does SQL Server allow temporary functions and views too? Unfortunately (or fortunately) my research shows - no.

Note that temporary stored procedures are created in tempdb, so all the issues with tempdb apply to them (just like temp tables).

So what is the use of temporary stored procedures? The main advantage, I think, is code re-use. For example, if you have a very long stored proc that uses a piece of code over and over again (but you really don't want to create it as a permanent stored proc), you can create a temp stored proc and use it within the permanent stored procedure, as sketched below.
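A minimal sketch (the procedure name and body are just illustrative):

-- Local temp procedure: visible only to the current session
CREATE PROCEDURE #LogStep @Message VARCHAR(200)
AS
BEGIN
    PRINT CONCAT(CONVERT(VARCHAR(30), SYSDATETIME(), 126), ' - ', @Message);
END;
GO

EXEC #LogStep 'Starting batch...';
EXEC #LogStep 'Batch complete.';

-- Dropped automatically when the session ends, or drop it explicitly:
DROP PROCEDURE #LogStep;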


29/04/2024

Query Hints -> OPTIMIZE FOR vs RECOMPILE

I have known about the query hint OPTION(RECOMPILE) for a while, and most of what I've heard is that it is bad and should be used with care.

As per my understanding, when OPTION(RECOMPILE) is used within a stored procedure, the plan for that statement is compiled every time it runs (instead of caching the plan in the plan cache for future use). Compiling a plan is known to be very CPU-expensive as it involves many steps, especially for large complex queries, so we should avoid it as much as possible. Using OPTION(RECOMPILE) is a trade-off between plan-compilation CPU time and using a bad/mediocre plan for a set of values.

Recently I came across another related query hint - OPTION(OPTIMIZE FOR ...)

The OPTIMIZE FOR hint can be used in two ways.

OPTIMIZE FOR (@myparam1 = 1, @myparam2 = 2) -> i.e. specify a set of parameters and values to optimize for. SQL Server compiles and caches the plan using the values 1 and 2 for the two parameters above; it doesn't use whatever values came first at run time (like it normally does). By changing the values in the hint, you control which plan gets compiled and cached for the stored procedure.

The other approach is OPTIMIZE FOR UNKNOWN -> in this case SQL Server does not sniff the actual parameter values at all; instead it uses average statistics (the density information), trying to come up with a plan whose cost is about the same every time, whatever values are passed in. This great article by Kendra Little on the BrentOzar website explains it in much more detail.
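A hedged sketch of both forms, assuming a hypothetical dbo.Orders table:

-- Always optimize for a "typical" value, regardless of what is passed in:
CREATE OR ALTER PROCEDURE dbo.GetOrders @CustomerId INT
AS
SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@CustomerId = 42));
GO

-- Ignore the sniffed value entirely and use average statistics instead:
CREATE OR ALTER PROCEDURE dbo.GetOrdersUnknown @CustomerId INT
AS
SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);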

Again, as we find regularly in the performance-tuning world of SQL Server, nothing is fixed; you always need to try and see what is best for each case.

P.S. I'm not a SQL server expert (yet), I'm still learning, so please be kind enough to comment if something wrong. These notes are my own reference mostly.

21/01/2024

Spools In SQL Server

Best and most simple explanation I could find on web for Spool is - "Spool is a physical operator that shows up in the execution plan. Query optimizer uses spools when it thinks that it's better to put data in the temp table rather than hitting a table again and again using seeks and scans" - from this Q&A.

The query optimizer reads and computes data and puts it in a worktable in tempdb when it thinks reading the data from the original table repeatedly costs too much (cost-based optimization).

There are several types of spool operators in an execution plan:

* Table Spools -> Physical Operator

* Index Spools -> Physical Operator

* Row Count Spools -> Physical Operator

* Window Spools -> Logical/Physical Operator

* Eager Spools -> Logical Operator

* Lazy Spools -> Logical Operator

Table Spool - Reads a table and creates a work table that exists only for the lifetime of the query.

Index Spool - Reads table(s) and creates a work table, and also creates a non-clustered index on it. Exists only for the lifetime of the query.

Row Count Spool - Reads a table and returns the count of rows; used when we need to check for the existence of rows rather than the data itself.

Window Spool - Expands each row into the set of window rows associated with it (window functions).

Eager Spool - Scans and stores the entire input (table) as soon as the first row is asked for (an eager operation).

Lazy Spool - Builds the work table in a lazy manner, i.e. a row is stored only when the parent operator asks for it.
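Here is a hedged sketch of one way a spool can appear. Updating an indexed column through that same index is a classic case where the optimizer may add an Eager Spool (Halloween protection); the table below is hypothetical, so check the actual execution plan on your build:

CREATE TABLE dbo.SpoolDemo (Id INT IDENTITY PRIMARY KEY, Val INT);
CREATE INDEX IX_SpoolDemo_Val ON dbo.SpoolDemo (Val);

INSERT INTO dbo.SpoolDemo (Val)
SELECT TOP (1000) ABS(CHECKSUM(NEWID())) % 100 FROM sys.all_objects;

-- The index on Val is both read (WHERE) and written (SET), so the optimizer
-- may insert an Eager Spool to separate the read from the write:
UPDATE dbo.SpoolDemo SET Val = Val + 10 WHERE Val < 50;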

I used the following articles to extract the above for my own knowledge, and recommend reading them in depth if you would like more details:

* https://msbiskills.com/2015/09/02/types-of-spools-in-execution-plan-deep-dive/ -> by Pawan Khowal

* https://learn.microsoft.com/en-us/answers/questions/463552/execution-plan-for-table-spool


15/01/2024

How to get Input Tree on SSMS output on SQL Server

This is my first blog for Year 2024!

Recently I had the opportunity to go through the "Query Optimizer Deep Dive" article series by Paul White.

It is a very good and soundly technical series on query optimization. However, the thing that got most of my attention was the Query Optimizer's input tree and how we can get it output to the SSMS output window.

In SQL Server's optimizer pipeline, the first component is the "Input Tree" (bound logical tree). The input tree makes it easier for the optimizer to understand the query. For example, different developers can write the same query in different ways, but if we break it down into standardized logical steps, it will be much easier for the next phases of the optimizer.

By reading and understanding the input tree (the logical steps), we can also understand how the query is going to be performed by SQL Server, which helps us optimize the query.

So how do we make it appear in the output?

As per the article mentioned above, we can use the undocumented trace flag 8605 for this. If you want the tree output to appear on the SSMS output window, also use trace flag 3604.

Here is my query to test this. It is a simple contact table with first name and surname.

SELECT c.CONTACT_FirstNames, c.CONTACT_Surname FROM CONTACT c
OPTION 
(
    RECOMPILE, 
    QUERYTRACEON 3604,
    QUERYTRACEON 8605
);

And then you can see the following output on the SSMS output window.

*** Converted Tree: ***

    LogOp_Project QCOL: [c].CONTACT_FirstNames QCOL: [c].CONTACT_Surname

        LogOp_Get TBL: CONTACT(alias TBL: c) CONTACT TableID=1668825653 TableReferenceID=0 IsRow: COL: IsBaseRow1000 

        AncOp_PrjList 

*******************

(52624 rows affected)

Completion time: 2024-01-15T09:27:30.1054801+00:00


19/11/2023

Scan vs Seek in SQL Server

Recently I went through Brent Ozar's free "How to Think Like the SQL Server Engine" fundamentals course, and I have to say the content is very good.

Actually, I have listened to the same course on YouTube several times, but this is the first time I have gone through it properly with pen and paper in hand.

I learned a few interesting things, and the one that struck me most is on the ever-popular topic of scans and seeks in SQL Server.

As per Brent, 

SCAN -> is where SQL Server starts reading the object (table) from one end. This can be from the top/beginning or the bottom/end.

SEEK -> is where SQL Server starts reading the object (table) from a specific point.

Generally when we say table scan, we think SQL Server reads the whole table, but in reality it can be different. For example, a query like below:

SELECT TOP 10 * FROM dbo.Users

This will only read about 10 rows from the table, but it is a scan.

On the other hand, a seek can read the entire table. For example, if we ask SQL Server to seek a range that covers the whole table, it will start from a specific point and read through the entire table.
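A small sketch of both cases, assuming a hypothetical dbo.Users table with a clustered index on Id:

SELECT TOP 10 * FROM dbo.Users;          -- a scan, but it reads only ~10 rows
SELECT * FROM dbo.Users WHERE Id > 0;    -- a seek, but it can read the whole table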

Amazing, isn't it?

More details from Brent's site -> https://www.brentozar.com/archive/2019/10/how-to-think-like-the-sql-server-engine-so-index-seeks-are-great-right/


24/10/2023

SARGability in SQL Server

When I first heard about SARGability, it was a funny-sounding word to me and I could never remember its meaning.

So I decided to do some research, study it a bit and write a note about it, so I can remember it.

In simple terms, SARGability is the ability to use search predicates (i.e. what is in the WHERE clause) to seek through an index or a table. High SARGability means your WHERE clause can be used directly to seek through an index or table.

Non-SARGability means the opposite of that, of course. I.e. these predicates will be hard for SQL Server to use in a direct seek operation.
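A quick sketch of the difference, assuming a hypothetical Employee table with an index on HireDate. Wrapping the column in a function hides it from the index, while a plain range on the column can seek:

-- Non-SARGable: the function on the column prevents an index seek
SELECT * FROM dbo.Employee WHERE YEAR(HireDate) = 2024;

-- SARGable: the bare column with a range allows an index seek
SELECT * FROM dbo.Employee
WHERE HireDate >= '2024-01-01' AND HireDate < '2025-01-01';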

Erik Darling has a good article on this -> https://erikdarling.com/sargability-week-what-is-it-anyway/


05/10/2023

When I SET STATISTICS IO ON

SET statements in T-SQL allow the user to get more information or change the behavior of query execution in SSMS.

One of the most common SET statements we use is SET STATISTICS IO ON/OFF.

This shows I/O statistics about the query. This is what we get when we turn STATISTICS IO ON.
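Typical usage (the output appears on the Messages tab in SSMS); any query will do, for example:

SET STATISTICS IO ON;
SELECT * FROM dbo.Employee WHERE Age BETWEEN 30 AND 40;
SET STATISTICS IO OFF;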

Statistics IO result:

---------------------------

Table '<<Table name>>'. -> name of the table we read
Scan count 1, -> number of scans or seeks
logical reads 160, -> reads from RAM
physical reads 0, -> reads from disk
read-ahead reads 0, -> number of pages placed into the cache for the query
lob logical reads 0, -> large objects read from RAM
lob physical reads 0, -> large objects read from disk
lob read-ahead reads 0. -> number of LOB pages placed into the cache for the query


LOB stands for large objects: types such as text, ntext, image, varchar(max), nvarchar(max) and varbinary(max).

LOB data is not stored in the row, but away from the row:

  • If the size is < 8000 bytes, it is stored on row-overflow pages
  • If the size is > 8000 bytes, it is stored on LOB data pages



01/10/2023

Using NOEXPAND hint

NOEXPAND is a T-SQL query hint which we use with indexed views in certain scenarios.

NOEXPAND

Indexed views (or materialized views) are views which store the underlying data in the database, whereas normal views generate data from the underlying query at run time.

However, even though an indexed view stores the actual data for the view, there is a chance SQL Server will still use the underlying query directly to generate the data when the view is used in a query.

Under some circumstances this can cause poor performance. So to improve performance, you can force SQL Server to use the view's stored data by using this query hint.
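A minimal sketch, assuming a hypothetical indexed view dbo.vSalesSummary:

SELECT ProductId, TotalQty
FROM dbo.vSalesSummary WITH (NOEXPAND);  -- use the view's own index instead of expanding to base tables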


Why Can't I Use an Alias Name in the WHERE Clause?

 


Ever wonder why, when you alias columns in the SELECT clause of T-SQL code, you can use those aliases in some clauses but not in others?

Until I knew the truth, I was also confused and frustrated.

This is caused by how the SQL Server query processing engine works. It processes keywords/clauses in a particular order:

  1. FROM
    1. Cross JOIN
    2. ON
  2. WHERE
  3. GROUP BY
    1. CUBE/ROLLUP
  4. HAVING
  5. SELECT
    1. Expressions, aggregates and alias
    2. DISTINCT
    3. TOP
  6. ORDER BY
    1. OFFSET / FETCH
As you can see from the order above, aliases are resolved pretty far down the order, just before the ORDER BY clause. So you can really only use an alias in the ORDER BY clause, as the sketch below shows.
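A quick sketch against a hypothetical Employee table:

-- Fails: WHERE is processed before SELECT, so the alias does not exist yet
SELECT Salary * 12 AS AnnualSalary FROM dbo.Employee WHERE AnnualSalary > 100000;

-- Works: ORDER BY is processed after SELECT, so the alias is visible
SELECT Salary * 12 AS AnnualSalary FROM dbo.Employee ORDER BY AnnualSalary;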

Kind of disappointing, isn't it?
