31/03/2026

IPBan Utility by Digital Ruby

Recently we commissioned a new CMS server for one of our clients. Although it was mostly locked down, we noticed a lot of failed login attempts in Windows Event Viewer.

If you go to Event Viewer -> Windows Logs -> Security and filter for event ID 4625, you will see failed logon attempts (successful logons are logged under event ID 4624). Audit Failure entries are the failed attempts.


Related to that, event ID 4740 (User Account Management) shows whether those failed login attempts caused Windows to lock out an account (in case an attacker identified one of your usernames).
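If you prefer the command line, you can pull these events with PowerShell's built-in Get-WinEvent cmdlet instead of clicking through Event Viewer. A quick sketch (reading the Security log requires an elevated prompt):

```powershell
# Failed logon attempts (event ID 4625) from the last 24 hours
Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4625
    StartTime = (Get-Date).AddDays(-1)
} | Select-Object TimeCreated, Id, Message | Format-List

# Account lockouts (event ID 4740) from the last 24 hours
Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4740
    StartTime = (Get-Date).AddDays(-1)
} | Select-Object TimeCreated, Message | Format-List
```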


So how do you prevent this? The most effective measure is to remove the server from public access altogether. But a server like a CMS server needs to be exposed to the internet. In that case you can lock down RDP and other access methods and open only web traffic. However, there are instances where servers need to stay open for public traffic/access, especially if a third party is working on them continuously.

In that case, you can restrict access to known IP addresses, or use a VPN.

If none of these methods is available to you, there is another option I found: banning the IPs that unauthorized login attempts are coming from. You can do this in your firewall.

The hard part is that attackers change their IPs constantly; when you block one IP, they come from another.

I found an elegant piece of software developed by the Digital Ruby software house, called IPBan. The process is simple: it monitors the event log for failed login attempts, and if the count from a particular IP exceeds the number you specify, it adds a firewall rule to block traffic from that IP address.

IPBan is available for free on GitHub -> https://github.com/digitalruby/ipban

You can download it from here -> https://github.com/DigitalRuby/IPBan/releases

More information about the developer can be found here -> https://www.digitalruby.com/server-software/

If you need the Pro version, this is the place to buy it -> https://ipban.com/products

IPBan works on both Windows and Linux servers.

Installation is easy (run the following in PowerShell):

$ProgressPreference = 'SilentlyContinue'; [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; iex "& { $((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/DigitalRuby/IPBan/master/IPBanCore/Windows/Scripts/install_latest.ps1')) } -startupType 'delayed-auto'"

It installs as a Windows service (IPBan). You can find the program in the default installation location, "C:\Program Files\IPBan".

There is an XML config file in the installation directory where you can change many settings, but the defaults mostly work.

One of the interesting settings is "Whitelist", which lets you specify a comma-separated list of IPs that will never be banned. Make sure to add one of your own servers to it, so that if something goes wrong you can still log on from there.

It adds firewall rules prefixed with "IPBan_", so they are easy to identify. You can change the prefix if you prefer something else.
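To see what IPBan has actually blocked, you can list its firewall rules from PowerShell. A sketch, assuming the default "IPBan_" prefix:

```powershell
# List Windows Firewall rules created by IPBan (default prefix "IPBan_")
Get-NetFirewallRule -DisplayName 'IPBan_*' |
    Select-Object DisplayName, Enabled, Direction, Action
```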

By default a ban is held for one day, but you can change this setting.
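For reference, both of these settings live in the XML config file as appSettings-style keys. A minimal sketch (key names and the BanTime format are what I saw in my copy; verify against your installed version):

```xml
<!-- Excerpt from IPBan's config file (sketch; verify against your version) -->
<configuration>
  <appSettings>
    <!-- Comma-separated IPs that will never be banned; include your own jump server -->
    <add key="Whitelist" value="203.0.113.10" />
    <!-- How long a ban lasts, in DD:HH:MM:SS format (default is one day) -->
    <add key="BanTime" value="01:00:00:00" />
  </appSettings>
</configuration>
```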




21/03/2026

SQL Server SESSION_CONTEXT

Last week I came across an interesting challenge. I was enhancing the auditing framework for an application that used SQL triggers for auditing. All tables in the database (excluding some system tables) had auditing fields such as created date, created by, last modified date and last modified by. Each audited table had a trigger that wrote the values of those fields to a big audit log table. So every time a row was created or updated, the application updated the audit fields and the trigger copied them to the audit log. The issue was: how do you track who deleted a row?

When you delete a row, the application cannot write to those auditing fields. Well, it could, but updating each row just before deleting it would be wasteful: it would add extra writes (plus extra audit entries) just to make the delete auditable. So I had to find a better solution.

My research showed that one way to tackle this is SQL Server's session context.



It is a simple concept introduced in SQL Server 2016: session context is a set of key-value pairs attached to a session.

So within your session you can store session-specific metadata in these pairs and read it back from SQL Server to make decisions based on the metadata specified for the connection. It is like a dictionary attached to your connection.

Session context is stored in memory, so it is fast. It is scoped to the session and therefore isolated from other sessions.

So how to make it work?

First you need to set the values in the context:

EXEC sp_set_session_context
    @key = N'UserId',
    @value = 123;

Then you can use those values throughout your session:

SELECT SESSION_CONTEXT(N'UserId');

If you don't want to make it updatable, you can set the read-only flag:

EXEC sp_set_session_context
    @key = N'UserId',
    @value = 123,
    @read_only = 1;


So how did it help in my situation?

Well, just before the delete, I could set a "UserId" or "OperatingUserId" style value in the session context. Then in the trigger, I can read this value and create the audit log entry for the delete with the correct user ID.
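As a sketch of the idea (the table, column and audit log names here are hypothetical, not from the real application): the delete trigger reads the session context, and the application stamps the session just before deleting.

```sql
-- Trigger side: read the session context and write the audit row (hypothetical schema)
CREATE OR ALTER TRIGGER dbo.trg_Customer_Delete
ON dbo.Customer
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.AuditLog (TableName, RowId, Operation, OperatingUserId, OccurredAt)
    SELECT N'Customer',
           d.CustomerId,
           N'DELETE',
           CAST(SESSION_CONTEXT(N'UserId') AS int),  -- SESSION_CONTEXT returns sql_variant, so cast it
           SYSUTCDATETIME()
    FROM deleted AS d;
END;
GO

-- Application side: stamp the session, then delete
EXEC sp_set_session_context @key = N'UserId', @value = 123;
DELETE FROM dbo.Customer WHERE CustomerId = 42;
```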

You might say, 'Well, isn't it still an additional write, just before the delete?'

Yes and no. It is an update to the session's state, which is held in a small amount of memory, so it does not hit the disk. There is a limitation, though: session context only allows a maximum of 256 key-value pairs, and the total size needs to stay under 1 MB (approx.).

But be aware that it is not a place to hold secrets such as passwords, because if multiple users share the session (for example, through an application), they can see the whole context.

Where the session is shared (as in a multi-user application), it is vital that you set user-ID-like values immediately before the point where they are used; otherwise you may end up using wrong values, because other users may set them too. There can still be concurrency issues, but in my case rows are deleted infrequently, so it was acceptable.

In addition to auditing, you can use it to identify the tenant when your application is multi-tenant, and it can also be used with row-level security (cautiously).

Don't overuse it; use it sparingly.

There are known issues where session context can return wrong results when queries run in parallel. But in my case the delete is always single-threaded.



18/03/2026

Agent Types in VS Code

Recently I have been working in VS Code a lot; in fact, I have been working with GitHub Copilot in VS Code a lot. However, I have to admit I'm not aware of the full capabilities of the agentic framework available in VS Code through GitHub Copilot. I'm just learning as I go.

One of the options you see in the GitHub Copilot window is Agent Type.


There are a few options available:

  • Local
  • Background
  • Cloud
  • Claude (third-party)

I wanted to find out the different capabilities of each agent type. Here are my findings.

Local

Of course, this is the default agent, which I use most of the time; in fact, all the time so far.

The local agent runs within your VS Code environment on your local machine. Therefore it has access to all the resources in VS Code, such as your workspace, files and context (stack traces, unit test results, linting errors, etc.).

It can use all models available in VS Code, and it has access to all agent tools (browsing, MCP and extension-provided tools).

It is mostly interactive: you can chat with it, give real-time feedback and easily steer it in the direction you want if you see it going off track.

That's why the local agent is the one I use most.

Background

Background agents, also known as Copilot CLI sessions, run in the background. Their main feature is that they keep running even after you close VS Code. They are well suited for long-running tasks, for example when you have defined and planned all the steps that need to be taken and you are sure the agent can run independently for a long time without your input.

When these sessions require attention, they notify you in the chat window the same way the local agent does.

There are two isolation levels for these:

  • Worktree -> creates a separate Git worktree and works independently of your main code.
  • Workspace -> the background agent works in the same environment as what you see in VS Code.

There are some limitations with background agents: they cannot access all tools in VS Code, and they don't have access to extension-provided tools. They are also limited to the models the CLI provides.

A background agent can be started using the agent type drop-down:


Then you can select the isolation level in the isolation drop-down:


You can also specify a working folder, so the agent can work independently without interference:



Cloud Agent

Cloud agents, as the name suggests, run in the cloud (on remote infrastructure). The main cloud agent is the GitHub Copilot cloud agent, which runs on GitHub infrastructure (i.e., against your remote GitHub repository).

The GitHub Copilot cloud agent has integrated access to GitHub repositories, therefore it can do most of the actions (if not all) that any GitHub user can do. These include large-scale refactoring, complete feature development, automatic pull request creation and code reviews.

You can also use third-party cloud agents, such as Claude Code and OpenAI's Codex.

You can select a cloud agent from the agent type drop-down:



Then you can select a repository and give instructions to the agent.

SQL Server Quirky Check Constraints

Recently at work, I found a curious table structure: in one of the tables, the primary key had a foreign key constraint. Well you wil...