Tales From A Lazy Fat DBA

It's all about databases, their performance, troubleshooting & much more …. ¯\_(ツ)_/¯

When Linux Swaps Away My Sleep – MySQL, RHEL8, and the Curious Case of High Swap Usage

Posted by FatDBA on December 12, 2025

I remember an old incident where I got an alert that one of our production MySQL servers had suddenly gone sluggish after being moved from RHEL 7 to RHEL 8. On checking, I found something odd … the system was consuming swap heavily, even though there was plenty of physical memory free.

Someone who did the first-time deployment years before had left THP enabled and swappiness at its default … a setup that had worked perfectly for years on RHEL 7, but now, after the upgrade to RHEL 8.10, the behavior was completely different.

This post is about how that small OS level change turned into a real performance headache, and what we found after some deep digging.

The server in question was a MySQL 8.0.43 instance running on a VMware VM with 16 CPUs and 64 GB RAM. When the issue began, users complained that the database was freezing randomly, and monitoring tools were throwing high load average and slow query alerts.

Let’s take a quick look at the environment … it was a pretty decent VM, nothing undersized.

$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.10 (Ootpa)

$ uname -r
4.18.0-553.82.1.el8_10.x86_64

$ uptime
11:20:24 up 3 days, 10:57,  2 users,  load average: 4.34, 3.15, 3.63

$ grep ^CPU\(s\) sos_commands/processor/lscpu
CPU(s): 16

When I pulled the SAR data for that morning, the pattern was clear … there were long stretches where %iowait spiked above 20-25%, and the load average crossed 400+ during peak time! The 09:50 slot looked particularly suspicious .. the load average jumped to 464 and stayed high for several minutes.

09:00:01 %usr=26.08  %iowait=22.78  %idle=46.67
09:40:01 %usr=29.04  %iowait=24.43  %idle=40.11
09:50:01 %usr=7.55   %iowait=10.07  %idle=80.26
10:00:01 %usr=38.53  %iowait=19.54  %idle=35.32

Here’s what the memory and swap stats looked like:

# Memory Utilization
%memused ≈ 99.3%
Free memory ≈ 400 MB (on a 64 GB box)
Swap usage ≈ 85% average, hit 100% at 09:50 AM
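
For reference, these figures come from the standard sysstat archives; a sketch of the commands used to pull them (the saXX file name depends on the day of month, so sa12 here is an assumption):

$ sar -u -f /var/log/sa/sa12     # CPU: %usr / %iowait / %idle
$ sar -r -f /var/log/sa/sa12     # memory: %memused, free memory
$ sar -S -f /var/log/sa/sa12     # swap: %swpused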

That was confusing.. MySQL was not leaking memory, and there was still >10 GB available for cache and buffers. The system was clearly pushing pages to swap even though it didn’t need to. That was the turning point in the investigation.

At the same time, the monitoring agent started reporting MySQL timeouts:

 09:44:09 [mysql] read tcp xxx.xx.xx.xx:xxx->xxx.xxx.xx.xx:xxxx: i/o timeout
 09:44:14 [mysql] read tcp xx.xx.xx.xxxx:xxx->xx.xx.xx.xx.xx:xxx: i/o timeout

And the system kernel logs showed the familiar horror lines for every DBA .. MySQL threads were being stalled by the OS. This aligned perfectly with the time when swap usage peaked.

 09:45:34 kernel: INFO: task mysqld:5352 blocked for more than 120 seconds.
 09:45:34 kernel: INFO: task ib_pg_flush_co:9435 blocked for more than 120 seconds.
 09:45:34 kernel: INFO: task connection:10137 blocked for more than 120 seconds.

I double-checked the swappiness configuration:

$ cat /proc/sys/vm/swappiness
1

So theoretically, swap usage should have been minimal. But the system was still paging aggressively. Then I checked the cgroup configuration (a trick I learned from a Red Hat note) .. and there it was: more than 115 cgroups still using the default value of 60! In RHEL 8, memory management moved more toward cgroup v2, which isolates memory parameters by control group.

So even if /proc/sys/vm/swappiness is set to 1, processes inside those cgroups still follow their own default value (60), which explained why the system was behaving like swappiness=60 even though the global value was 1.

$ find /sys/fs/cgroup/memory/ -name '*swappiness' -exec cat {} \; | uniq -c
      1 1
    115 60

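To see which cgroup MySQL actually lives in and what it inherited, something like the below helps (the service path is an example .. check systemctl status mysqld for the real one):

$ systemctl status mysqld | grep CGroup      # locate MySQL's cgroup path
$ cat /sys/fs/cgroup/memory/system.slice/mysqld.service/memory.swappiness

# A one-off per-cgroup override is possible, but it won't survive a service
# restart, which is why the sysctl-based fix below is the better route:
$ echo 1 > /sys/fs/cgroup/memory/system.slice/mysqld.service/memory.swappiness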

Once the root cause was identified, the fix was straightforward — enforce global swappiness across all cgroups.

Add this to /etc/sysctl.conf:

vm.force_cgroup_v2_swappiness = 1

Then reload:
sysctl -p

This forces the kernel to apply the global swappiness value to all cgroups, ensuring consistent behavior. Next, we handled THP, which is known to cause intermittent fragmentation and stalls in memory-intensive workloads like MySQL, Oracle, and PostgreSQL, and even in non-RDBMS systems like Cassandra: we disabled transparent huge pages and rebooted the host.
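
For reference, the THP change looked roughly like this on RHEL 8 (a sketch; the grubby step is what makes it persistent across reboots):

# Runtime toggle (takes effect immediately, lost on reboot)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Persist via the kernel command line
grubby --update-kernel=ALL --args="transparent_hugepage=never"

# Verify after reboot; expect: always madvise [never]
cat /sys/kernel/mm/transparent_hugepage/enabled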

In short, here's what happened and what the root cause was:

  • RHEL8 introduced a change in how swappiness interacts with cgroups.
  • The old /proc/sys/vm/swappiness setting no longer applies universally.
  • Unless explicitly forced, MySQL’s cgroup keeps the default swappiness (60).
  • Combined with THP and background I/O, this created severe page cache churn.

So the OS upgrade, not MySQL, was the real root cause.

Note: https://access.redhat.com/solutions/6785021

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , , , , , | Leave a Comment »

How Oracle 23ai Fixes Bad Plans Instantly.. A Live Demo of Real-Time SPM

Posted by FatDBA on November 23, 2025

Recently, I did a quick round of testing in my lab with Oracle Database 23ai, focusing on its new SQL Plan Management (SPM) enhancements, and the results were impressive.

In 23ai, SPM can actually detect and fix bad execution plans in real time. It monitors a running query, compares the performance of the newly generated plan against the best ones it has seen before (tracked in the Automatic SQL Tuning Set), and if the new plan performs worse, it automatically switches back to the proven baseline.

No evolve task. No manual verification. No DBA intervention.
This feature, Real-Time SQL Plan Management, was tested on an Oracle 23ai lab build (pre-release environment). While 23ai itself isn’t generally available on-prem (its successor, Oracle 26ai, now carries these capabilities), the behavior shown here is identical in both 23ai and 26ai, and it is already live and enabled by default in Oracle Autonomous Database.
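
For context, on a non-Autonomous build you would switch this behavior on through the standard SPM configuration API. A minimal sketch, assuming the documented AUTO_SPM_EVOLVE_TASK parameter and its AUTO value (ON enables only the scheduled evolve task, OFF disables automatic evolution):

EXEC DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK', 'AUTO');

-- Confirm the setting
SELECT parameter_name, parameter_value
FROM   dba_sql_management_config
WHERE  parameter_name = 'AUTO_SPM_EVOLVE_TASK';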

Let’s see it live!!!

SET ECHO ON FEEDBACK ON HEADING ON LINES 300 PAGES 500 TIMING ON
SET SERVEROUTPUT ON
SET LONG 2000000 LONGCHUNKSIZE 2000000 TRIMSPOOL ON

PROMPT === Create tables
CREATE TABLE fact_sales (
  sales_id     NUMBER GENERATED BY DEFAULT AS IDENTITY,
  prod_id      NUMBER NOT NULL,
  cust_id      NUMBER NOT NULL,
  date_id      DATE   NOT NULL,
  channel_id   NUMBER NOT NULL,
  qty          NUMBER NOT NULL,
  amount       NUMBER NOT NULL
);

CREATE TABLE dim_product  ( prod_id NUMBER PRIMARY KEY, category VARCHAR2(40), brand VARCHAR2(40) );
CREATE TABLE dim_customer ( cust_id NUMBER PRIMARY KEY, region   VARCHAR2(30), segment VARCHAR2(30) );
CREATE TABLE dim_channel  ( channel_id NUMBER PRIMARY KEY, channel_name VARCHAR2(30) );

Table created.
Table created.
Table created.
Table created.



PROMPT === Load dimensions
DECLARE
BEGIN
  FOR i IN 1..100000 LOOP
    INSERT INTO dim_product VALUES(i, CASE WHEN MOD(i,10)=0 THEN 'Premium' ELSE 'Standard' END, 'Brand_'||MOD(i,200));
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_product rows inserted: '||TO_CHAR(100000));

  FOR i IN 1..500000 LOOP
    INSERT INTO dim_customer VALUES(i,
      CASE MOD(i,5) WHEN 0 THEN 'NA' WHEN 1 THEN 'EU' WHEN 2 THEN 'APAC' WHEN 3 THEN 'ME' ELSE 'LATAM' END,
      CASE MOD(i,4) WHEN 0 THEN 'ENT' WHEN 1 THEN 'SMB' WHEN 2 THEN 'MID' ELSE 'CONSUMER' END);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_customer rows inserted: '||TO_CHAR(500000));

  FOR i IN 1..6 LOOP
    INSERT INTO dim_channel VALUES(i, 'CH_'||i);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dim_channel rows inserted: 6');

  COMMIT;
  DBMS_OUTPUT.PUT_LINE('Commit complete.');
END;
/
SHOW ERRORS

dim_product rows inserted: 100000
dim_customer rows inserted: 500000
dim_channel rows inserted: 6
Commit complete.
PL/SQL procedure successfully completed.





PROMPT === Load fact 
DECLARE
  v_total NUMBER := 0;
  v_date  DATE   := DATE '2022-01-01';
BEGIN
  FOR d IN 0..59 LOOP  -- 60 days x 80k = ~4.8M
    INSERT /*+ APPEND */ INTO fact_sales (prod_id,cust_id,date_id,channel_id,qty,amount)
    SELECT MOD(ABS(DBMS_RANDOM.RANDOM),100000)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),500000)+1,
           v_date + d,
           MOD(ABS(DBMS_RANDOM.RANDOM),6)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),10)+1,
           ROUND(DBMS_RANDOM.VALUE(5,500),2)
    FROM dual CONNECT BY LEVEL <= 80000;
    v_total := v_total + SQL%ROWCOUNT;
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('fact_sales rows inserted: '||TO_CHAR(v_total,'999,999,999'));
END;
/
SHOW ERRORS

fact_sales rows inserted:  4,800,000
PL/SQL procedure successfully completed.






PROMPT === Indexes + Stats
CREATE INDEX fs_prod    ON fact_sales(prod_id);
CREATE INDEX fs_cust    ON fact_sales(cust_id);
CREATE INDEX fs_date    ON fact_sales(date_id);
CREATE INDEX fs_channel ON fact_sales(channel_id);

Index created.
Index created.
Index created.
Index created.




EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_PRODUCT',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_CUSTOMER',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'DIM_CHANNEL',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'FACT_SALES',CASCADE=>TRUE);

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

In this first stage of my testing, I created a mini data warehouse environment inside Oracle 23ai to simulate a realistic analytical workload. I built three dimension tables (DIM_PRODUCT, DIM_CUSTOMER, DIM_CHANNEL) and one large fact table (FACT_SALES) containing roughly 4.8 million rows of transactional data spread across 60 business days.

After loading the data, I collected fresh optimizer statistics on all tables and their indexes to ensure that the Oracle Optimizer starts with a clear and accurate view of data distribution before the query and plan testing phase. This baseline setup is essential because, in the next steps, I’ll demonstrate how Real-Time SQL Plan Management (SPM) reacts when execution plans change, both for the better and for the worse.


-- What date range do we have?
SELECT MIN(date_id) AS min_dt, MAX(date_id) AS max_dt
FROM   fact_sales;

MIN_DT    MAX_DT
--------- ---------
01-JAN-22 01-MAR-22



-- Rows per month (quick sanity check)
SELECT TO_CHAR(date_id,'YYYY-MM') month_yyyy_mm, COUNT(*) cnt
FROM   fact_sales
GROUP  BY TO_CHAR(date_id,'YYYY-MM')
ORDER  BY 1;

MONTH_Y        CNT
------- ----------
2022-01    2480000
2022-02    2240000
2022-03      80000

3 rows selected.



-- Let's add the remaining days so Apr to Jun exist (60..180 --> Mar 2 .. Jun 30)
DECLARE
  v_total NUMBER := 0;
  v_date  DATE   := DATE '2022-01-01';
BEGIN
  FOR d IN 60..180 LOOP
    INSERT /*+ APPEND */ INTO fact_sales (prod_id,cust_id,date_id,channel_id,qty,amount)
    SELECT MOD(ABS(DBMS_RANDOM.RANDOM),100000)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),500000)+1,
           v_date + d,
           MOD(ABS(DBMS_RANDOM.RANDOM),6)+1,
           MOD(ABS(DBMS_RANDOM.RANDOM),10)+1,
           ROUND(DBMS_RANDOM.VALUE(5,500),2)
    FROM dual CONNECT BY LEVEL <= 80000;
    v_total := v_total + SQL%ROWCOUNT;
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Additional fact_sales rows inserted: '||TO_CHAR(v_total,'999,999,999'));
END;
/

SELECT MIN(date_id) AS min_dt, MAX(date_id) AS max_dt
FROM   fact_sales;

MIN_DT    MAX_DT
--------- ---------
01-JAN-22 30-JUN-22


SELECT TO_CHAR(date_id,'YYYY-MM') month_yyyy_mm, COUNT(*) cnt
FROM   fact_sales
GROUP  BY TO_CHAR(date_id,'YYYY-MM')
ORDER  BY 1;

MONTH_Y        CNT
------- ----------
2022-01    2480000
2022-02    2240000
2022-03    2480000
2022-04    2400000
2022-05    2480000
2022-06    2400000







VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)

EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

-- Optional: flush shared pool to force a fresh parse
ALTER SYSTEM FLUSH SHARED_POOL;

-- Run the query
SELECT p.category,
       c.region,
       SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id = f.prod_id
JOIN   dim_customer c ON c.cust_id = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4

2 rows selected.

Elapsed: 00:00:21.33


-- 0) (optional but recommended) collect row-source stats
ALTER SESSION SET statistics_level = ALL;

-- 1) run the query with a gather hint (guarantees ALLSTATS LAST works)
VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)
EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

ALTER SYSTEM FLUSH SHARED_POOL;

SELECT /*+ GATHER_PLAN_STATISTICS */
       p.category,
       c.region,
       SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id  = f.prod_id
JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4




-- get the SQL_ID & CHILD_NUMBER of the query you just ran
COLUMN sql_id FORMAT A13
SELECT sql_id, child_number, plan_hash_value, last_active_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ GATHER_PLAN_STATISTICS */%SUM(f.amount)%GROUP  BY p.category, c.region%'
  AND  parsing_schema_name = USER
ORDER  BY last_active_time DESC
FETCH FIRST 1 ROW ONLY;

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE LAST_ACTI
------------- ------------ --------------- ---------
2t8da8zxfzp50            0      1151134654 24-NOV-25


SQL> SELECT *
  2  FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id        => '2t8da8zxfzp50',
         cursor_child_no => 0,            
         format        => 'ALLSTATS LAST +PEEKED_BINDS +PREDICATE +OUTLINE +ALIAS +NOTE +PROJECTION +BYTES +IOSTATS'
));  

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  2t8da8zxfzp50, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        p.category,
c.region,        SUM(f.amount) AS rev FROM   fact_sales f JOIN
dim_product  p ON p.prod_id  = f.prod_id JOIN   dim_customer c ON
c.cust_id  = f.cust_id JOIN   dim_channel  ch ON ch.channel_id =
f.channel_id WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                 AND TO_DATE(:p_to  ,'YYYY-MM-DD')   AND  p.category =
'Premium'   AND  c.region   IN ('NA','EU')   AND  ch.channel_name IN
('CH_1','CH_2','CH_3') GROUP  BY p.category, c.region

Plan hash value: 1151134654

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name         | Starts | E-Rows |E-Bytes| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |              |      1 |        |       |      2 |00:00:05.79 |   59198 |       |       |          |
|   1 |  HASH GROUP BY                          |              |      1 |      2 |   102 |      2 |00:00:05.79 |   59198 |  1200K|  1200K|  654K (0)|
|*  2 |   HASH JOIN                             |              |      1 |  36018 |  1793K|    145K|00:00:05.75 |   59198 |    23M|  4013K|   26M (0)|
|*  3 |    HASH JOIN                            |              |      1 |  36018 |  1477K|    363K|00:00:05.26 |   57865 |  1995K|  1995K| 3065K (0)|
|   4 |     VIEW                                | VW_GBF_17    |      1 |  50000 |   390K|  10000 |00:00:00.01 |     396 |       |       |          |
|*  5 |      FILTER                             |              |      1 |        |       |  10000 |00:00:00.01 |     396 |       |       |          |
|*  6 |       TABLE ACCESS FULL                 | DIM_PRODUCT  |      1 |  50000 |   683K|  10000 |00:00:00.01 |     396 |       |       |          |
|*  7 |     HASH JOIN                           |              |      1 |  40000 |  1328K|   3638K|00:00:04.33 |   57469 |  1922K|  1922K| 1411K (0)|
|*  8 |      TABLE ACCESS FULL                  | DIM_CHANNEL  |      1 |      3 |    24 |      3 |00:00:00.01 |       2 |       |       |          |
|   9 |      TABLE ACCESS BY INDEX ROWID BATCHED| FACT_SALES   |      1 |  80000 |  2031K|   7280K|00:00:02.70 |   57467 |       |       |          |
|* 10 |       INDEX RANGE SCAN                  | FS_DATE      |      1 |  80000 |       |   7280K|00:00:00.97 |   17336 |       |       |          |
|* 11 |    TABLE ACCESS FULL                    | DIM_CUSTOMER |      1 |    200K|  1757K|    200K|00:00:00.07 |    1333 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$B734A6F8
   4 - SEL$FB6C8AF9 / VW_GBF_17@SEL$393E5B1C
   5 - SEL$FB6C8AF9
   6 - SEL$FB6C8AF9 / P@SEL$1
   8 - SEL$B734A6F8 / CH@SEL$3
   9 - SEL$B734A6F8 / F@SEL$1
  10 - SEL$B734A6F8 / F@SEL$1
  11 - SEL$B734A6F8 / C@SEL$2

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$FB6C8AF9")
      ELIM_GROUPBY(@"SEL$2B4A2561")
      OUTLINE_LEAF(@"SEL$B734A6F8")
      PLACE_GROUP_BY(@"SEL$EE94F965" ( "P"@"SEL$1" ) 17)
      OUTLINE(@"SEL$2B4A2561")
      ELIM_GROUPBY(@"SEL$85CD04AE")
      OUTLINE(@"SEL$EE94F965")
      MERGE(@"SEL$9E43CB6E" >"SEL$4")
      OUTLINE(@"SEL$85CD04AE")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$9E43CB6E")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$393E5B1C")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$58A6D7F6")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$1")
      FULL(@"SEL$B734A6F8" "CH"@"SEL$3")
      INDEX_RS_ASC(@"SEL$B734A6F8" "F"@"SEL$1" ("FACT_SALES"."DATE_ID"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$B734A6F8" "F"@"SEL$1")
      NO_ACCESS(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      FULL(@"SEL$B734A6F8" "C"@"SEL$2")
      LEADING(@"SEL$B734A6F8" "CH"@"SEL$3" "F"@"SEL$1" "VW_GBF_17"@"SEL$393E5B1C" "C"@"SEL$2")
      USE_HASH(@"SEL$B734A6F8" "F"@"SEL$1")
      USE_HASH(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      USE_HASH(@"SEL$B734A6F8" "C"@"SEL$2")
      SWAP_JOIN_INPUTS(@"SEL$B734A6F8" "VW_GBF_17"@"SEL$393E5B1C")
      USE_HASH_AGGREGATION(@"SEL$B734A6F8")
      FULL(@"SEL$FB6C8AF9" "P"@"SEL$1")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): '2022-04-01'
   2 - :2 (VARCHAR2(30), CSID=873): '2022-06-30'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C"."CUST_ID"="F"."CUST_ID")
   3 - access("ITEM_1"="F"."PROD_ID")
   5 - filter(TO_DATE(:P_TO,'YYYY-MM-DD')>=TO_DATE(:P_FROM,'YYYY-MM-DD'))
   6 - filter("P"."CATEGORY"='Premium')
   7 - access("CH"."CHANNEL_ID"="F"."CHANNEL_ID")
   8 - filter(("CH"."CHANNEL_NAME"='CH_1' OR "CH"."CHANNEL_NAME"='CH_2' OR "CH"."CHANNEL_NAME"='CH_3'))
  10 - access("F"."DATE_ID">=TO_DATE(:P_FROM,'YYYY-MM-DD') AND "F"."DATE_ID"<=TO_DATE(:P_TO,'YYYY-MM-DD'))
  11 - filter(("C"."REGION"='EU' OR "C"."REGION"='NA'))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "C"."REGION"[VARCHAR2,30], SUM("F"."AMOUNT"*"ITEM_2")[22]
   2 - (#keys=1) "ITEM_2"[NUMBER,2], "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   3 - (#keys=1) "ITEM_2"[NUMBER,2], "F"."AMOUNT"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   4 - (rowset=256) "ITEM_1"[NUMBER,22], "ITEM_2"[NUMBER,2]
   5 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   6 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   7 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22],
       "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   8 - "CH"."CHANNEL_ID"[NUMBER,22]
   9 - "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CHANNEL_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
  10 - "F".ROWID[ROWID,10]
  11 - "C"."CUST_ID"[NUMBER,22], "C"."REGION"[VARCHAR2,30]

Note
-----
   - this is an adaptive plan


122 rows selected.



Here I just set the reporting window using bind variables and flushed the shared pool to make sure Oracle parses the query fresh. The optimizer picked a clean and efficient plan: an index range scan on FS_DATE grabs only the required time slice from FACT_SALES, followed by hash joins to the small dimension tables and a hash group by at the end.

All the filters from the WHERE clause show up correctly in the predicates, and the stats (about 59k buffers, ~5.8 seconds, zero physical reads) confirm it’s a solid in-memory run. With baseline capture enabled, this becomes my “good plan”, the one Real-Time SPM will lock onto and automatically revert to if a slower plan shows up later.
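
A quick way to verify at runtime which cursor is actually pinned to a baseline is the SQL_PLAN_BASELINE column of V$SQL; a small check, using the SQL_ID captured above:

SELECT sql_id, child_number, plan_hash_value, sql_plan_baseline
FROM   v$sql
WHERE  sql_id = '2t8da8zxfzp50';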

VAR p_from VARCHAR2(10)
VAR p_to   VARCHAR2(10)

EXEC :p_from := '2022-04-01';
EXEC :p_to   := '2022-06-30';

-- 1) regress stats on DIM_CUSTOMER
EXEC DBMS_STATS.DELETE_TABLE_STATS(USER,'DIM_CUSTOMER');

PL/SQL procedure successfully completed.


-- 2) force fresh parse
ALTER SYSTEM FLUSH SHARED_POOL;
System altered.


-- 3) run the query (convert binds to DATEs)
SELECT /*+ GATHER_PLAN_STATISTICS */
       p.category, c.region, SUM(f.amount) AS rev
FROM   fact_sales f
JOIN   dim_product  p ON p.prod_id  = f.prod_id
JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id
WHERE  f.date_id BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')
                     AND TO_DATE(:p_to  ,'YYYY-MM-DD')
  AND  p.category = 'Premium'
  AND  c.region   IN ('NA','EU')
  AND  ch.channel_name IN ('CH_1','CH_2','CH_3')
GROUP  BY p.category, c.region;

CATEGOR REGION                                REV
------- ------------------------------ ----------
Premium EU                             18253962.2
Premium NA                             18316364.4



SQL> COLUMN sql_id FORMAT A13
SELECT sql_id, child_number, plan_hash_value, last_active_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ GATHER_PLAN_STATISTICS */%SUM(f.amount)%GROUP  BY p.category, c.region%'
  AND  parsing_schema_name = USER
ORDER  BY last_active_time DESC
FETCH FIRST 1 ROW ONLY;

SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE LAST_ACTI
------------- ------------ --------------- ---------
7v44tqagjad7k            0      2441995847 24-NOV-25


SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id => '7v44tqagjad7k', 
         cursor_child_no => 0,
         format => 'ALLSTATS LAST +PEEKED_BINDS +PREDICATE +OUTLINE +ALIAS +NOTE +PROJECTION +BYTES +IOSTATS'
));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7v44tqagjad7k, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        p.category, c.region,
SUM(f.amount) AS rev FROM   fact_sales f JOIN   dim_product  p ON
p.prod_id  = f.prod_id JOIN   dim_customer c ON c.cust_id  = f.cust_id
JOIN   dim_channel  ch ON ch.channel_id = f.channel_id WHERE  f.date_id
BETWEEN TO_DATE(:p_from,'YYYY-MM-DD')                      AND
TO_DATE(:p_to  ,'YYYY-MM-DD')   AND  p.category = 'Premium'   AND
c.region   IN ('NA','EU')   AND  ch.channel_name IN
('CH_1','CH_2','CH_3') GROUP  BY p.category, c.region

Plan hash value: 2441995847

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name         | Starts | E-Rows |E-Bytes| A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |              |      1 |        |       |      2 |00:00:09.07 |   59199 |  14384 |  14384 |       |       |          |         |
|   1 |  HASH GROUP BY                           |              |      1 |  36018 |  2743K|      2 |00:00:09.07 |   59199 |  14384 |  14384 |  1200K|  1200K|  661K (0)|         |
|*  2 |   FILTER                                 |              |      1 |        |       |    145K|00:00:09.04 |   59199 |  14384 |  14384 |       |       |          |         |
|*  3 |    HASH JOIN                             |              |      1 |  36018 |  2743K|    145K|00:00:09.02 |   59199 |  14384 |  14384 |  2402K|  2402K| 3124K (0)|         |
|*  4 |     TABLE ACCESS FULL                    | DIM_PRODUCT  |      1 |  50000 |   683K|  10000 |00:00:00.01 |     396 |      0 |      0 |       |       |          |         |
|*  5 |     HASH JOIN                            |              |      1 |  40000 |  2500K|   1454K|00:00:08.67 |   58803 |  14384 |  14384 |   234M|    14M|   40M (1)|     116M|
|*  6 |      HASH JOIN                           |              |      1 |  40000 |  1328K|   3638K|00:00:04.60 |   57469 |      0 |      0 |  1922K|  1922K| 1348K (0)|         |
|*  7 |       TABLE ACCESS FULL                  | DIM_CHANNEL  |      1 |      3 |    24 |      3 |00:00:00.01 |       2 |      0 |      0 |       |       |          |         |
|   8 |       TABLE ACCESS BY INDEX ROWID BATCHED| FACT_SALES   |      1 |  80000 |  2031K|   7280K|00:00:02.83 |   57467 |      0 |      0 |       |       |          |         |
|*  9 |        INDEX RANGE SCAN                  | FS_DATE      |      1 |  80000 |       |   7280K|00:00:01.04 |   17336 |      0 |      0 |       |       |          |         |
|* 10 |      TABLE ACCESS FULL                   | DIM_CUSTOMER |      1 |    212K|  6229K|    200K|00:00:00.05 |    1334 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$EE94F965
   4 - SEL$EE94F965 / P@SEL$1
   7 - SEL$EE94F965 / CH@SEL$3
   8 - SEL$EE94F965 / F@SEL$1
   9 - SEL$EE94F965 / F@SEL$1
  10 - SEL$EE94F965 / C@SEL$2

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$EE94F965")
      MERGE(@"SEL$9E43CB6E" >"SEL$4")
      OUTLINE(@"SEL$4")
      OUTLINE(@"SEL$9E43CB6E")
      MERGE(@"SEL$58A6D7F6" >"SEL$3")
      OUTLINE(@"SEL$3")
      OUTLINE(@"SEL$58A6D7F6")
      MERGE(@"SEL$1" >"SEL$2")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$1")
      FULL(@"SEL$EE94F965" "CH"@"SEL$3")
      INDEX_RS_ASC(@"SEL$EE94F965" "F"@"SEL$1" ("FACT_SALES"."DATE_ID"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$EE94F965" "F"@"SEL$1")
      FULL(@"SEL$EE94F965" "C"@"SEL$2")
      FULL(@"SEL$EE94F965" "P"@"SEL$1")
      LEADING(@"SEL$EE94F965" "CH"@"SEL$3" "F"@"SEL$1" "C"@"SEL$2" "P"@"SEL$1")
      USE_HASH(@"SEL$EE94F965" "F"@"SEL$1")
      USE_HASH(@"SEL$EE94F965" "C"@"SEL$2")
      USE_HASH(@"SEL$EE94F965" "P"@"SEL$1")
      SWAP_JOIN_INPUTS(@"SEL$EE94F965" "P"@"SEL$1")
      USE_HASH_AGGREGATION(@"SEL$EE94F965")
      END_OUTLINE_DATA
  */

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): '2022-04-01'
   2 - :2 (VARCHAR2(30), CSID=873): '2022-06-30'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE(:P_TO,'YYYY-MM-DD')>=TO_DATE(:P_FROM,'YYYY-MM-DD'))
   3 - access("P"."PROD_ID"="F"."PROD_ID")
   4 - filter("P"."CATEGORY"='Premium')
   5 - access("C"."CUST_ID"="F"."CUST_ID")
   6 - access("CH"."CHANNEL_ID"="F"."CHANNEL_ID")
   7 - filter(("CH"."CHANNEL_NAME"='CH_1' OR "CH"."CHANNEL_NAME"='CH_2' OR "CH"."CHANNEL_NAME"='CH_3'))
   9 - access("F"."DATE_ID">=TO_DATE(:P_FROM,'YYYY-MM-DD') AND "F"."DATE_ID"<=TO_DATE(:P_TO,'YYYY-MM-DD'))
  10 - filter(("C"."REGION"='EU' OR "C"."REGION"='NA'))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "C"."REGION"[VARCHAR2,30], SUM("F"."AMOUNT")[22]
   2 - "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   3 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30], "C"."REGION"[VARCHAR2,30]
   4 - (rowset=256) "P"."PROD_ID"[NUMBER,22]
   5 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22], "C"."REGION"[VARCHAR2,30],
       "C"."REGION"[VARCHAR2,30]
   6 - (#keys=1) "F"."AMOUNT"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   7 - "CH"."CHANNEL_ID"[NUMBER,22]
   8 - "F"."PROD_ID"[NUMBER,22], "F"."CUST_ID"[NUMBER,22], "F"."CHANNEL_ID"[NUMBER,22], "F"."AMOUNT"[NUMBER,22]
   9 - "F".ROWID[ROWID,10]
  10 - "C"."CUST_ID"[NUMBER,22], "C"."REGION"[VARCHAR2,30]

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - this is an adaptive plan


110 rows selected.







-- Real-Time SPM evidence
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
ALTER SESSION SET optimizer_use_sql_plan_baselines     = TRUE;

COLUMN last_verified            FORMAT A19
COLUMN foreground_last_verified FORMAT A19
COLUMN accepted                 FORMAT A8
COLUMN enabled                  FORMAT A8
COLUMN origin                   FORMAT A20
COLUMN sql_handle               FORMAT A20
COLUMN plan_name                FORMAT A25

SELECT sql_handle,
       plan_name,
       origin,
       accepted,
       enabled,
       TO_CHAR(last_verified,'YYYY-MM-DD HH24:MI:SS')            AS last_verified,
       TO_CHAR(foreground_last_verified,'YYYY-MM-DD HH24:MI:SS') AS fg_verified
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SUM(f.amount)%'
ORDER  BY last_verified DESC;

SQL_HANDLE           PLAN_NAME                 ORIGIN               ACCEPTED ENABLED LAST_VERIFIED        FG_VERIFIED
-------------------- ------------------------- -------------------- -------- ------- -------------------- --------------------
SQL_bf3d9f8a42       SQL_PLAN_2t8da8zxfzp50    AUTO-CAPTURE         YES      YES     2025-11-24 14:06:19  2025-11-24 14:06:19
SQL_bf3d9f8a42       SQL_PLAN_7v44tqagjad7k    FOREGROUND-CAPTURE   NO       NO      2025-11-24 14:06:18  2025-11-24 14:06:18

-- SQL_PLAN_2t8da8zxfzp50 ---> the good baseline plan, automatically captured, accepted and enabled by Real-Time SPM.

-- SQL_PLAN_7v44tqagjad7k ----> the regressed plan, captured during foreground execution but rejected (ACCEPTED=NO) as it underperformed compared to the verified baseline.




In the good baseline plan (SQL_ID 2t8da8zxfzp50, plan hash 1151134654), the optimizer uses an FS_DATE index range scan with batched ROWID fetches from FACT_SALES, then hash-joins (F ↔ VW_GBF_17(P) ↔ C) with a tiny full scan of CH (3 rows), and finishes with a hash group by. It’s fully in-memory: about 59,198 buffers, ~5.79s elapsed, 0 reads, 0 temp, with predicates correctly slicing DATE_ID. This is the efficient baseline. After deleting stats on DIM_CUSTOMER, the regressed plan (SQL_ID 7v44tqagjad7k, plan hash 2441995847) keeps the same general shape and the FS_DATE index range scan, but row-source sizing changes due to misestimates; it still hash-joins and groups, yet now needs ~59,199 buffers, ~9.07s, ~14,384 reads, and ~116 MB of temp (with dynamic sampling at level 2). In other words, the same outline with degraded cardinality estimates drives extra I/O and temp usage, about a 56% slowdown …

The numbers tell the story: Oracle tried the new plan once, learned it was bad, and reverted. This isn’t magic or marketing. It’s just Oracle 23ai doing what every good DBA does: watching, measuring, and reverting fast when things go south. Real-Time SPM uses the same cost and runtime data we always trusted from AWR, but now the optimizer reacts within seconds instead of waiting for us.

On large production systems, it won’t prevent every performance issue, but it dramatically reduces the window where a regression hurts. It’s safe to enable, requires no extra license beyond Enterprise Edition, and it works right out of the box in Autonomous DB.

In Short

  • 23ai on-prem Enterprise Edition and Autonomous DB both support Real-Time SPM.
  • It compares runtime metrics from the Automatic SQL Tuning Set.
  • When a new plan runs slower, Oracle reinstates the previous good baseline automatically.
  • You’ll see it immediately in DBA_SQL_PLAN_BASELINES with ORIGIN = 'FOREGROUND-CAPTURE'.
  • You can watch it happen live through execution plans and elapsed times.

Hope It Helped!
Prashant Dixit

Database Architect @RENAPS
Reach us at : https://renaps.com/

Posted in Uncategorized | Tagged: , , , , , | 1 Comment »

From Painful Manual LOB Shrink to Automatic SecureFiles Shrink

Posted by FatDBA on November 15, 2025

I’ve been working with LOBs for years now, and trust me, shrinking them has always been a headache. Anyone who has ever tried ALTER TABLE … SHRINK SPACE on a big SecureFiles LOB knows the pain … blocking sessions, unexpected waits, and sometimes that lovely ORA-1555 popping up at the worst time. Every DBA eventually gets into that situation where a LOB segment is 200 GB on disk but only 10 GB of real data remains. You delete rows… but the space never comes back unless you manually shrink it, which itself can cause more issues.

With the introduction of Automatic SecureFiles Shrink, Oracle really made a DBA’s life easier. This feature, which first came out in 23ai, quietly frees up unused LOB space in the background without disrupting your workload. I wanted to see how it behaves in a real scenario, so I set up a small lab and tested it out. Here’s the whole experiment, raw and simple.

Let's do a demo and understand how this new feature works … I spun up a fresh PDB and made a small tablespace just for this test. Nothing fancy.

CREATE TABLESPACE lobts
  DATAFILE '/u02/oradata/LOBTS01.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 256M;

Tablespace created.



-- Table with a securefile LOB based column.
CREATE TABLE tst_securefile_lob
(
    id NUMBER,
    lob_data CLOB
)
LOB (lob_data) STORE AS SECUREFILE (
    TABLESPACE lobts
    CACHE
);

Table created.




SELECT table_name, column_name, securefile
FROM   user_lobs
WHERE  table_name='TST_SECUREFILE_LOB';

TABLE_NAME           COLUMN_NAME   SECUREFILE
-------------------  ------------  ----------
TST_SECUREFILE_LOB   LOB_DATA      YES


Next, I inserted a good amount of junk data, around 10,000 rows of random CLOB strings. I wanted the LOB segment to be big enough to show clear differences after the shrink.

BEGIN
  FOR r IN 1 .. 10 LOOP
    INSERT INTO tst_securefile_lob (id, lob_data)
    SELECT r*100000 + level,
           TO_CLOB(DBMS_RANDOM.STRING('x', 32767))
    FROM dual
    CONNECT BY level <= 1000;
    COMMIT;
  END LOOP;
END;
/

PL/SQL procedure successfully completed.



SELECT COUNT(*) FROM tst_securefile_lob;

  COUNT(*)
----------
     10000




SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE  ul.table_name = 'TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$     131072     1024.00




-- After stats and a quick check in USER_SEGMENTS, the LOB segment was showing a nice chunky size. 
-- Then I deleted almost everything
-- Now the table will have very few rows left, but the LOB segment was still the same size. As usual.
DELETE FROM tst_securefile_lob
WHERE id < 900000;
COMMIT;

9990 rows deleted. 
Commit complete.
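
The show_securefile_space call below is a small helper of mine; a minimal sketch of such a procedure, assuming it wraps the SecureFiles overload of DBMS_SPACE.SPACE_USAGE (run with SERVEROUTPUT ON):

CREATE OR REPLACE PROCEDURE show_securefile_space (
  p_owner   IN VARCHAR2,
  p_segment IN VARCHAR2
) AS
  l_seg_blocks    NUMBER;  l_seg_bytes    NUMBER;
  l_used_blocks   NUMBER;  l_used_bytes   NUMBER;
  l_exp_blocks    NUMBER;  l_exp_bytes    NUMBER;
  l_unexp_blocks  NUMBER;  l_unexp_bytes  NUMBER;
BEGIN
  -- SecureFiles-specific overload: reports used vs expired vs unexpired blocks
  DBMS_SPACE.SPACE_USAGE(
    segment_owner       => p_owner,
    segment_name        => p_segment,
    segment_type        => 'LOB',
    segment_size_blocks => l_seg_blocks,
    segment_size_bytes  => l_seg_bytes,
    used_blocks         => l_used_blocks,
    used_bytes          => l_used_bytes,
    expired_blocks      => l_exp_blocks,
    expired_bytes       => l_exp_bytes,
    unexpired_blocks    => l_unexp_blocks,
    unexpired_bytes     => l_unexp_bytes);
  DBMS_OUTPUT.PUT_LINE('Segment blocks      = '||l_seg_blocks  ||'  bytes='||l_seg_bytes);
  DBMS_OUTPUT.PUT_LINE('Used blocks         = '||l_used_blocks ||'  bytes='||l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Expired blocks      = '||l_exp_blocks  ||'  bytes='||l_exp_bytes);
  DBMS_OUTPUT.PUT_LINE('Unexpired blocks    = '||l_unexp_blocks||'  bytes='||l_unexp_bytes);
END;
/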


-- Checking LOB Internal Usage (Before Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 131072  bytes=1073741824
Used blocks         =  10240  bytes=83886080
Expired blocks      = 110592  bytes=905969664
Unexpired blocks    =  10240  bytes=83886080
-- This clearly shows almost the entire segment is expired/free but not reclaimed.


-- Checking Auto Shrink Statistics (Before Enabling)
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE '%Auto SF SHK%'
ORDER  BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               0
Auto SF SHK successful                       0
Auto SF SHK total number of tasks            0

Turning on Automatic Shrink — By default this feature is OFF, so I enabled it:

EXEC DBMS_SPACE.SECUREFILE_SHRINK_ENABLE;

PL/SQL procedure successfully completed.

That’s literally it. No parameters, no tuning, nothing else. Just enable.

Automatic SecureFiles Shrink does not run immediately after you delete data. Oracle requires that a SecureFiles LOB segment be idle for a specific amount of time before it becomes eligible for shrinking, and the default idle-time limit is 1,440 minutes (24 hours). “Idle” means that no DML or preallocation activity has occurred on that LOB during that period. Once the segment meets this condition, Oracle considers it during its automatic background shrink task, which is part of AutoTask and runs every 30 minutes with a defined processing window.

When the task executes, it attempts to shrink eligible segments, but it does so gradually and in small increments using a trickle-based approach, rather than reclaiming all possible space in a single operation. This incremental behavior is deliberate: it reduces impact on running workloads and avoids heavy reorganization all at once. Only segments that meet all selection criteria .. such as having sufficient free space above the defined thresholds and not using RETENTION MAX … are processed. Because of this incremental design and the eligibility rules, the full space reclamation process can span multiple background cycles.

Some of the internal hidden/underscore parameters that control these limits (don't touch them unless Oracle Support asks you to, or you're playing in a lab system):

Parameter                                    Default_Value    Session_Value    Instance_Value   IS_SESSION_MODIFIABLE   IS_SYSTEM_MODIFIABLE
------------------------------------------- ---------------  ---------------- ---------------- ----------------------- ---------------------
_ktsls_autoshrink_seg_idle_seconds            86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_seg_pen_seconds             86400            86400            86400            FALSE                   IMMEDIATE
_ktsls_autoshrink_trickle_mb                  5                5                5                FALSE                   IMMEDIATE
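
For reference, the table above came from the usual hidden-parameter lookup against the X$ fixed views (SYS only, and again .. lab systems please):

SELECT a.ksppinm  AS parameter,
       b.ksppstvl AS value,
       a.ksppdesc AS description
FROM   x$ksppi  a
JOIN   x$ksppsv b ON b.indx = a.indx
WHERE  a.ksppinm LIKE '\_ktsls_autoshrink%' ESCAPE '\'
ORDER  BY a.ksppinm;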

Okay, let's check the post-change outputs and see what automatic LOB shrink did to our test object.

SELECT ul.segment_name,
       us.blocks,
       ROUND(us.bytes/1024/1024,2) AS mb
FROM   user_lobs ul
JOIN   user_segments us
  ON us.segment_name = ul.segment_name
WHERE ul.table_name='TST_SECUREFILE_LOB';

SEGMENT_NAME               BLOCKS     MB
------------------------   --------   --------
SYS_LOB0001234567C00002$      40960      320.00

Note: From ~1024 MB down to ~320 MB. Auto shrink worked     🙂



-- DBMS_SPACE Usage (After Auto Shrink)
EXEC show_securefile_space(USER, 'SYS_LOB0001234567C00002$');
Segment blocks      = 40960  bytes=335544320
Used blocks         =  9216  bytes=75497472
Expired blocks      =  6144  bytes=50331648
Unexpired blocks    =  9216  bytes=75497472

Note: Expired blocks dropped from ~110k to ~6k. This confirms auto shrink freed most of the fragmented space.






-- After the task is run.
SELECT name, value
FROM v$sysstat
WHERE name LIKE '%Auto SF SHK%'
ORDER BY name;

NAME                                     VALUE
--------------------------------------   ------
Auto SF SHK failures                         0
Auto SF SHK segments processed               1
Auto SF SHK successful                       1
Auto SF SHK total number of tasks            1




SELECT owner,
       segment_name,
       shrunk_bytes,
       attempts,
       last_shrink_time
FROM v$securefile_shrink;

OWNER      SEGMENT_NAME               SHRUNK_BYTES    ATTEMPTS   LAST_SHRINK_TIME
---------  ------------------------   -------------   ---------  ---------------------------
PRASHANT   SYS_LOB0001234567C00002$      744947712          1     14-NOV-2025 06:32:15

Note:  Oracle automatically reclaimed ~710 MB of wasted LOB space.

This feature is simple, it's safe, and it saves DBAs from doing manual shrink maintenance again and again. It's not a fast feature; it's slow and polite on purpose, but it works exactly as expected.

If your system has LOBs (EBS attachments, documents, JSON, logs, media files, etc.), you should absolutely enable this. Let Oracle handle the boring part.

Hope It Helped!
Prashant Dixit

Database Architect @RENAPS
Reach us at : https://renaps.com/

Posted in Uncategorized | Tagged: , , , , , , , | Leave a Comment »

A New ACE Chapter…Oracle ACE Apprentice.. and I wasn’t even in Vegas

Posted by FatDBA on October 18, 2025

This year I couldn’t attend Oracle AI World in Las Vegas, and I was honestly feeling a bit bad about missing it, especially when timelines were full of shiny posts, selfies, and of course swag pics 😄

But while going through the updates, I came across something that actually made me happier than any conference badge … Oracle officially launched a new ACE tier called “Oracle ACE Apprentice”, and this is a proper new program, not the old structure.

The idea behind the Apprentice tier is simple but powerful: give people who are learning, sharing, speaking, or contributing (even in small ways) a proper entry ramp into the ACE ecosystem. It’s for early contributors who want to grow into ACE Pro / ACE Director later, but need an official “start of journey” recognition & guidance.

Earlier, people used to wait until they became a heavy community contributor before applying. Now Oracle has created a pathway instead of a gate which I personally think is the most modern and inclusive update they’ve made to the ACE program.

How can someone join?

  • There is a simple pre-registration form for ACE Apprentice
  • You need to use the same email as your Oracle University (certification) account
  • After joining, you continue contributing — blogs, talks, code samples, community help etc.
  • And when your impact grows, you can later nominate yourself for ACE Pro

Very clean and very beginner-friendly. Honestly, this is going to bring a lot more new talent into the ecosystem.

For more details: https://ace.oracle.com/ords/r/ace/oracle-aces/join-ace

I have been in the ACE club for a while now and it’s really nice to see the program evolve with this new energy, especially at the same time Oracle is pushing the AI era forward …. So yeah .. I missed Vegas this year… but I still ended up with good ACE news in the most unexpected way 😎

Looks like next year I’ll need to be there in person again, this time wearing two badges: ACE Pro + ACE Program Fanboy 😂

Hope It Helped!
Prashant Dixit

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

DBMS_ILM HTML Dashboard Script .. The Missing Monitor for Oracle Archiving

Posted by FatDBA on September 15, 2025

Someone on LinkedIn recently asked me if it's possible to collect all environment info, licensing signals (Heat Map / ADO / Advanced Compression), ILM policies, jobs, results, evaluation details, Heat Map status, partition placement and sizes, and archive tablespace usage, and to have a “since last run” growth delta (by snapshotting partition sizes) in a single report. Here is something that I use… This script creates lightweight repo tables on first run, reuses them on subsequent runs to compute deltas, and presents all results in a neat HTML report.

Paste this into a .sql file and run it with SQL*Plus or SQLcl. No lists, just code and the HTML it produces.

-- ilm_monitor_report.sql
-- HTML dashboard for DBMS_ILM monitoring with partition growth deltas.
-- Run as a DBA account with access to DBA_* views.

-- Build the timestamp first: the define below substitutes at parse time,
-- and a leading underscore is not a valid unquoted column alias.
set termout off
column datestamp new_value DATESTAMP
select to_char(sysdate,'YYYYMMDD_HH24MISS') as datestamp from dual;
set termout on

-- ===== User-configurable parameters =====
define REPORT_DIR    = '/u01/app/oracle/admin/reports'
define REPORT_NAME   = 'ilm_report_&&DATESTAMP..html'
define OWNER_LIKE    = '%PSFT%'               -- filter objects by owner (e.g. 'PSFT' or '%')
define TABLE_LIKE    = '%'                    -- filter table name pattern (e.g. 'PAY_CHECKS%' or '%')
define ARCHIVE_TS    = 'ARCHIVE_TS'           -- archive tablespace name to highlight
define DAYS_BACK     = 30                     -- window for AWR/feature usage context
-- =======================================

set termout on pages 0 echo off feedback off verify off trimspool on lines 500 long 100000 longchunksize 100000
set markup html on spool on entmap off preformat off
spool &&REPORT_DIR/&&REPORT_NAME

prompt <style>
prompt body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Arial, sans-serif; margin: 18px; }
prompt h1, h2 { margin-top: 24px; }
prompt table { border-collapse: collapse; width: 100%; margin: 12px 0 24px 0; }
prompt th, td { border: 1px solid #ddd; padding: 6px 8px; vertical-align: top; }
prompt th { background: #f6f8fa; text-align: left; }
prompt .ok { color: #137333; font-weight: 600; }
prompt .warn { color: #b26a00; font-weight: 600; }
prompt .bad { color: #c5221f; font-weight: 600; }
prompt .muted { color: #666; }
prompt .pill { padding: 2px 8px; border-radius: 12px; font-size: 12px; background:#eef2ff; }
prompt .archive { background:#fff7e6; }
prompt </style>

prompt <h1>DBMS_ILM Monitoring & Capacity Report</h1>
prompt <div class="muted">Generated: &&DATESTAMP</div>

prompt <h2>Environment</h2>
WITH env AS (
  SELECT (SELECT name FROM v$database) db_name,
         (SELECT dbid FROM v$database) dbid,
         (SELECT instance_name FROM v$instance) inst,
         (SELECT host_name FROM v$instance) host,
         (SELECT version FROM v$instance) version,
         SYSDATE now_ts
  FROM dual
)
SELECT * FROM env;

prompt <h2>Heat Map & ADO Status</h2>
SELECT parameter_name, parameter_value
FROM   dba_heat_map_parameters
ORDER  BY parameter_name;

prompt <h2>Feature Usage Signals (Licensing awareness)</h2>
SELECT name,
       detected_usages,
       currently_used,
       TO_CHAR(last_usage_date,'YYYY-MM-DD HH24:MI') last_used
FROM   dba_feature_usage_statistics
WHERE  name IN (
  'Heat Map',
  'Automatic Data Optimization',
  'Advanced Compression',
  'Hybrid Columnar Compression'
)
ORDER BY name;

prompt <h2>Objects Under ILM Management (filter: owner like ''&&OWNER_LIKE'' and table like ''&&TABLE_LIKE'')</h2>
SELECT owner, object_name, object_type, ilm_level, enabled
FROM   dba_ilmobjects
WHERE  owner LIKE '&&OWNER_LIKE'
AND    object_name LIKE '&&TABLE_LIKE'
ORDER BY owner, object_name;

prompt <h2>ILM Policies</h2>
SELECT owner,
       object_name,
       policy_name,
       action,
       scope,
       condition,
       enabled,
       TO_CHAR(created,'YYYY-MM-DD HH24:MI') created
FROM   dba_ilm_policies
WHERE  owner LIKE '&&OWNER_LIKE'
AND    object_name LIKE '&&TABLE_LIKE'
ORDER  BY owner, object_name, policy_name;

prompt <h2>ILM Evaluation Details (why policies fired or not)</h2>
SELECT owner, object_name, policy_name,
       evaluation_result,
       evaluation_reason,
       TO_CHAR(evaluation_time,'YYYY-MM-DD HH24:MI') evaluation_time
FROM   dba_ilmevaluationdetails
WHERE  owner LIKE '&&OWNER_LIKE'
AND    object_name LIKE '&&TABLE_LIKE'
ORDER  BY evaluation_time DESC;

prompt <h2>ILM Jobs (background executions)</h2>
SELECT job_id,
       owner,
       object_name,
       policy_name,
       status,
       TO_CHAR(start_time,'YYYY-MM-DD HH24:MI') start_time,
       TO_CHAR(end_time,'YYYY-MM-DD HH24:MI')   end_time,
       message
FROM   dba_ilmjobs
WHERE  owner LIKE '&&OWNER_LIKE'
AND    object_name LIKE '&&TABLE_LIKE'
ORDER  BY start_time DESC;

prompt <h2>ILM Results</h2>
SELECT job_id,
       owner,
       object_name,
       policy_name,
       action,
       result,
       TO_CHAR(action_time,'YYYY-MM-DD HH24:MI') action_time,
       result_details
FROM   dba_ilmresults
WHERE  owner LIKE '&&OWNER_LIKE'
AND    object_name LIKE '&&TABLE_LIKE'
ORDER  BY action_time DESC;

prompt <h2>Heat Map Activity Snapshot (segment-level)</h2>
SELECT owner,
       segment_name,
       segment_type,
       access_count,
       TO_CHAR(last_access,'YYYY-MM-DD HH24:MI') last_access,
       TO_CHAR(last_modification,'YYYY-MM-DD HH24:MI') last_modification
FROM   dba_heat_map_segment
WHERE  owner LIKE '&&OWNER_LIKE'
AND    segment_name LIKE '&&TABLE_LIKE'
ORDER  BY NVL(last_access, last_modification) DESC NULLS LAST;

prompt <h2>Partition Placement & Size (highlighting ARCHIVE tablespace)</h2>
WITH p AS (
  SELECT p.table_owner owner,
         p.table_name,
         p.partition_name,
         p.tablespace_name,
         NVL(s.bytes,0) bytes
  FROM   dba_tab_partitions p
  LEFT   JOIN dba_segments s
         ON s.owner = p.table_owner
        AND s.segment_name = p.table_name
        AND s.partition_name = p.partition_name
        AND s.segment_type LIKE 'TABLE PARTITION%'
  WHERE  p.table_owner LIKE '&&OWNER_LIKE'
  AND    p.table_name  LIKE '&&TABLE_LIKE'
)
SELECT owner,
       table_name,
       partition_name,
       CASE WHEN tablespace_name = '&&ARCHIVE_TS' THEN
         '<span class="pill archive">'||tablespace_name||'</span>'
       ELSE tablespace_name END as tablespace_name,
       ROUND(bytes/1024/1024) AS mb
FROM   p
ORDER  BY owner, table_name, partition_name;

prompt <h2>Archive Tablespace Usage</h2>
WITH t AS (
  SELECT tablespace_name,
         SUM(bytes)/1024/1024 mb_alloc
  FROM   dba_data_files
  GROUP  BY tablespace_name
), f AS (
  SELECT tablespace_name,
         SUM(bytes)/1024/1024 mb_free
  FROM   dba_free_space
  GROUP  BY tablespace_name
)
SELECT t.tablespace_name,
       ROUND(t.mb_alloc,0) AS mb_alloc,
       ROUND(NVL(f.mb_free,0),0) AS mb_free,
       ROUND(t.mb_alloc - NVL(f.mb_free,0),0) AS mb_used,
       CASE WHEN t.tablespace_name = '&&ARCHIVE_TS'
            THEN '<span class="pill archive">archive target</span>'
            ELSE '<span class="muted">—</span>' END as note
FROM   t LEFT JOIN f USING (tablespace_name)
ORDER  BY t.tablespace_name;

prompt <h2>Partition Size Snapshot & Growth Since Last Run</h2>
BEGIN
  EXECUTE IMMEDIATE q'[
    CREATE TABLE ILM_MON_SNAP_PARTS (
      snap_ts         DATE DEFAULT SYSDATE NOT NULL,
      owner           VARCHAR2(128),
      table_name      VARCHAR2(128),
      partition_name  VARCHAR2(128),
      tablespace_name VARCHAR2(128),
      bytes           NUMBER
    )
  ]';
EXCEPTION WHEN OTHERS THEN
  IF SQLCODE != -955 THEN RAISE; END IF;
END;
/

INSERT /*+ append */ INTO ILM_MON_SNAP_PARTS (snap_ts, owner, table_name, partition_name, tablespace_name, bytes)
SELECT SYSDATE,
       p.table_owner,
       p.table_name,
       p.partition_name,
       p.tablespace_name,
       NVL(s.bytes,0)
FROM   dba_tab_partitions p
LEFT   JOIN dba_segments s
       ON s.owner = p.table_owner
      AND s.segment_name = p.table_name
      AND s.partition_name = p.partition_name
      AND s.segment_type LIKE 'TABLE PARTITION%'
WHERE  p.table_owner LIKE '&&OWNER_LIKE'
AND    p.table_name  LIKE '&&TABLE_LIKE';

COMMIT;

WITH latest AS (
  SELECT owner, table_name, partition_name, tablespace_name, bytes,
         snap_ts,
         ROW_NUMBER() OVER (PARTITION BY owner, table_name, partition_name ORDER BY snap_ts DESC) rn
  FROM   ILM_MON_SNAP_PARTS
),
curr AS (
  SELECT * FROM latest WHERE rn = 1
),
prev AS (
  SELECT l1.owner, l1.table_name, l1.partition_name, l1.tablespace_name, l1.bytes bytes_prev, l1.snap_ts snap_prev
  FROM   latest l1
  WHERE  rn = 2
)
SELECT c.owner,
       c.table_name,
       c.partition_name,
       c.tablespace_name,
       ROUND(c.bytes/1024/1024) AS mb_now,
       ROUND(NVL(p.bytes_prev,0)/1024/1024) AS mb_prev,
       ROUND((c.bytes - NVL(p.bytes_prev,0))/1024/1024) AS mb_delta,
       TO_CHAR(NVL(p.snap_prev, c.snap_ts),'YYYY-MM-DD HH24:MI') AS compared_to
FROM   curr c
LEFT   JOIN prev p
       ON p.owner = c.owner
      AND p.table_name = c.table_name
      AND p.partition_name = c.partition_name
ORDER  BY c.owner, c.table_name, c.partition_name;

prompt <h2>Quick Health Flags</h2>
WITH jobs AS (
  SELECT status FROM dba_ilmjobs WHERE start_time > SYSDATE - 7
),
flags AS (
  SELECT
    (SELECT CASE WHEN COUNT(*)=0 THEN 'No recent ILM jobs in last 7 days'
                 WHEN SUM(CASE WHEN status='SUCCEEDED' THEN 1 ELSE 0 END)=0 THEN 'ILM jobs exist but none succeeded'
                 ELSE 'Recent ILM jobs succeeded' END
       FROM jobs) AS job_health,
    (SELECT CASE WHEN (SELECT COUNT(*) FROM dba_ilm_policies WHERE owner LIKE '&&OWNER_LIKE' AND object_name LIKE '&&TABLE_LIKE')=0
                 THEN 'No ILM policies found for current filter'
                 ELSE 'ILM policies present' END
       FROM dual) AS policy_presence,
    (SELECT CASE WHEN (SELECT parameter_value FROM dba_heat_map_parameters WHERE parameter_name='HEAT_MAP') IN ('ON','AUTO')
                 THEN 'Heat Map enabled'
                 ELSE 'Heat Map disabled' END
       FROM dual) AS heat_map_state
  FROM dual
)
SELECT CASE WHEN job_health LIKE 'Recent%' THEN '<span class="ok">'||job_health||'</span>'
            WHEN job_health LIKE 'No recent%' THEN '<span class="warn">'||job_health||'</span>'
            ELSE '<span class="bad">'||job_health||'</span>' END AS job_health,
       CASE WHEN policy_presence LIKE 'ILM policies present' THEN '<span class="ok">'||policy_presence||'</span>'
            ELSE '<span class="warn">'||policy_presence||'</span>' END AS policy_presence,
       CASE WHEN heat_map_state = 'Heat Map enabled' THEN '<span class="ok">'||heat_map_state||'</span>'
            ELSE '<span class="bad">'||heat_map_state||'</span>' END AS heat_map_state
FROM   flags;

prompt <h2>Notes</h2>
prompt <p class="muted">
This report is filtered by owner like '<b>&&OWNER_LIKE</b>' and table like '<b>&&TABLE_LIKE</b>'.
Relocation of partitions using DBMS_ILM MOVE is included with Enterprise Edition. Any compression shown by feature-usage signals
may require Advanced Compression Option; Hybrid Columnar Compression is limited to engineered systems. Verify with your Oracle rep.
</p>

spool off
set markup html off
prompt Report written to &&REPORT_DIR/&&REPORT_NAME

Below is a sample run of the report, executed on one of my local test databases where I had ILM configured.

This approach keeps everything lightweight and DBA-friendly. The first run creates a small snapshot table ILM_MON_SNAP_PARTS. Each subsequent run captures current partition sizes and shows the delta since the prior run, so you can literally see ILM moves and compression effects reflected in size changes and archive tablespace growth.
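One housekeeping note: the snapshot table grows by one row per partition per run. A minimal purge sketch, using the same ILM_MON_SNAP_PARTS table and SNAP_TS column the report reads (retention of 90 days is just an example):

-- Age out snapshots older than 90 days (adjust retention to taste)
DELETE FROM ilm_mon_snap_parts
WHERE  snap_ts < SYSDATE - 90;
COMMIT;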

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


Stop Writing Purge Scripts. DBMS_ILM Is the Hidden Hero You Ignored

Posted by FatDBA on September 14, 2025

Enterprise databases accumulate vast amounts of historical data over time: journal lines in Oracle E-Business Suite (EBS), payroll checks in PeopleSoft HCM, closed service requests in CRM, or audit logs in custom ERP systems. Left unchecked, this data creates storage bloat, longer backup windows, degraded query performance, and costly migrations. Traditionally, DBAs wrote custom purge scripts to handle “cold” data. But Oracle introduced a smarter, automated way: DBMS_ILM. With DBMS_ILM and the broader Information Lifecycle Management (ILM) framework, you can define policies that automatically compress, move, or archive data as it ages, turning Oracle into a self-managing database.

Historical Timeline

  • Oracle 11g (2007–2012)
    • ILM was more of a concept than a feature.
    • DBMS_ILM package existed but was very limited, focusing mainly on manual compression/move operations.
  • Oracle 12c Release 1 (2013)
    • Heat Map introduced (tracks row/segment usage).
    • Automatic Data Optimization (ADO) became available.
    • DBMS_ILM matured to support policy-driven movement and compression.
    • Licensing Note: Compression features required Advanced Compression Option (ACO).
  • Oracle 12c Release 2 (2016)
    • Enhanced policy management, subpartition-level actions.
    • Support for Hybrid Columnar Compression (HCC) in engineered systems.
  • Oracle 18c / 19c (2018–2019)
    • DBMS_ILM + ADO stabilized as enterprise-ready.
    • Background ILM jobs integrated with DBMS_SCHEDULER.
    • Adopted widely in ERP/CRM archiving projects.
  • Oracle 21c / 23c (2021–2023)
    • Minor refinements but ILM model remained stable.
    • Greater focus on integration with Exadata and Cloud tiering.
    • In 23ai (23c), DBMS_ILM is still the core API for lifecycle policies, with Oracle pushing more cloud-managed automation in Autonomous DB.

Licensing is often misunderstood, so let’s break it down clearly:

Feature                                               | Oracle Version | License Requirement
Heat Map (track row/segment usage)                    | 12c+           | Enterprise Edition (included)
Basic ILM actions (MOVE partitions, mark READ ONLY)   | 12c+           | Enterprise Edition (included)
Compression policies (OLTP, Advanced Row Compression) | 12c+           | Enterprise Edition + Advanced Compression Option (ACO)
Hybrid Columnar Compression (HCC)                     | 11g+           | Exadata, ZFS, or ACFS + ACO
Automatic Tiering to external storage                 | 12c+           | ILM Pack / Engineered Systems
Autonomous DB ILM policies                            | 19c+           | Included in Autonomous service pricing

Golden Rule:

  • Moving cold partitions to an archive tablespace = FREE (EE)
  • Compressing cold partitions = PAID (ACO)
  • Using Exadata HCC or tiering = Requires Engineered System licensing

Always validate with:

SELECT name, detected_usages, currently_used, last_usage_date
FROM   dba_feature_usage_statistics
WHERE  name LIKE '%Compression%' OR name LIKE '%Heat Map%' OR name LIKE '%ILM%';

This query helps confirm whether your test triggered paid options.

DBMS_ILM Components

  • Heat Map
    • Tracks data access (reads/writes).
    • Provides metrics for ILM decisions.
  • Automatic Data Optimization (ADO)
    • Executes policies when conditions are met.
    • Works on segments, partitions, or rows.
  • DBMS_ILM Package
    • API to create, enable, execute, and monitor ILM policies.
    • Provides manual overrides for testing.

Use Cases (ERP/CRM/PeopleSoft)

  1. PeopleSoft Payroll (PAY_CHECKS)
    • Compress after 2 years.
    • Move to ARCHIVE tablespace after 7 years.
  2. Oracle EBS GL (GL_JE_LINES)
    • Move journals > 10 years old.
    • Keep recent 3 years online.
  3. CRM Tickets
    • Close/compress tickets > 5 years.
    • Keep only 2 years active for customer service SLAs.
  4. Custom Banking ERP (TXN_AUDIT_LOG)
    • Auto-compress audit logs after 36 months.
    • Mark READ ONLY after 7 years. (Both are sketched as ADO policies right after this list.)
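To make use case 4 concrete, here is a hedged sketch of both rules as ADO policies (assumptions: TXN_AUDIT_LOG is range-partitioned by date, the 12.2+ ILM ADD POLICY DDL is available, and Advanced Compression is licensed for the compression part):

-- Auto-compress segments untouched for 36 months (requires ACO)
ALTER TABLE txn_audit_log ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 36 MONTHS OF NO MODIFICATION;

-- Mark segments read only after 7 years (EE-included action, 12.2+)
ALTER TABLE txn_audit_log ILM ADD POLICY
  READ ONLY SEGMENT
  AFTER 84 MONTHS OF NO MODIFICATION;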

Detailed Demo

Step 1: Enable Heat Map

ALTER SYSTEM SET HEAT_MAP=ON SCOPE=BOTH;
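A quick way to confirm the setting took effect (heat_map is a regular init parameter, so v$parameter shows it):

SELECT name, value FROM v$parameter WHERE name = 'heat_map';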

Step 2: Create Archive Tablespace

CREATE TABLESPACE ARCHIVE_TS 
DATAFILE '/u01/app/oracle/oradata/ARCHIVE01.dbf' SIZE 20G AUTOEXTEND ON;

Step 3: Create Partitioned Table

CREATE TABLE psft.pay_checks
(
  check_id    NUMBER,
  emp_id      NUMBER,
  amount      NUMBER,
  check_date  DATE
)
PARTITION BY RANGE (check_date)
(
  PARTITION p2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','DD-MON-YYYY')),
  PARTITION p2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY')),
  PARTITION p2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','DD-MON-YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

Step 4: Insert Data

INSERT INTO psft.pay_checks
SELECT level, mod(level,1000), dbms_random.value(1000,5000),
       ADD_MONTHS(SYSDATE, -level)
FROM dual CONNECT BY level <= 200000;
COMMIT;

Step 5: Add Policy

ADO policies are attached with ALTER TABLE DDL rather than a packaged call (Oracle assigns the policy name itself, e.g. P1). The policy below tiers partitions to ARCHIVE_TS once they turn 7 years old:

ALTER TABLE psft.pay_checks ILM ADD POLICY
  TIER TO archive_ts
  SEGMENT AFTER 84 MONTHS OF CREATION;   -- 84 months = 7 years (12.2+ syntax)

Step 6: Execute Policy

SET SERVEROUTPUT ON
DECLARE
  l_task_id NUMBER;   -- EXECUTE_ILM returns the evaluation task id via this OUT parameter
BEGIN
  DBMS_ILM.EXECUTE_ILM(
    owner       => 'PSFT',
    object_name => 'PAY_CHECKS',
    task_id     => l_task_id);
  DBMS_OUTPUT.PUT_LINE('ILM task: ' || l_task_id);
END;
/
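If you would rather see what ADO intends to do before letting it loose, DBMS_ILM.PREVIEW_ILM creates an evaluation-only task. A minimal sketch (task_id is again an OUT parameter):

SET SERVEROUTPUT ON
DECLARE
  l_task_id NUMBER;
BEGIN
  -- Evaluate all ADO policies in the current schema without executing them
  DBMS_ILM.PREVIEW_ILM(task_id => l_task_id, ilm_scope => DBMS_ILM.SCOPE_SCHEMA);
  DBMS_OUTPUT.PUT_LINE('Preview task: ' || l_task_id);
END;
/

The evaluation details for that task_id then show up in DBA_ILMEVALUATIONDETAILS.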

Monitoring ILM

-- Check policies attached to the table
SELECT o.policy_name, d.action_type, d.tier_tablespace, d.condition_type, d.condition_days, o.enabled
FROM   dba_ilmobjects o
JOIN   dba_ilmdatamovementpolicies d ON d.policy_name = o.policy_name
WHERE  o.object_owner = 'PSFT' AND o.object_name = 'PAY_CHECKS';

-- Monitor evaluation tasks
SELECT task_id, state, creation_time, start_time, completion_time
FROM   dba_ilmtasks
ORDER  BY task_id DESC;

-- Check per-job results
SELECT task_id, job_name, job_state, start_time, completion_time
FROM   dba_ilmresults
ORDER  BY task_id DESC;

Verify Partition Location

SELECT table_name, partition_name, tablespace_name
FROM   dba_tab_partitions
WHERE  table_name='PAY_CHECKS';

Heat Map Access Tracking

SELECT owner, object_name, subobject_name,
       segment_write_time, segment_read_time, full_scan, lookup_scan
FROM   dba_heat_map_segment
WHERE  object_name = 'PAY_CHECKS';

Troubleshooting

Issue                | Likely Cause        | Fix
Policy doesn’t fire  | Heat Map off        | ALTER SYSTEM SET HEAT_MAP=ON
Partitions not moved | Not partitioned     | Partition by DATE
ORA-4031 errors      | Memory pressure     | Tune ILM job resource usage
Compression errors   | Missing ACO license | Remove compression actions

Best Practices

  1. Always partition large tables by date before using ILM.
  2. Run in DEV and validate with ERP/CRM reports before PROD rollout.
  3. Align with compliance/legal retention rules (SOX, GDPR, HIPAA).
  4. Schedule ILM jobs in maintenance windows.
  5. Monitor dba_feature_usage_statistics to avoid license violations.
  6. Combine ILM with purging strategies for truly obsolete data.

Conclusion

Introduced in Oracle 11g as a concept and matured in 12c, DBMS_ILM is now a stable enterprise feature that brings policy-driven automation to Oracle databases.

  • For PeopleSoft Payroll, it can transparently archive checks older than 7 years.
  • For EBS Financials, it can compress historical journal entries.
  • For CRM logs, it can move cold data to cheaper storage.

When deployed with proper licensing (EE + ACO where needed), DBMS_ILM transforms data management from ad-hoc DBA scripts into a compliant, auditable, and automated lifecycle system.

It is not just a DBA feature… it’s a data governance framework, critical for ERP/CRM modernization and cloud migrations.

If you want to test it, start by enabling Heat Map in your DEV/TEST system, track your largest ERP/CRM tables for 30 days, then design your first MOVE-only ILM policy (no compression). Once comfortable, expand to compression (if licensed) and tiering strategies.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


Early Access, Cool Swag, and the Best Community Ever – The Oracle ACE Story

Posted by FatDBA on August 11, 2025

I was inducted into the Oracle ACE Program back in 2021, and honestly, it has been one of the most rewarding chapters of my career. Over the years, I’ve had the chance to connect directly with Oracle teams, work with passionate community members, and be part of projects that pushed me to learn, share, and grow.

Last year in Las Vegas, at Oracle CloudWorld 2024, I met so many of these like-minded Oracle professionals in person. It was incredible. The hallway conversations, the deep-dive technical sessions, the after-hours chats over coffee — those are moments you simply can’t replicate online. The energy of being surrounded by people who live and breathe Oracle technology was something else entirely.

Sadly, I’ll be missing Oracle CloudWorld this year because of other travel commitments 😦 It’s a bit of a heartbreaker, but it also makes me appreciate just how special these opportunities are when you do get them.

That said, my journey as an Oracle ACE has been about far more than conferences. From early access to technology, to direct collaboration with Oracle’s product teams, to the friendships formed with other ACEs — the benefits go well beyond the badge on my LinkedIn profile … I also have to give a huge shout-out to the Oracle ACE Program team – Jen Nicholson, Oana Bonu, and Cassandra Call — for the constant encouragement, guidance, and for keeping the ACE community so well-connected and engaged. Their support plays a massive role in making the program such a great experience.


So, you have probably seen someone proudly showing off that little red Oracle ACE badge on their LinkedIn profile.
It looks cool, but what does it actually mean? Is it just a bragging sticker or is there something real behind it?

It turns out it is a whole lot more than a badge.
It feels like joining an invite-only club where the currency is knowledge, the meetings happen in Slack channels, and the secret handshake is swapping tuning tips over coffee at a conference.

One of my favorite perks is the Beta Program eligibility. This is where you get to play with Oracle’s new features before the rest of the world even knows they exist. Imagine Oracle telling you, “Here is a new 23ai feature, go ahead and test it, break it, tell us what you think.” You run your tests, push it to the limit, maybe even crash it completely in your lab and then send feedback straight to the people building it. It is like being a database taste tester. Sometimes it is exciting, sometimes it is messy, and sometimes you discover something so odd you are half-expecting them to name the bug after you.

Then there are the five thousand dollars in Oracle Cloud credits. This is one of the most underrated perks. You basically get your own sandbox to build whatever you want. It could be Autonomous Databases, test clusters, proof-of-concepts, machine learning experiments, or even a fully populated database of every coffee type known to humankind just to test JSON search performance. You can go wild without worrying about the cost or your manager asking why the cloud bill suddenly tripled.

And of course, Oracle CloudWorld. As an ACE, you can get passes to attend. This is where it feels like Disneyland for Oracle professionals. You get hands-on labs, deep dives into technology you have never even heard of, and those hallway conversations that somehow end up teaching you more than any formal session. You meet engineers who wrote the features you use every day, swap ideas with other ACEs, and sometimes find yourself in late-night debates about why people still write SELECT * in production code.

Another important benefit is the direct connection to Oracle Product Management. It does not mean they will fix your SR in two minutes, but it does mean you can talk directly to the people who design and own the products. Got a question about hybrid partitioned tables or JSON indexing or Vector Search/Indexing or …. ? You can skip the long escalation chain and ask the folks who built it. It is like having a hotline for database issues, except your contact is probably in a polo shirt and speaks PL/SQL.

There is also the social media boost. If you are blogging, making YouTube tutorials, or speaking at events, Oracle helps amplify your work. They will share your content, feature it in newsletters, and suddenly your small weekend blog post is getting attention from DBAs all over the world. You wake up to dozens of comments and connection requests from people who just read your take on optimizer statistics.

Now, I would be lying if I did not mention the swag. Hoodies, jackets, mugs, backpacks. And not the cheap conference giveaway kind. This is the sort of gear that makes colleagues raise an eyebrow and ask, “Where did you get that?” The ACE hoodie is an instant conversation starter at events. I have met people simply because they spotted it from across the room and came over to talk.

If you speak at conferences, there is also travel support. This can be a game changer, especially for events far from home. If you have ever wanted to present to a large audience but travel costs made you think twice, this can tip the balance and make it possible.

The biggest thing for me though is the community. You are surrounded by people who love this stuff as much as you do. People who understand why you are excited about a new optimizer hint. People who send you screenshots of strange execution plans just for fun. People who will happily spend part of their evening helping you troubleshoot a tricky replication setup because they have been there before. It is not just networking. It is friendship, mentorship, and constant learning all rolled into one.

Being an Oracle ACE comes with many perks such as early access to features, cloud credits, event passes, direct connections to product management, social media amplification, great swag, and travel support. The real magic is in the access, the learning, and the people you meet along the way.

If you enjoy sharing your knowledge, writing, presenting, or helping other Oracle users, keep doing it. One day that email saying “Congratulations, you are an Oracle ACE” might land in your inbox. That is when the real adventure starts.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


MySQL OPTIMIZE TABLE – Disk Space Reclaim Defragmentation and Common Myths

Posted by FatDBA on August 8, 2025

When working with MySQL databases, one common task is reclaiming disk space and defragmenting tables. The typical solution that most of us have turned to is OPTIMIZE TABLE. While this sounds like a simple, quick fix, there are a few myths and things we often overlook that can lead to confusion. Let’s break it down.

The Basics: OPTIMIZE TABLE and ALTER TABLE

To reclaim space or defragment a table in MySQL, the go-to commands are usually:

  • OPTIMIZE TABLE <table_name>; (or several at once: OPTIMIZE TABLE t1, t2;)
  • From the shell: mysqlcheck -o <schema> <table> -u <username> -p (prompts for the password)

But before we dive into the myths, let’s clarify what happens when you run these commands.

OPTIMIZE TABLE Overview

  • OPTIMIZE TABLE is essentially a shorthand for ALTER TABLE <table_name> ENGINE=InnoDB for InnoDB tables (see the sketch after this list). It works by rebuilding the table, compacting data, and reclaiming unused space.
  • In MySQL 5.6.17 and later, the command works online, meaning it allows concurrent reads and writes during the rebuild, with some exceptions (brief locking during initial and final stages). Prior to 5.6.17, the table was locked for the entire duration of the operation, causing application downtime.
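A minimal sketch of that equivalence for an InnoDB table. The FORCE variant lets you pin the algorithm and lock level explicitly, so the statement errors out immediately instead of silently falling back to a blocking rebuild:

-- What OPTIMIZE TABLE effectively does for InnoDB:
ALTER TABLE mydb.my_large_table ENGINE=InnoDB;

-- Same rebuild, but explicit about staying online (fails fast if it can't):
ALTER TABLE mydb.my_large_table FORCE, ALGORITHM=INPLACE, LOCK=NONE;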

Myth #1: OPTIMIZE TABLE Is Always Quick

  • No: OPTIMIZE TABLE can indeed take a long time for large tables, especially if there are a lot of inserts, deletes, or updates. This is true when rebuilding the table. For larger datasets, the I/O load can be significant.
mysql> OPTIMIZE TABLE my_large_table;
+---------------------+----------+----------+-------------------------------------------------------------------+
| Table               | Op       | Msg_type | Msg_text                                                          |
+---------------------+----------+----------+-------------------------------------------------------------------+
| mydb.my_large_table | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| mydb.my_large_table | optimize | status   | OK                                                                |
+---------------------+----------+----------+-------------------------------------------------------------------+

In the output, the values under each column heading would show:

  • Table: The table that was optimized (e.g., yourdb.customers).
  • Op: The operation performed (optimize).
  • Msg_type: One of status, error, info, note, or warning.
  • Msg_text: The informational message, such as OK or “Table is already up to date”.

If the table is already optimized or doesn’t require optimization, the output might look like this:

+------------------+----------+----------+-----------------------------+
| Table            | Op       | Msg_type | Msg_text                    |
+------------------+----------+----------+-----------------------------+
| yourdb.customers | optimize | note     | Table is already up to date |
+------------------+----------+----------+-----------------------------+


Real-Time Example Validation:

MySQL logs can show something like this:

 [Note] InnoDB: Starting online optimize table my_large_table
 [Note] InnoDB: Table optimized successfully

However, for larger tables, it is critical to consider the additional I/O load during the rebuild. For example:

 [Note] InnoDB: Rebuilding index my_large_table_idx
 [Note] InnoDB: Table rebuild completed in 300 seconds

Note: For more detail, it’s good to check SHOW PROCESSLIST or the slow query log (if enabled) while the rebuild runs.

Myth #2: OPTIMIZE TABLE Doesn’t Block Other Operations

  • Yes/No: This myth is partly true and partly false depending on the MySQL version.
  • For MySQL 5.6.16 and earlier (including 5.5): concurrent reads are allowed, but writes are blocked for the whole operation.
  • For MySQL 5.6.17 and later: concurrent reads and writes are allowed during the rebuild, but the table is still briefly locked during the initial and final phases. That brief lock at the start is the part that is often overlooked.

Real-Time Example for MySQL 5.6.17+:

[Note] InnoDB: Starting online optimize table my_large_table
[Note] InnoDB: Table optimized successfully

Although reads and writes are allowed during this process, you might still experience short bursts of lock at the start and end of the operation.

Myth #3: You Don’t Need to Worry About Disk Space

  • No: You need sufficient disk space before running OPTIMIZE TABLE. If you’re running low on space, you could encounter errors or performance issues during the rebuild process.
  • A few known bugs can also surface when disk space is insufficient. Remember that the rebuild itself needs temporary disk space; running OPTIMIZE TABLE without enough of it can fail partway through and leave issues down the line.

Best Practice:
Ensure that your disk has at least as much free space as the table you’re optimizing, as a copy of the table is created temporarily during the rebuild.
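A rough pre-flight check (a sketch; the dictionary numbers are approximations, and 'mydb' / 'my_large_table' are placeholders): compare the table footprint against your free disk space. A large DATA_FREE is itself a hint of reclaimable, fragmented space:

SELECT table_name,
       ROUND((data_length + index_length)/1024/1024) AS table_mb,
       ROUND(data_free/1024/1024)                    AS reclaimable_mb
FROM   information_schema.tables
WHERE  table_schema = 'mydb'
AND    table_name   = 'my_large_table';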

Myth #4: ALTER TABLE with Row Format Is Always the Solution

  • No: ALTER TABLE ... ROW_FORMAT=COMPRESSED or other formats can help optimize space, but it may not always result in savings, especially for certain data types (like BLOBs or large text fields). It can also introduce overhead on the CPU if you’re using compression.

In some cases, switching to a compressed format can actually increase the size of the table, depending on the type of data stored.

Real-Time Example:

  • For a table like customer_data:
    ALTER TABLE customer_data ROW_FORMAT=COMPRESSED;
    Depending on the types of columns and data (e.g., BLOBs or TEXT), compression might not always yield the expected results.

Myth #5: You Only Need to Optimize Tables When They Get Slow

  • No: This is another common misconception. Regular optimization is crucial to ensure long-term performance, especially for heavily modified tables. Tables that undergo a lot of updates or deletions can become fragmented over time, even without obvious performance degradation.

Optimizing periodically can help prevent gradual performance loss.

Real-Time Example:

  • If you have an orders table:
    mysql> OPTIMIZE TABLE orders;
    Over time, especially with frequent UPDATE or DELETE operations, fragmentation can slow down access, even if it’s not immediately noticeable.

Main Pointers ..

  • OPTIMIZE TABLE is a helpful tool but not a one-size-fits-all solution.
  • It requires sufficient disk space and careful consideration of your MySQL version and storage engine (InnoDB vs. MyISAM).
  • In MySQL 5.6.17 and later, online optimizations are possible, but brief locking still occurs during the process.
  • For MyISAM tables, there’s no escaping the full lock during optimization.
  • Always assess the potential overhead (I/O and CPU usage) before running the operation, especially on larger datasets.

By breaking these myths, you can make better decisions when using OPTIMIZE TABLE to keep your database healthy without causing unnecessary downtime or performance hits.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


Oracle AWR Scripts Decoded .. No More Guessing!

Posted by FatDBA on July 29, 2025

Recently, someone asked me why there are so many AWR-like files in the directory and whether they are as useful as the well-known awrrpt.sql. I took the opportunity to explain what I knew about them and their purpose. Since I thought it could be helpful, I decided to share this insight with my readers as well.

If you’re into performance tuning in Oracle, very likely you’ve already used AWR reports. But then you open this directory: $ORACLE_HOME/rdbms/admin … and boom – you’re hit with a list of cryptic scripts: awrrpt.sql, awrgdrpi.sql, awrsqrpt.sql, awrextr.sql

What do they all do?
When should you use which one?
Why are they named like 90s DOS files 🙂 ?

Let’s keep it short and sharp. Here’s your point-to-point breakdown of the most important AWR scripts.

Before you go running any of these scripts – make sure you have the Oracle Diagnostic Pack license.
AWR stuff is not free.

I grouped them into logical chunks – reports, comparisons, SQLs, data movement, etc.

Performance Reports

These are the most common AWR reports you run to analyze performance between 2 snapshots.

Script       | What it does
awrrpt.sql   | Generates AWR report for the current instance (for single-instance DBs)
awrrpti.sql  | Same as above, but lets you select another DBID or instance (useful for RAC)
awrgrpt.sql  | AWR Global Report – gives a full RAC-wide view
awrgrpti.sql | Same as above, but lets you pick another DBID/instance

Example:
You’re troubleshooting high CPU on node 2 of your RAC? Use awrrpti.sql.
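Running any of them is the same drill from SQL*Plus (the ? shorthand expands to $ORACLE_HOME):

SQL> @?/rdbms/admin/awrrpti.sql
-- You'll be prompted for report type (html/text), DBID, instance number,
-- the snapshot range, and a report file name.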

Comparison Reports

These help you compare two different time ranges – maybe before and after a patch, or different load periods.

Script       | What it does
awrddrpt.sql | Compares two AWR snapshot ranges (diff report) – for a single instance
awrddrpi.sql | Same as above, for another DBID/instance
awrgdrpt.sql | Global RAC diff report (current RAC)
awrgdrpi.sql | Global RAC diff report (another DBID/instance)

Use these when you wanna say, “Hey, this new code made the DB slower… prove it!”

SQL Reports

Want to see what a particular SQL is doing? These are your tools.

Script       | What it does
awrsqrpt.sql | SQL report for a specific SQL_ID in the current instance
awrsqrpi.sql | Same thing but lets you pick another DBID/instance

You’ll be surprised how useful this is when hunting bad queries.

Data Movement Scripts

Sometimes, you need to take AWR data from one system and analyze it somewhere else (like test or dev).

Script      | What it does
awrextr.sql | Exports AWR data using Data Pump
awrload.sql | Imports AWR data using Data Pump

This is actually gold when working on performance issues across environments.
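The round trip looks like this (both scripts are interactive; awrload stages the data through a temporary schema it creates for you):

-- On the source database: dump a snapshot range to a Data Pump file
SQL> @?/rdbms/admin/awrextr.sql

-- Copy the dump file over, then on the target database:
SQL> @?/rdbms/admin/awrload.sql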

Helper / Utility Scripts

These are mostly helper scripts to define input or make reports more automated or interactive.

Script         | What it does
perfhubrpt.sql | Generates a fancy interactive Performance Hub report
awrinpnm.sql   | Input name helper for AWR
awrinput.sql   | Gets inputs before running AWR reports
awrddinp.sql   | Input helper for diff reports
awrgdinp.sql   | Input helper for RAC diff reports

What’s with these weird script names?

Yeah, all these awrsqrpi.sql, awrgdrpt.sql, etc. – they look like random garbage at first.
But there’s actually some logic.

Here’s how to decode them:

Abbreviation | Means
awr          | Automatic Workload Repository
rpt or rp    | Report
i            | Lets you select a specific instance or DBID
g            | Global report for RAC
d or dd      | Diff reports (comparing two snapshot ranges)
sq           | SQL
inp          | Input helper

So awrsqrpi.sql = AWR SQL Report for a different instance
And awrgdrpi.sql = AWR Global Diff Report for another DBID/instance

So Which Script Should I Use?

Here’s a quick cheat sheet:

Task                                | Script
Normal AWR report (single instance) | awrrpt.sql
AWR report for RAC (global view)    | awrgrpt.sql
SQL performance report              | awrsqrpt.sql
Compare two AWR periods             | awrddrpt.sql
Export/import AWR data              | awrextr.sql and awrload.sql

If you’re doing anything with RAC – prefer the ones with g in them.
If you’re automating – use the *inp*.sql files.
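For example, awrrpt.sql runs non-interactively if you pre-define the answers it would otherwise prompt for (a common trick; the snap IDs below are placeholders .. pick real ones from dba_hist_snapshot):

define  report_type = 'html'
define  num_days    = 1
define  begin_snap  = 1234
define  end_snap    = 1236
define  report_name = '/tmp/awr_1234_1236.html'
@?/rdbms/admin/awrrpt.sql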

Final Thoughts

Yes, the names are ugly.
Yes, the syntax is old-school.
But honestly? These AWR scripts are still some of the best tools you have for DB performance analysis.

Just remember:

  • Don’t use them without a valid license
  • Learn the naming pattern once – and it gets way easier
  • Practice running different ones on test systems

And next time someone complains, “The database is slow” … you know exactly which script to run.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/


fatdba explores Vector Search in Oracle 23ai

Posted by FatDBA on July 23, 2025

So Oracle rolled out 23ai a while back and, like every major release, it came packed with some really cool and interesting features. One that definitely caught my eye was Vector Search. I couldn’t resist diving in, and after exploring it in depth recently, I’d like to share a few thoughts on the subject.

You see, we’ve been doing LIKE '%tax policy%' since forever. But now, Oracle’s SQL has become more powerful. Not only does it match words … it matches meaning.

So here’s me trying to explain what vector search is, how Oracle does it, why you’d care, and some examples that’ll hopefully make it click.

What’s Vector Search, Anyway?

Alright, imagine this:

You have a table of products. You search for “lightweight laptop for travel”.
Some entries say “ultrabook”, others say “portable notebook”, and none mention “lightweight” or “travel”. Old-school SQL would’ve said: “No Matches Found”

But with vector search, it gets it. Oracle turns all that text into math .. basically, a long list of numbers called a vector … and compares meanings instead of words.

So What’s a Vector?

When we say “vector” in vector search, we’re not talking about geometry class. In the world of AI and databases, a vector is just a long list of numbers … each number representing some aspect or feature of the original input (like a sentence, product description, image, etc.).

Here’s a basic example:
[0.12, -0.45, 0.88, …, 0.03]
This is a vector … maybe a 512 or 1536-dimension one .. depending on the embedding model used (like OpenAI, Oracle’s built-in model, Cohere, etc.).

Each number in this list is abstract, but together they represent the essence or meaning of your data.

Let’s say you have these two phrases:
“Apple is a tech company”
“iPhone maker based in California”

Now, even though they don’t share many words, they mean nearly the same thing. When passed through an embedding model, both phrases are converted into vectors:

Vector A: [0.21, -0.32, 0.76, …, 0.02]
Vector B: [0.22, -0.30, 0.74, …, 0.01]

They look very close … and that’s exactly the point.
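You can see this in plain SQL. A toy sketch with 3-dimension vector literals (real embeddings have hundreds of dimensions, but the math is the same):

SELECT VECTOR_DISTANCE(
         TO_VECTOR('[0.21, -0.32, 0.76]'),
         TO_VECTOR('[0.22, -0.30, 0.74]'),
         COSINE) AS dist
FROM   dual;
-- A result close to 0 means "nearly the same meaning" under cosine distance.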

What Oracle 23ai Gives You

  • A new VECTOR datatype (yeah!)
  • VECTOR_EMBEDDING() to convert text into vectors with an in-database ONNX embedding model
  • CREATE VECTOR INDEX to make search blazing fast
  • VECTOR_DISTANCE() to measure similarity
  • It’s all native in SQL .. no need for another vector DB bolted on

Let’s Build Something Step-by-Step

We’ll build a simple product table and do a vector search on it.

Step 1: Create the table

CREATE TABLE products (
  product_id     NUMBER PRIMARY KEY,
  product_name   VARCHAR2(100),
  description    VARCHAR2(1000),
  embedding      VECTOR(1536)
);

1536? That’s the dimension count of the embedding model you use (1536 is typical of OpenAI-style models). Declare whatever dimension your model actually produces.
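One thing the examples below assume: an embedding model has been imported into the database. With Oracle’s DBMS_VECTOR that looks roughly like this (the directory, file, and model names here are my placeholders):

BEGIN
  -- MODEL_DIR is a DB directory object pointing at the .onnx file on disk
  DBMS_VECTOR.LOAD_ONNX_MODEL(
    directory  => 'MODEL_DIR',
    file_name  => 'all_MiniLM_L12_v2.onnx',
    model_name => 'DOC_MODEL');
END;
/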

Step 2: Generate vector embeddings

UPDATE products
SET embedding = VECTOR_EMBEDDING(doc_model USING description AS data);

This takes each description, passes it through the embedding model loaded in the database, and stores the resulting vector. Magic.

Step 3: Create the vector index

CREATE VECTOR INDEX product_vec_idx
ON products (embedding)
ORGANIZATION NEIGHBOR PARTITIONS
DISTANCE COSINE;

This speeds up the similarity comparisons … much like an index does for normal WHERE clauses.

Step 4: Semantic Search in SQL

SELECT product_id, product_name,
       VECTOR_DISTANCE(embedding,
                       VECTOR_EMBEDDING(doc_model USING 'light laptop for designers' AS data),
                       COSINE) AS score
FROM products
ORDER BY score
FETCH FIRST 5 ROWS ONLY;

Now we’re searching for meaning, not words.

VECTOR_DISTANCE Breakdown

You can pick the math behind the scenes via the optional third argument:

VECTOR_DISTANCE(v1, v2, COSINE)
VECTOR_DISTANCE(v1, v2, EUCLIDEAN)
VECTOR_DISTANCE(v1, v2, DOT)

Cosine is the usual go-to for text. Oracle handles the rest for you.

Use Cases You’ll Actually Care About

1. Semantic Product Search — “Fast shoes for runners” => shows “Nike Vaporfly”, even if it doesn’t say “fast”.

2. Similar Document Retrieval — Find all NDAs that look like this one (even with totally different words).

3. Customer Ticket Suggestion — Auto-suggest resolutions from past tickets. Saves your support team hours.

4. Content Recommendation — “People who read this also read…” kind of stuff. Easy to build now.

5. Risk or Fraud Pattern Matching — Find transactions that feel like fraud ..even if the details don’t match 1:1.

I know it might sound a little confusing .. let’s do one more example: Legal Document Matching

CREATE TABLE legal_docs (
  doc_id       NUMBER PRIMARY KEY,
  title        VARCHAR2(255),
  content      CLOB,
  content_vec  VECTOR(1536)
);

Update vectors:

UPDATE legal_docs
SET content_vec = VECTOR_EMBEDDING(doc_model USING content AS data);

Now find similar docs:

SELECT doc_id, title
FROM legal_docs
ORDER BY VECTOR_DISTANCE(content_vec,
                         VECTOR_EMBEDDING(doc_model USING 'confidentiality in government contracts' AS data),
                         COSINE)
FETCH FIRST 10 ROWS ONLY;

That’s it. You’re officially building an AI-powered legal search engine.

Things to Know

  • Creating vectors can be heavy .. batch it.
  • Indexing speeds up similarity search a lot.
  • Combine with normal filters for best results:
SELECT * FROM products
WHERE category = 'laptop'   -- assumes a category column on top of the demo table
ORDER BY VECTOR_DISTANCE(embedding,
                         VECTOR_EMBEDDING(doc_model USING 'gaming laptop under 1kg' AS data),
                         COSINE)
FETCH FIRST 5 ROWS ONLY;

Final Thoughts from fatdba

I’m honestly impressed. Oracle took something that felt like ML black magic and put it right in SQL. No external service. No complicated setups. Just regular SQL, but smarter.

Hope It Helped!
Prashant Dixit
Database Architect @RENAPS
Reach us at : https://renaps.com/
