From AWR to Answers: Building a Local RAG Assistant with Ollama and Oracle AI Database 26ai

Oracle’s database instrumentation generates a huge amount of performance telemetry, yet AWR reports can still feel like a wall of text when you just want clear, targeted answers. In this post, I show how I turned a 24-hour AWR report into an interactive performance tuning assistant using a local LLM (via Ollama on Windows 11) and Oracle AI Database 26ai running inside a VirtualBox VM.

This experiment was inspired by Lorenzo De Marchis’ “From theory to practice: building a RAG pipeline with LangChain and Oracle AI Database 26ai in Python,” where Oracle AI Database 26ai acts as the vector store behind a RAG pipeline. I kept the same core idea (Oracle as the vector database, Python as the glue) and most of the code, but adapted it to a hybrid lab setup with Ollama, VirtualBox, and an AWR-specific use case. It was also inspired by several customers who, during last year’s Oracle AI World, mentioned they upload AWR reports to online LLMs to ask questions about them.



Rather than manually scrolling through an AWR HTML report, I can now ask questions like “Summarize the top performance issues in this 24‑hour period” and let the assistant answer using only the information embedded from that report.


Architecture overview

At a high level, the architecture combines a Windows host, a VirtualBox VM, and Oracle 26ai’s native vector search.

  • Windows 11 gaming PC
    Runs Ollama as the local inference server. I chose this PC because it has an NVIDIA RTX 3060 GPU, which returns results faster.
  • VirtualBox VM running Oracle Linux 9
    Hosts Oracle AI Database 26ai, which stores the AWR report chunks, embedding vectors, and metadata in regular tables plus a vector index. It also hosts the Python application that orchestrates retrieval and generation.
  • Python RAG layer
    Connects to Oracle AI Database 26ai via the Python LangChain driver, performs similarity search using SQL over the vector column, and calls Ollama’s HTTP API to generate grounded answers.

Conceptually, the design mirrors Lorenzo’s rag.py module: the database is responsible for similarity search, and the language model is responsible for natural‑language generation. This keeps data, embeddings, and retrieval inside Oracle, while the LLM remains fully local on your own hardware.

This is very important if you want to prevent your data from leaving your premises or want to avoid per-token fees.


Why AWR plus a local RAG assistant?

AWR reports are extremely rich: they summarize instance statistics, wait events, time model data, and “Top SQL” sections between two snapshot IDs. They are necessary for serious performance diagnostics, but they are not always friendly to read under time pressure.

A local RAG assistant addresses three pain points:

  • Navigating complexity: Instead of remembering exactly which section covers which metric, you can ask natural questions such as “Which wait events dominate DB time?” or “Do we have CPU or I/O bottlenecks?” and let retrieval pick the right section.
  • Keeping data on‑premises: AWR content is operationally sensitive. By running Oracle AI Database 26ai in a VirtualBox VM and Ollama on a local PC, you keep all data and prompts within your controlled environment.
  • Experimenting with models: Ollama makes it easy to try different LLMs (Llama 3, Mistral, etc.) against the same AWR corpus by changing a model name, without rebuilding your pipeline.

The result is not meant to replace expert tuning but to act as a fast‑feedback assistant that helps you spot patterns and narrow down questions more quickly.


Preparing the AWR data

Generating a 24‑hour AWR report

First, I collected the baseline AWR report from the test database. Oracle documents the standard scripts for generating AWR reports, such as awrgrpt.sql and awrrpt.sql, which can output HTML or text. However, I’m using the DBMS_WORKLOAD_REPOSITORY package to generate the report in text format.

From SQL*Plus on the test database:

SQL> SELECT * FROM TABLE(DBMS_WORKLOAD_REPOSITORY.awr_report_text(db_id, inst_id, start_snap, end_snap));

This produces a text output that I can easily spool into a file like awr_123_145.txt that contains all the usual sections (Instance Efficiency, Load Profile, Top Timed Events, SQL ordered by Elapsed Time, and so on).
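If you prefer scripting the extraction instead of spooling from SQL*Plus, the same DBMS_WORKLOAD_REPOSITORY call can be driven from Python. This is a sketch of my own, assuming the python-oracledb driver; the connection details and snapshot IDs are placeholders for your environment:

```python
def rows_to_text(rows):
    """Join the one-column rows returned by awr_report_text into one string."""
    return "\n".join(r[0] if r[0] is not None else "" for r in rows)

def spool_awr(user, password, dsn, db_id, inst_id, start_snap, end_snap, outfile):
    import oracledb  # third-party driver: pip install oracledb
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        cur = conn.cursor()
        # Same query as the SQL*Plus example above, with bind variables.
        cur.execute(
            """SELECT output FROM TABLE(
                 DBMS_WORKLOAD_REPOSITORY.awr_report_text(:1, :2, :3, :4))""",
            [db_id, inst_id, start_snap, end_snap],
        )
        with open(outfile, "w") as f:
            f.write(rows_to_text(cur.fetchall()))
```

The helper keeps the row-joining logic separate so the report text lands in a file exactly as SQL*Plus would spool it.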

I then copy this TXT file into a working directory accessible from the machine where I run the Python ingestion script (in my case, the Linux host where I also run Oracle and the Python RAG code).


Ingesting and chunking the AWR report with Python and LangChain

Next, we use Lorenzo’s Python ingest.py to:

  1. Parse the AWR text file.
  2. Split it into meaningful chunks.
  3. Generate embeddings for each chunk.
  4. Insert the chunks and embeddings into the Oracle AI Database using the Oracle Python driver.

I also needed to modify ingest.py because the Ollama LLM is hosted on a different machine. So, I installed the Ollama client locally on the Linux host and then set the OLLAMA_HOST environment variable so it connects to my AIServer host (Windows 11).

OLLAMA_HOST=http://AIServer:11434
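The same override can be expressed in the Python layer. A minimal sketch (the model name llama3 is just an example) of resolving the remote host and calling Ollama’s /api/generate endpoint without streaming:

```python
import json
import os
import urllib.request

def ollama_url(host=None, path="/api/generate"):
    # Fall back to Ollama's default local endpoint when OLLAMA_HOST is unset.
    host = host or os.environ.get("OLLAMA_HOST", "http://localhost:11434")
    return host.rstrip("/") + path

def generate(prompt, model="llama3", host=None):
    # Non-streaming call: Ollama returns one JSON object with a "response" key.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        ollama_url(host), data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With OLLAMA_HOST set as above, every call from the Linux VM lands on the Windows GPU box.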

In a production‑quality version, you would parse the report header to extract snapshot timestamps and normalize them into TIMESTAMP columns. You might also refine the chunking strategy to align with specific AWR sections such as “SQL ordered by Elapsed Time” or “Top Timed Events” for more precise retrieval.
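A section-aware chunker along those lines can be sketched as follows; the section titles listed are illustrative, not an exhaustive inventory of AWR sections:

```python
import re

# Illustrative subset of AWR section titles to split on.
AWR_SECTIONS = [
    "Load Profile",
    "Instance Efficiency",
    "Top Timed Events",
    "SQL ordered by Elapsed Time",
    "Wait Events",
]

def chunk_awr(text, sections=AWR_SECTIONS):
    """Split an AWR text report so each chunk starts at a known section title."""
    pattern = "|".join(re.escape(s) for s in sections)
    # Zero-width split: keep each section title attached to the text after it.
    parts = re.split(rf"(?m)^(?={pattern})", text)
    return [p.strip() for p in parts if p.strip()]
```

Because each chunk carries its section title, similarity search over the embeddings tends to return the whole relevant section rather than an arbitrary slice of the report.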


Building the retrieval query in Oracle AI Database 26ai

At query time, I use Lorenzo’s rag.py script. I just pass the prompt and receive the output from the LLM using RAG augmentation from the Oracle database.

$ python3.11 -m app.rag "Summarize the main performance issues highlighted in this AWR"
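Under the hood, this style of retrieval boils down to: embed the question, fetch the nearest chunks from Oracle, and wrap them into a grounded prompt for the LLM. A sketch of the prompt-assembly step (the template wording is my own, not taken from Lorenzo’s rag.py):

```python
def build_grounded_prompt(question, chunks):
    """Restrict the model to the retrieved AWR chunks so answers stay grounded."""
    context = "\n\n---\n\n".join(chunks)
    return (
        "Answer using ONLY the AWR report excerpts below. "
        "If the excerpts do not contain the answer, say so.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
```

This explicit “only the excerpts” instruction is what keeps the assistant from drifting into generic tuning advice.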

Example questions for the AWR assistant

Once the pipeline is in place, you can treat the AWR report as a conversational knowledge base. Here are some practical queries that work well:

  • “Summarize the main performance issues highlighted in this AWR.”
  • “Which wait events contributed most to DB time, and during which hours were they highest?”
  • “List the top 5 SQL statements by elapsed time and explain in plain language what they are doing.”
  • “Does the report suggest CPU bottlenecks, I/O bottlenecks, or something else?”
  • “Based on this report, what are three concrete tuning actions I should consider first?”

A typical interaction might look like this (paraphrased):

  • Question: “What are the top three tuning actions suggested by this AWR?”
  • Assistant: “The top three tuning actions suggested by this AWR are:
  • Investigate the cause for high “Disk file I/O Calibration” waits.
  • Investigate the cause for high “Disk file I/O Calibration” waits in Module “DBMS_SCHEDULER”.
  • Investigate the cause for high “Disk file I/O Calibration” waits in Service “SYS$USERS”.”

Because the assistant is restricted to the retrieved chunks, its answers remain grounded in actual AWR sections rather than hallucinated generic advice.


Here’s the final prompt after tuning it several times.


You are an expert Oracle Database Administrator with deep expertise in performance analysis and AWR report interpretation. Your task is to thoroughly analyze the provided AWR report and deliver a comprehensive performance analysis.

ANALYSIS REQUIREMENTS:

  1. TOP EVENTS IDENTIFICATION:
  • Extract and rank the top 10 wait events from the AWR report
  • Identify the top 5 SQL statements by elapsed time and CPU time
  • Highlight any significant changes in system metrics compared to previous snapshots
  2. ROOT CAUSE ANALYSIS:
  • For each top event, explain:
    • What type of wait event it is (CPU, I/O, Latch, Lock, etc.)
    • Why it’s occurring (resource contention, inefficient queries, configuration issues)
    • The impact on database performance
  • Analyze the correlation between events and SQL statements
  3. DETAILED RECOMMENDATIONS:
  • Provide specific, actionable steps to resolve each issue
  • Include SQL optimization suggestions for top SQL statements
  • Recommend database configuration changes if needed
  • Suggest hardware or infrastructure improvements if applicable
  • Include monitoring recommendations to prevent recurrence
  4. REPORT STRUCTURE:
  • Executive summary of key findings
  • Detailed event analysis with root cause explanation
  • Prioritized remediation steps
  • Performance improvement estimates
  • Follow-up actions and monitoring requirements

OUTPUT FORMAT:

  • Clear, technical language appropriate for DBA audience
  • Include specific Oracle commands where applicable
  • Provide justification for each recommendation
  • Highlight critical issues requiring immediate attention

INPUT: AWR data is already ingested. Use RAG.

Provide a comprehensive analysis that enables the DBA to quickly identify, understand, and resolve performance bottlenecks.


**Performance Analysis Report**

**Top Events Identification**

1. Top 10 wait events:
        * db file sequential read (14.34%)
        * db file scattered read (12.55%)
        * latch: cache buffers chains (8.42%)
        * latch: enqueue (5.67%)
        * CPU time (4.22%)
        * latch: shared pool (3.91%)
        * wait on library cache (3.56%)
        * enq: TX - row lock contention (2.95%)
        * latch: redo allocation (2.73%)
        * log file sync (2.46%)
2. Top 5 SQL statements by elapsed time:
        * SQL_ID "r6swge3rhwsa3" (96% of database time)
        * SQL_ID "ws4dr3ag24rfg" (100% of database time)
        * SQL_ID "5tm1ruvkx3wsx" (4% of database time)
        * SQL_ID "tf4dus7dsfgh" (2% of database time)
        * SQL_ID "gv3dsh4jklmn" (1% of database time)
3. Significant changes in system metrics:
        * Total database time was 7 seconds.
        * Average number of active sessions was 0.

**Root Cause Analysis**

1. db file sequential read (14.34%):
        * This wait event indicates that the database is spending a significant amount of time waiting for sequential reads from disk.
        * Possible causes:
                + Inefficient queries or indexes.
                + Insufficient disk space or slow disk performance.
                + Configuration issues (e.g., inadequate block size).
        * Impact on performance: High wait times can lead to increased response times and decreased throughput.
2. db file scattered read (12.55%):
        * This wait event indicates that the database is spending a significant amount of time waiting for scattered reads from disk.
        * Possible causes:
                + Inefficient queries or indexes.
                + Insufficient disk space or slow disk performance.
                + Configuration issues (e.g., inadequate block size).
        * Impact on performance: High wait times can lead to increased response times and decreased throughput.
3. latch: cache buffers chains (8.42%):
        * This wait event indicates that the database is spending a significant amount of time waiting for latches on the cache buffers chains.
        * Possible causes:
                + Insufficient cache size or inadequate memory allocation.
                + Inefficient queries or indexes.
                + Configuration issues (e.g., inadequate latch size).
        * Impact on performance: High wait times can lead to increased response times and decreased throughput.
4. latch: enqueue (5.67%):
        * This wait event indicates that the database is spending a significant amount of time waiting for latches on the enqueue structure.
        * Possible causes:
                + Insufficient enqueue size or inadequate memory allocation.
                + Inefficient queries or indexes.
                + Configuration issues (e.g., inadequate latch size).
        * Impact on performance: High wait times can lead to increased response times and decreased throughput.

**Detailed Recommendations**

1. Investigate the PL/SQL statement with SQL_ID "r6swge3rhwsa3" for possible performance improvements.
        * Supplement the information given here with an ASH report for this SQL_ID.
2. Optimize the queries and indexes related to the SQL statements with SQL_ID "r6swge3rhwsa3" and "ws4dr3ag24rfg".
        * Use the SQL Tuning Advisor to identify and optimize these queries.
3. Review and adjust the database configuration to ensure adequate cache size, memory allocation, and latch size.
4. Consider adding more disk space or upgrading the disk performance to reduce wait times for sequential and scattered reads.
5. Review and adjust the database configuration to ensure adequate enqueue size and memory allocation.

**Monitoring Recommendations**

1. Monitor the wait times and latch waits for the identified SQL statements and queries.
2. Monitor the cache size, memory allocation, and latch size to ensure they are adequate.
3. Monitor the disk space and performance to ensure they are sufficient.

**Performance Improvement Estimates**

1. Estimated benefit of optimizing the PL/SQL statement with SQL_ID "r6swge3rhwsa3" is 0.02 active sessions and 15.56% of total activity.

**Follow-up Actions and Monitoring Requirements**

1. Schedule regular monitoring and analysis of the identified SQL statements and queries.
2. Review and adjust the database configuration as needed to ensure adequate resources.
3. Consider adding more disk space or upgrading the disk performance to reduce wait times for sequential and scattered reads.

Note: This report is a summary of the top events and recommendations. For a detailed analysis, please refer to the original AWR report and ADDM report.

Thanks,
Alfredo

Oracle EM Agent Preferred Connect Descriptor

The USE_SID_AS_SERVICE_LISTENER=ON saga

If you are migrating to the multitenant architecture of the Oracle database, you’ll find out that in order to connect to a pluggable database (PDB) you must use the service name for the PDB. But what if you have a legacy application that can only use a SID instead of the service name?

For that, you can enable the USE_SID_AS_SERVICE_LISTENER parameter in the listener configuration, so the SID in the connect descriptor is treated as a service name.

Problem solved, right? Well, the problem is now with applications that require the service name and not the SID. One of them is Oracle Enterprise Manager (EM). If you turn on this parameter at the listener level, you won’t be able to connect to the databases and they will show as down (red) in the EM console.

So, what can we do? In EM 13.5 RU16 or higher there’s functionality that allows EM to connect to the database by using an Agent preferred connect descriptor. This way, you can provide the connect descriptor using the service name.
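An Agent preferred connect descriptor is just a standard TNS connect descriptor that names the PDB service explicitly. A minimal example (host, port, and service name below are placeholders for your environment):

```
(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=dbhost.example.com)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=mypdb.example.com))
)
```

Because the descriptor carries SERVICE_NAME, EM can bypass the listener’s SID-as-service behavior entirely.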

One important note: you must have RU16 or higher applied to both the OMS and the Agents in order to enable this functionality.

Follow the My Oracle Support note “EM 13c: How To Enable Agent Preferred Connect Descriptor Support For RAC Instance, Single Instance DB (Doc ID 2963079.1)” to enable this functionality. You will have to restart your OMS, so be prepared.

After you enable this setting, navigate to the Monitoring Configuration of your target and look for the new Agent preferred descriptor section. It looks like the image below.

Add the proper connect descriptor and voilà! EM is now able to connect to your target.

It is always recommended to turn the USE_SID_AS_SERVICE_LISTENER parameter off after migrations and upgrades. Sometimes this is not an option if you have legacy applications, though. This feature allows EM to connect even when the USE_SID_AS_SERVICE_LISTENER parameter must remain on.

Thanks,
Alfredo

Monitor Oracle DBs in RDS With Enterprise Manager

In this blog I want to provide some important steps if you want to monitor Oracle databases running on AWS RDS.

The first step is to have an Oracle Enterprise Manager (EM) installation. This can be on-prem or running on EC2. If you need to set up EM on EC2, you can use one of the pre-built images that already contains an Oracle database. You are going to need this database for the EM repository.
One quick example is the Oracle Database 19c image by cloudimg running on Oracle Linux. Just make sure you add enough space in the filesystem so you can also install the EM software.

Once the EC2 instance is up and running, go ahead and patch the Oracle database with the latest RU (RU28 as of today).

Download the Oracle EM 13.5 software from oracle.com and also download the latest RU available (RU27 as of today).

Install EM 13.5 and patch it with RU27. Follow the videos below if you need additional guidance on how to install and patch EM.



Next, we are going to follow the instructions that RDS provides for the EM agent installation. Remember that because there’s no actual access to the RDS host, we can’t install the EM agent in the traditional way.



Verify all the network pre-reqs so your RDS instance can actually reach your EM installation running on EC2. If your EM is running on-prem, then a VPN setup will be required.

Once all the network pre-reqs are met, it is time to create an option group. The option group is how we ask the RDS console to provide the option to deploy an EM agent. The option for the option group needs to be OEM_AGENT.

Before you deploy the option group to your RDS instance, you will have to provide information about your EM instance, including:

  • EM Host Name
  • EM upload port
  • EM registration key

Then deploy the option group to your RDS instance.

Once the deployment finishes, you will actually have the EM agent up and running in your RDS instance and showing up in the EM console.

The next step is to discover your DB instance and the listener. Follow the link below for the documentation.



Hope this set of steps helps you with the monitoring setup of your RDS database with Oracle Enterprise Manager.

Thanks,
Alfredo

Monitor MySQL HeatWave with OCI Database Management

In this post I want to show you how you can monitor your MySQL HeatWave databases with OCI Database Management Cloud Service. It is worth mentioning that this Cloud Service provides this functionality free of charge and is automatically enabled for the configurations below:

  • Standalone
  • High Availability

Let’s start by creating a new HeatWave instance.

You will notice that Database Management is automatically enabled.

The Database Management service will then publish several metrics related to your HeatWave instance.

You can of course use the alarms and notification services to set up alerts on important metrics.

Last but not least, navigate to the Performance Hub page to find real-time performance information for the queries inside your HeatWave instance.

If you want to learn more about this service follow the link below.



Thanks,
Alfredo

Patch Oracle Databases With Ansible and Enterprise Manager 13c

In a previous post I showed you how you can integrate DevOps automation and orchestration tools to provision Oracle databases by leveraging Enterprise Manager (EM) and the Cloud Management Pack (CMP). Once provisioned, databases need to be fully maintained in terms of monitoring and, most importantly, patching.

Patching databases decreases the risk of breach by timely mitigating vulnerabilities, but it can be a daunting task for organizations. Manual patching is time-consuming and error-prone. Home-grown scripts are difficult to maintain and increase maintenance cost. So the question is: how can I automate the patching process and, even better, integrate it with my current orchestration workflow?

Let me explain how you can achieve all this by making use of Oracle’s Database Lifecycle Management Pack (DBLM) and CMP. DBLM’s Fleet Maintenance helps us patch and upgrade Oracle databases at scale with minimum downtime. It makes use of Gold Images and out-of-place patching in order to achieve this.

Fleet Maintenance Benefits

All this functionality can be integrated with CMP’s DBaaS in order to provide an end-to-end automation solution. DBaaS exposes REST APIs that we can then call using the automation tool of choice. Database administrators, end users, or 3rd-party tools can then use these features to patch Oracle databases.
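Any tool that can issue HTTP requests can drive that flow. Below is a hypothetical sketch of assembling a patch request payload; the field names and operation value are placeholders of my own, not the documented CMP DBaaS schema, so consult the Cloud Management Pack REST documentation for the real resource names:

```python
import json

def build_patch_request(zone, db_name, gold_image):
    # Hypothetical payload shape: every key here is a placeholder, not the
    # official CMP DBaaS field name.
    return {
        "zone": zone,
        "target": db_name,
        "operation": "PATCH",
        "goldImage": gold_image,
    }

def to_body(payload):
    """Serialize the payload for an HTTP POST body."""
    return json.dumps(payload)
```

An Ansible playbook (or any orchestrator) would then POST this body to the DBaaS endpoint with the appropriate EM credentials.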

DBaaS Automation Diagram

Do you want to learn more about this and even try it yourself? We’ve created an Oracle LiveLabs workshop that covers all this functionality. This lab will guide you through requesting a PDB, setting up the DBaaS configuration, setting up Fleet Maintenance, and finally patching the PDB.

Follow the link below for the Oracle LiveLabs workshop.



If you are planning on attending Oracle CloudWorld this year and want to learn more about this, consider attending my session.

LRN3519: Deploy and Manage Oracle Databases with Ansible, Oracle Enterprise Manager 

See you in Vegas!!!

Thanks,
Alfredo

Improve Oracle Database Security With Enterprise Manager 13c

IT security is a popular topic nowadays! We constantly hear news about data breaches, ransomware, malware, unauthorized access to IT systems, etc. IT organizations are constantly looking to keep their systems, networks, and data safe and secure.

Today’s blog is about how Oracle Enterprise Manager (EM) can help Database Administrators to secure and harden the Oracle Databases they manage along with the hosts those databases are running on.

First things first. I strongly recommend reviewing the Oracle Database 19c Security Guide. This guide provides guidelines on how to secure the Oracle Database, harden DB access, secure and encrypt DB data, and so on.



Now let’s discuss some areas that database administrators should also look at in order to improve their security posture:

  • Timely apply security patches
  • Monitor database configuration and detect misconfigurations
  • Use industry and regulatory standards like STIG and CIS for the Oracle Database

All the features that we will be discussing today are part of the Oracle Database Lifecycle Management pack. This pack requires an additional license.

Timely apply security patches

Fleet Maintenance (FM) enables administrators to automate lifecycle activities such as database patching and upgrades. FM is a gold-image, subscription-based model that allows you to patch databases with minimum downtime by using out-of-place patching mechanisms. In-place patching is also available if you need to apply an emergency one-off patch.

Administrators have the ability to customize the patching process by adding custom pre/post scripts to patching operations. FM supports single instance, RAC databases, Grid Infrastructure, Multitenant and Data Guard configurations.

One thing to mention is the ability to get security patch recommendations as soon as they are published. EM connects to My Oracle Support (MOS) and checks for the availability of new security patches. As soon as a new security patch is released EM will let you know if your DB estate is compliant or not in terms of these patches.



Monitor database configuration and detect misconfigurations

Configuration and Drift Management helps you monitor the configuration of your DB estate, the hosts where those DBs are running, as well as the Oracle Homes (OH) for those installations. EM allows you to create your own configuration templates based on the configuration settings you need to enforce. Any misconfiguration or drift away from your template will be automatically reported via the Drift Management dashboard, and you can also receive alerts if you choose to.

Corrective Actions (CA) can also be created to automatically fix these misconfigurations in order to comply with the templates and reduce security risks.

How many times have administrators issued an ALTER SYSTEM command with SPFILE scope and forgotten about it? Well, you will find out the next time you bring your DB up after maintenance. EM helps you detect these changes before they become a production issue. It also helps you track the history of configuration changes, save configuration information at a given point in time, and compare this configuration information between targets.

Have you wondered how many OHs have a specific one-off patch?

Or how many DBs are running on a specific OS version?

Well, EM can help you answer all these questions using this configuration data.

One thing worth mentioning is that EM comes with hundreds of configuration collections. If you need to gather a very specific configuration item that is not available out of the box, you can create your own configuration extension and collect it automatically.



Use industry and regulatory standards like STIG and CIS for the Oracle Database

EM provides compliance standards to help customers meet regulatory standards like STIG and CIS. Oracle’s best practices are also included within the compliance framework. There are two available options for analysis.

  • Rule based analysis
  • Real-time change monitoring

Each option allows administrators to understand where attention needs to be focused in order to harden the DB estate.

Using the compliance framework, EM will provide a score to each associated target along with all the violations that need to be remediated after each evaluation.



I also want to provide links to Oracle LiveLabs workshops available that cover the features discussed above.

Thanks,
Alfredo

Oracle Enterprise Manager 13c Snap Clone Demo

The database cloning process can often be time-consuming, resource-intensive, and expensive, especially for large multi-terabyte databases. This can lead to missed project deadlines or storage costs that grow out of control.

Enterprise Manager 13c Snap Clone instant database cloning allows administrators to create fully functional copies of databases using the capabilities of the underlying storage layer. Snap Clone is also capable of using Data Masking Definitions in order to securely manage test data.

The video below demonstrates how to set up Snap Clone with the ZFS Storage Appliance in order to clone either single-instance or pluggable databases (PDBs).

For additional information about Snap Clone capabilities click below.



Thanks,
Alfredo

Oracle Database Performance Diagnostics, Tuning and Validation with Oracle Enterprise Manager 13c Workshops

Last month we held 2 workshops where we went through some capabilities of Oracle Enterprise Manager 13c that help with performance diagnostics, tuning and validation for the Oracle Database.

If you were not able to attend and you want to learn about these capabilities, I recommend you take a look at the recording using the link below.


Session 1: August 4th 2021


Session 2: August 25th 2021


Thanks,
Alfredo

Oracle Database 12.2 New Features – SQL* Plus Enhancements

I want to start a series of posts about Oracle Database 12.2 new features this new year, 2018.
There’s no better way to start this series than with SQL*Plus.
SQL*Plus is probably one of the most utilized tools by DBAs (SQLcl is gaining steam), and here I show some really cool new features.

SQL*Plus History

With the HIST[ORY] command, once turned on, you can run, edit, delete, or list previously used SQL or PL/SQL commands from the current session.
  
 $ sqlplus sys as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Fri Jan 5 12:28:58 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Enter password:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production
SQL> show history
history is OFF
SQL> set history on
SQL> show history
history is ON and set to "100"
SQL> set history 1000
SQL> show history
history is ON and set to "1000"
SQL> show user
USER is "SYS"
SQL> desc dual
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DUMMY                                              VARCHAR2(1)
SQL> select * from dual;
D
X
SQL> hist
  1  show history
  2  show user
  3  desc dual
  4  select * from dual;
SQL> hist 4 run
D
X
SQL>

SQL*Plus SET MARKUP CSV

This setting presents the output in CSV format.
SQL> set markup csv on
SQL> select * from emp;
"EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
7839,"KING","PRESIDENT",,"17-NOV-81",5000,,10
7698,"BLAKE","MANAGER",7839,"01-MAY-81",2850,,30
7782,"CLARK","MANAGER",7839,"09-JUN-81",2450,,10
7566,"JONES","MANAGER",7839,"02-APR-81",2975,,20
7788,"SCOTT","ANALYST",7566,"19-APR-87",3000,,20
7902,"FORD","ANALYST",7566,"03-DEC-81",3000,,20
7369,"SMITH","CLERK",7902,"17-DEC-80",800,,20
7499,"ALLEN","SALESMAN",7698,"20-FEB-81",1600,300,30
7521,"WARD","SALESMAN",7698,"22-FEB-81",1250,500,30
7654,"MARTIN","SALESMAN",7698,"28-SEP-81",1250,1400,30
7844,"TURNER","SALESMAN",7698,"08-SEP-81",1500,0,30
7876,"ADAMS","CLERK",7788,"23-MAY-87",1100,,20
7900,"JAMES","CLERK",7698,"03-DEC-81",950,,30
7934,"MILLER","CLERK",7782,"23-JAN-82",1300,,10
14 rows selected.

SQL*Plus SET FEEDBACK ONLY

This option will display the number of rows selected without displaying the data. Useful to measure fetch time.
SQL> set feedback only
SQL> set timing on
SQL> select * from emp;
14 rows selected.
Elapsed: 00:00:00.01

SQL*Plus Performance Settings


SET ROWPREFETCH (default 1 | 2G max)
This setting pre-fetches rows in a result set. It can reduce the number of round trips between OCI execute calls.
I had an interesting question from a colleague about this setting. What is the difference between ROWPREFETCH and ARRAYSIZE?
I’m still struggling to find the differences. Here are both definitions extracted from Oracle’s documentation:
SET ARRAYSIZE
Sets the number of rows that SQL*Plus will fetch from the database at one time. Valid values are 1 to 5000.
The effectiveness of setting ARRAYSIZE depends on how well Oracle Database fills network packets and your network latency and throughput. In recent versions of SQL*Plus and Oracle Database, ARRAYSIZE may have little effect. Overlarge sizes can easily take more SQL*Plus memory which may decrease overall performance.

SET ROWPREFETCH {1 | n}

Sets the number of rows that SQL*Plus will prefetch from the database at one time.
The default value is 1.
What I can see from here is that ARRAYSIZE takes effect during the fetch step, and ROWPREFETCH probably kicks in just before the fetch step?
I even tried to identify this by using the autotrace option and by tracing a test session, but unfortunately I was not able to find any differences. Maybe the amount of data queried wasn’t large enough.
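One way to reason about the interplay: ROWPREFETCH affects how many rows come back with the initial execute round trip, while ARRAYSIZE governs the batch size of each subsequent fetch. The toy model below is a simplification of my own, not an official Oracle formula, but it shows why a 14-row EMP query is hard to tell apart under either setting:

```python
import math

def fetch_round_trips(total_rows, rowprefetch=1, arraysize=15):
    """Rough estimate of SQL*Net round trips to fetch a result set.

    One round trip for the execute (which prefetches `rowprefetch` rows),
    then one per ARRAYSIZE-sized batch for whatever rows remain.
    SQL*Plus defaults: ROWPREFETCH 1, ARRAYSIZE 15.
    """
    remaining = max(0, total_rows - rowprefetch)
    return 1 + math.ceil(remaining / arraysize)
```

With only 14 rows, the defaults already finish in two round trips, and prefetching all 14 rows only saves one, which is why the autotrace statistics above look nearly identical across settings.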
$ sqlplus scott/****
SQL*Plus: Release 12.2.0.1.0 Production on Fri Jan 5 13:18:41 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Last Successful login time: Fri Jan 05 2018 13:15:53 -05:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production
SQL> set autotrace on
SQL> select * from emp;
14 rows selected.
Execution Plan
———————————————————-
Plan hash value: 3956160932
————————————————————————–
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time    |
————————————————————————–
|   0 | SELECT STATEMENT  |         |    14 |  1218 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| EMP  |    14 |  1218 |     3   (0)| 00:00:01 |
————————————————————————–
Note
—–
   – dynamic statistics used: dynamic sampling (level=2)
Statistics
———————————————————-
            0  recursive calls
            0  db block gets
            8  consistent gets
            0  physical reads
            0  redo size
       1537  bytes sent via SQL*Net to client
          608  bytes received via SQL*Net from client
            2  SQL*Net roundtrips to/from client
            0  sorts (memory)
            0  sorts (disk)
           14  rows processed
SQL> set rowprefetch 10
SQL> select * from emp;
14 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    14 |  1218 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| EMP  |    14 |  1218 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
       1534  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed
SQL> set rowprefetch 5
SQL> select * from emp;
14 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    14 |  1218 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| EMP  |    14 |  1218 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
       1534  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed
SQL> set arraysize 5
SQL> select * from emp;
14 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    14 |  1218 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| EMP  |    14 |  1218 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
       1534  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed
SQL> set rowprefetch 1
SQL> select * from emp;
14 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    14 |  1218 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| EMP  |    14 |  1218 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         10  consistent gets
          0  physical reads
          0  redo size
       1919  bytes sent via SQL*Net to client
        630  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed
As you can see, the only visible change is in the SQL*Net roundtrips to/from client, which is driven by the ARRAYSIZE setting.
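The 14-row EMP table is small enough that a single fetch covers it at almost any setting, which may be why the differences are so hard to see. A larger result set makes the ARRAYSIZE effect much more obvious; this is a sketch assuming a scratch table T that you are free to create and drop:

```sql
SQL> create table t as
  2  select rownum id, rpad('x', 100) pad
  3  from dual connect by level <= 10000;

SQL> set autotrace traceonly statistics

SQL> set arraysize 15
SQL> select * from t;
-- roughly 10000/15 fetches, i.e. several hundred SQL*Net roundtrips

SQL> set arraysize 500
SQL> select * from t;
-- roughly 10000/500 fetches, i.e. a couple of dozen roundtrips

SQL> drop table t purge;
```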
SET LOBPREFETCH (default 0 bytes | 2G max)
Similar to ROWPREFETCH, but for LOB data.
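Usage follows the same pattern as the other SET commands; for example, prefetching the first 16 KB of each LOB along with the row data (16384 is just an illustrative value, and DOCUMENTS/CLOB_COL is a hypothetical table and column):

```sql
SQL> show lobprefetch
SQL> set lobprefetch 16384
SQL> select clob_col from documents;  -- hypothetical LOB query
```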

SET STATEMENTCACHE (default 0 | 32767 max)
This setting caches executable statements for the current session, reducing the number of parse calls needed when the same statements are run repeatedly.
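For example, enabling a cache of 20 statements for the session (the size here is arbitrary); repeated executions of the same statement can then reuse the cached cursor instead of being parsed again:

```sql
SQL> set statementcache 20
SQL> show statementcache

SQL> select * from emp;   -- first execution: statement is parsed
SQL> select * from emp;   -- repeat executions can hit the statement cache
```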
Thanks,

Alfredo

Free Database Seminar in Timisoara, Romania

Today I want to talk about a free Database Seminar that ACI is sponsoring in Timisoara, Romania. A friend of mine and former co-worker asked me to speak at the seminar about Oracle Enterprise Manager administration.

Because of the time zone difference, I'm going to record the presentation and upload it to the blog so he can show it there.

If you are near Timisoara, don't miss the opportunity to attend and learn a little bit more about databases.

The event will take place on April 8th at 5 PM at the City Business Centre, building D, Timisoara, RO.

Did I forget to mention that the event is free?

For more information go to:

Thanks,

Alfredo