
Real Application Testing

Author: Dallas Willett | April 6, 2011

As promised, here are the Real Application Testing scripts for our upcoming presentation at this year's Collaborate 2011 conference. At the conference, Datavail and WTS will show how we used Oracle's Real Application Testing to tune a production workload from a JD Edwards EnterpriseOne database running on Oracle.

After capturing a production workload, we cloned the production database to a test system. We could then iteratively replay that workload with Database Replay, analyze the performance data from each test, and make tuning changes. Comparing the performance data between test runs showed how well we had tuned the database against a real-life production workload, and the results were impressive. The two main scripts we created, test.sql and tune.sql, make up the rest of this post.
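For context, the workload itself was captured on production ahead of time, and a guaranteed restore point named pre_capture was created on the test clone so that every replay starts from the same database state. A minimal sketch of those two steps; the capture name, directory, and two-hour duration are illustrative stand-ins:

-- on production: capture the workload
exec dbms_workload_capture.start_capture(name=>'prod_capture', -
   dir=>'WORKLOAD_DIR', duration=>7200)
exec dbms_workload_capture.finish_capture

-- on the test clone, once it is ready: mark the starting state
create restore point pre_capture guarantee flashback database;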

Database Replay Script

The core of our Database Replay logic is in a script we called test.sql. We kicked this off and let the replay run for a couple of hours before coming back and issuing our tuning commands.

test.sql

set trimspool on
spool replay.log append
set timing on
select * from v$instance;
pause Host should be: HOSTNAME. Continue with test?
set echo on
variable replay_nm varchar2(20)
exec :replay_nm := to_char(sysdate,'yyyymmdd_hh24miss')
select :replay_nm from dual;

-- save sql profiles from our tuning efforts between database replay runs
@@save_sql_profiles.sql

shutdown immediate
startup mount
flashback database to restore point pre_capture;
alter database open resetlogs;
create directory workload_dir as 'f:\export_dir';

-- load back in the sql profiles after flashing back the database
@@load_sql_profiles.sql

-- create any new indexes from our tuning efforts
@@schema_changes.sql

exec dbms_workload_replay.initialize_replay(replay_name=>:replay_nm, -
  replay_dir=>'WORKLOAD_DIR')

-- purge old replay information and remap connections
@@delete_old_replay_info.sql
@@remap_connections.sql

exec dbms_workload_replay.prepare_replay(think_time_scale=>0)

-- start the replay
!nohup ./run_wrc.bash
exec dbms_workload_replay.start_replay

-- monitor the replay and take AWR snapshots
@replay_minder.proc

exec dbms_scheduler.create_job(job_name=>'replay_minder_job', -
  job_type=>'STORED_PROCEDURE', -
  job_action=>'replay_minder', enabled=>true)

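-- show replay progress: total capture duration, minutes of workload replayed
-- so far, and elapsed wall-clock minutes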
col mins_completed format 9999.99
col elapsed_min format 9999.99
select duration_secs/60 duration_mins,
         (duration_secs * ((currscn - firstscn) / (lastscn - firstscn)))/60
         as mins_completed, elapsed_min
    from
      (select min(post_commit_scn) firstscn, max(post_commit_scn) lastscn
        from wrr$_replay_scn_order),
      (select max(clock) currscn from v$workload_replay_thread
       where wait_for_scn != 0 and session_type = 'REPLAY'),
      (select duration_secs from dba_workload_captures
       where id = (select capture_id from dba_workload_replays
                         where status = 'IN PROGRESS')),
      (select (sysdate-start_time)*24*60 elapsed_min
        from dba_workload_replays where status = 'IN PROGRESS');

save_sql_profiles.sql (called from test.sql)

The output from the SQL Tuning Advisor includes both index changes and new SQL Profiles that alter query plans. When we flash back the database, the SQL Profiles stored in it are lost, so we use Data Pump to export the SQL Profiles and load them back in later.

drop table SYSTEM.REPLAY_PROFILES;

exec dbms_sqltune.create_stgtab_sqlprof(table_name=>'REPLAY_PROFILES', -
   schema_name=>'SYSTEM')

exec dbms_sqltune.pack_stgtab_sqlprof(staging_table_name=>'REPLAY_PROFILES', -
   staging_schema_owner=>'SYSTEM')

variable handle number
variable job_state varchar2(100)
exec :handle := dbms_datapump.open(operation=>'EXPORT', job_mode=>'TABLE')
exec dbms_datapump.add_file(handle=>:handle, filename=>'replay_profiles.dmp', -
   directory=>'WORKLOAD_DIR', reusefile=>1)
exec dbms_datapump.add_file(handle=>:handle, filename=>'replay_profiles.log', -
   directory=>'WORKLOAD_DIR', filetype=>dbms_datapump.ku$_file_type_log_file)
exec dbms_datapump.metadata_filter(handle=>:handle, name=>'SCHEMA_EXPR', -
   value=>'IN (''SYSTEM'')')
exec dbms_datapump.metadata_filter(handle=>:handle, name=>'NAME_EXPR', -
   value=>'IN (''REPLAY_PROFILES'')')
exec dbms_datapump.start_job(handle=>:handle)
exec dbms_datapump.wait_for_job(handle=>:handle, job_state=>:job_state)
print job_state

load_sql_profiles.sql (called from test.sql)

variable handle number
variable job_state varchar2(100)
exec :handle := dbms_datapump.open(operation=>'IMPORT', job_mode=>'FULL')
exec dbms_datapump.add_file(handle=>:handle, filename=>'replay_profiles.dmp', -
   directory=>'WORKLOAD_DIR')
exec dbms_datapump.add_file(handle=>:handle, filename=>'replay_profiles.log', -
   directory=>'WORKLOAD_DIR', filetype=>dbms_datapump.ku$_file_type_log_file)
exec dbms_datapump.start_job(handle=>:handle)
exec dbms_datapump.wait_for_job(handle=>:handle, job_state=>:job_state)
print job_state
  
exec dbms_sqltune.unpack_stgtab_sqlprof(replace=>TRUE, -
   staging_table_name=>'replay_profiles', staging_schema_owner=>'system')

schema_changes.sql (called from test.sql)

The output of the SQL Tuning Advisor will also contain index changes. We put those CREATE INDEX statements in a script called schema_changes.sql.

CREATE INDEX ...;
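As an illustration only, an accepted recommendation lands in this file as an ordinary statement; the table and column names below are hypothetical:

-- hypothetical example; the advisor report supplies the real table and columns
CREATE INDEX jde.f4211_rat_ix1 ON jde.f4211 (sddoco, sdlnid);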

delete_old_replay_info.sql (called from test.sql)

Before remapping our connections, we needed to purge the old replay information that was loaded when we initialized our replay.

begin
  for i in (select id from dba_workload_replays where name != :replay_nm) loop
    dbms_workload_replay.delete_replay_info(replay_id=>i.id);
  end loop;
end;
/

remap_connections.sql (called from test.sql)

We ran our Workload Replay Clients (wrc) on a host other than the test database host, so we needed to remap connections, replacing the hostnames and IP addresses of the production system with those of the test system. The DBA_WORKLOAD_CONNECTION_MAP view holds all the mappings.

begin
  for i in (select conn_id, capture_conn
            from dba_workload_connection_map m,
                 dba_workload_replays r
            where replay_id = id
            and name = :replay_nm) loop
    if i.capture_conn like '%10.0.1.100%' then
        i.capture_conn := replace(i.capture_conn,'10.0.1.100','10.1.1.200');
        dbms_workload_replay.remap_connection(connection_id=>i.conn_id,
          replay_connection=>i.capture_conn);
    end if;
    if i.capture_conn like '%PRODDB%' then
        i.capture_conn := replace(i.capture_conn,'PRODDB','TESTDB');
        dbms_workload_replay.remap_connection(connection_id=>i.conn_id,
          replay_connection=>i.capture_conn);
    end if;
    if i.capture_conn like '%PROTOCOL=BEQ%' then
        i.capture_conn := '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)'||
          '(HOST=TESTDB)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PRODORCL)'||
          '(CID=(PROGRAM=D:\oracle\product\10.2.0\db_1\bin\sqlplus.exe)'||
          '(HOST=TESTDB)(USER=ORACLE))))';
        dbms_workload_replay.remap_connection(connection_id=>i.conn_id,
         replay_connection=>i.capture_conn);
    end if;
  end loop;
  commit;
end;
/

run_wrc.bash (called from test.sql)

While our production database and cloned database ran on Windows Server 2003, our replay clients ran on a Linux host. Here's the shell script we used to kick off the Workload Replay Clients (wrc) during the Database Replay test.

#!/bin/bash
for i in {1..25} 
do
  wrc system/password@host/service_name replaydir=/u01/wkld debug=true workdir=/u01/wkld/wrc_trc &
  sleep 3
done
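The number of clients to start (25 here) can also be estimated rather than guessed: wrc has a calibrate mode that reads the capture files and reports how many replay clients and hosts the workload needs. A sketch against the same replay directory:

wrc mode=calibrate replaydir=/u01/wkld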

replay_minder.proc (called from test.sql)

We wanted to make sure that the AWR Snapshots that we took were consistent between Database Replay runs. To do that, we created a procedure to take the snapshots at particular SCNs during the replay. This script also terminates the replay after the final AWR Snapshot is taken.

create or replace procedure replay_minder
is
  duration number;
  first_snap boolean := false;
  so_far number;
  total number;
  pct_complete number;
begin
  select duration_secs/60 into duration from dba_workload_captures
   where id = (select capture_id from dba_workload_replays
                      where status = 'IN PROGRESS');
  dbms_system.ksdwrt(dest=>1, tst=>'Duration: '||duration);
  loop
    dbms_lock.sleep(30);
    select currscn - firstscn, lastscn - firstscn,
             (currscn - firstscn) / (lastscn - firstscn)
    into so_far, total, pct_complete
    from
      (select min(post_commit_scn) firstscn, 
                max(post_commit_scn) lastscn from wrr$_replay_scn_order),
      (select max(clock) currscn from v$workload_replay_thread
       where wait_for_scn != 0 and session_type = 'REPLAY');
    dbms_system.ksdwrt(dest=>1,
      tst=>'So Far: '||so_far||' ;  Total: '||total||' ;  Percent Complete: '||
     pct_complete);
    -- at about 10 minutes out of 180
    if ( first_snap = false and pct_complete >= .056 ) then
      dbms_system.ksdwrt(dest=>1, tst=>'Initial snapshot');
      dbms_workload_repository.create_snapshot(flush_level=>'ALL');
      first_snap := true;
    end if;
    -- at about 70 minutes out of 180
    if ( pct_complete >= .389 ) then
      dbms_system.ksdwrt(dest=>1, tst=>'Final snapshot and cancel');
      dbms_workload_repository.create_snapshot(flush_level=>'ALL');
      dbms_workload_replay.cancel_replay;
      exit;
    end if;
  end loop;
end;
/

SQL Tuning Advisor Script

Our tuning script creates a SQL Set from the two AWR Snapshots taken during the Database Replay. It orders the SQL statements by "elapsed_time" and pulls out the top-N statements (we tuned the top 100 SQL statements in our tests). The SQL Set is the input for the SQL Tuning Advisor. We found that Oracle spends a few minutes tuning any given SQL statement, so tuning the top 100 takes one to three hours.

tune.sql

set trimspool on
spool optimizations.log append

accept v_result_limit prompt 'Enter the number of SQL statements to tune:'
accept v_dur_mins prompt 'Enter the number of minutes to let the tuning task run for:'

variable sqlset_nm varchar2(30)
exec :sqlset_nm := 'RAT_SQLSET_'||to_char(sysdate,'yyyymmdd_hh24miss')
print sqlset_nm

exec dbms_sqltune.create_sqlset(sqlset_name=>:sqlset_nm);

set serveroutput on
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
  v_begin_snap number;
  v_end_snap number;
BEGIN
  SELECT max(snap_id) INTO v_end_snap
    FROM dba_hist_snapshot where snap_level=2;
  SELECT max(snap_id) INTO v_begin_snap
    FROM dba_hist_snapshot where snap_level=2 and snap_id < v_end_snap;

  OPEN cur FOR
    SELECT VALUE(P)
      FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
               begin_snap=>v_begin_snap,
               end_snap=>v_end_snap,
               basic_filter=>'parsing_schema_name <> ''SYS''',
               ranking_measure1=>'elapsed_time',
               result_limit=>&v_result_limit,
               attribute_list=>'ALL')) P;

  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => :sqlset_nm,
                           populate_cursor => cur,
                           load_option => 'MERGE',
                           update_option => 'ACCUMULATE');
END;
/


select sql_id, elapsed_time, executions, cpu_time,
         buffer_gets, disk_reads, optimizer_cost
from dba_sqlset_statements where sqlset_name = :sqlset_nm
 order by elapsed_time;

variable tuning_task varchar2(30)
exec :tuning_task := dbms_sqltune.create_tuning_task(sqlset_name => :sqlset_nm, -
   time_limit => &v_dur_mins*60);
print tuning_task
print sqlset_nm
declare
 v_tuning_task varchar2(30) := :tuning_task;
 v_sqlset_name varchar2(30) := :sqlset_nm;
begin
 dbms_scheduler.create_job(job_name=>v_sqlset_name, job_type=>'PLSQL_BLOCK',
    job_action=> 'begin dbms_sqltune.execute_tuning_task('''||v_tuning_task||'''); end;',
    enabled=>true);
end;
/

exec dbms_lock.sleep(5);

select session_id, owner, job_name, to_char(elapsed_time,'HH24:MI:SS') elapsed_time
  from dba_scheduler_running_jobs;
select sid, sofar, totalwork, task_id, target_desc, start_time,
          last_update_time, findings
  from v$advisor_progress where sofar < totalwork;

tune.sql finishes by echoing the two commands below into the spool log; once the tuning task completed, we issued them to pull out the report and script generated by the SQL Tuning Advisor:

set long 5000000
prompt select dbms_sqltune.report_tuning_task(:tuning_task) from dual;
prompt select dbms_sqltune.script_tuning_task(:tuning_task) from dual;

We looked through the tuning report and made judgments about the individual recommendations from the SQL Tuning Advisor. Do we want to add this particular SQL Profile? Do we want to add that new index? If we decided to accept a SQL Profile for a given SQL statement, we ran the dbms_sqltune.accept_sql_profile command from the report or script. Likewise, if we decided to accept a new index recommendation, we took its CREATE INDEX command and placed it in our schema_changes.sql file. After reviewing each recommendation, we were ready to start another round of testing with the new tuning changes. We judged our tuning efforts by the reduction in "DB Time" across the AWR reports from different test runs.

Dallas Willett
Datavail – Remote DBA Services
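For reference, accepting a SQL Profile recommendation is a single call against the tuning task; in this sketch the object_id and profile name are hypothetical values you would read out of the report:

exec dbms_sqltune.accept_sql_profile(task_name=>:tuning_task, -
   object_id=>5, name=>'RAT_PROFILE_5', replace=>TRUE)

The run-to-run "DB Time" comparison can also be generated directly as an AWR diff report; here the dbid and the two snapshot pairs are hypothetical placeholders for the snapshots taken during two replays:

select output from table(dbms_workload_repository.awr_diff_report_text(
   1234567890, 1, 101, 102,    -- dbid, instance, begin/end snaps from run 1
   1234567890, 1, 103, 104));  -- same dbid/instance, snaps from run 2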
