The solution is to set NLS_LANG to use the AL32UTF8 character set.
Issue description:
SQL> select BCK_ALT_DSC from BACKOUT_REASON_CODE where BCK_CODE='BDCBK';
BCK_ALT_DSC
------------------------------
Refus paiement d¦bit bancaire
Solution:
/u01/app/oracle > export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
/u01/app/oracle > sqlplus florinm/florinm
SQL*Plus: Release 11.1.0.7.0 - Production on Fri Apr 15 14:17:48 2016
Copyright (c) 1982, 2008, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning option
SQL> select BCK_ALT_DSC from BACKOUT_REASON_CODE where BCK_CODE='BDCBK';
BCK_ALT_DSC
--------------------------------------------------------------------------------
Refus paiement débit bancaire
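As a side note, the client NLS_LANG should reflect the character set the database actually stores its data in. A quick way to check that (a minimal sketch, not part of the session above) is to query the data dictionary:
SQL> select parameter, value from nls_database_parameters
     where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');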
Tuesday, 19 April 2016
Oracle 11g: how to control the frequency of the redo log switches?
Let's assume that we have a request to switch the redo logs every 15 minutes, so that we can ship the archived logs to a third-party application.
I've found a very nice entry about this at the blog below:
http://dba-masters.blogspot.ca/2012/08/11g-r2-managing-redo-logs.html
SQL> ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 900;
System altered.
The value is in seconds, so 900 seconds = 15 minutes.
You can force all redo log threads to switch their CURRENT redo logs at regular time intervals. In a Primary / Standby database configuration, changes are made available to the standby database by archiving redo logs at the primary site, then shipping them across to the standby site and applying them to the standby database. The standby database must wait for the redo log files to be archived and shipped across to it before it can apply the latest changes. With the ARCHIVE_LAG_TARGET initialization parameter you can specify, in seconds, how long that lag should be.
When ARCHIVE_LAG_TARGET is set, the database examines the current redo log periodically. If the following conditions are met, the instance will switch the logfile:
· The current redo log file was created n seconds ago, and the estimated archival time for the current log is m seconds (m is estimated from the number of redo blocks in the current redo log file). If n + m exceeds the value of the ARCHIVE_LAG_TARGET initialization parameter, the log file is switched.
· The current log contains redo records. A log switch will not occur if there are no records in the current redo log file.
In a RAC (Real Application Clusters) environment, the instance will also cause other threads from other instances to switch and archive their redo log files, if they are falling behind. This is a useful mechanism if one instance in a RAC environment is more active than the other instances. In a RAC environment, the value of ARCHIVE_LAG_TARGET must be the same on all instances, otherwise you can expect unpredictable behaviour.
The ARCHIVE_LAG_TARGET is a dynamic parameter, so you can set it with ALTER SYSTEM … SCOPE=BOTH; to make it permanent. The value of ‘0’ disables it, and is also the default value. A typical value for this initialization parameter is half an hour:
SQL> alter system set ARCHIVE_LAG_TARGET=1800 scope=both;
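To confirm that the parameter has the desired effect, one way (a sketch, not part of the original post) is to count the log switches per hour from v$log_history; after the change you would expect roughly four switches per hour on a quiet system:
SQL> select to_char(first_time,'YYYY-MM-DD HH24') as hr, count(*) as switch_count
     from v$log_history
     group by to_char(first_time,'YYYY-MM-DD HH24')
     order by 1;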
Wednesday, 30 March 2016
On which object is my query stuck (spending) time? Using v$session_wait and dba_extents
How do we find the name of the object on which our query is spending time?
In case the wait event in v$session_wait is "db file sequential read", we can use the join below to identify the index name:
SQL> select segment_name,segment_type,owner,tablespace_name from
dba_extents,v$session_wait
where file_id=p1
and p2 between block_id and block_id + blocks -1
and sid=4933;
SEGMENT_NAME SEGMENT_TYPE OWNER TABLESPACE_NAME
--------------------------------------------------------------------------------- ------------------ ------------------------------ ------------------------------
TABLE_TEST_4IX INDEX PARTITION MY_USER APL_LARGE_IX
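Before running the join above, it can help to first confirm the wait event and its parameters for the session in question (a sketch using the same SID as above; for this wait event p1 is the file number and p2 the block number):
SQL> select event, p1text, p1, p2text, p2
     from v$session_wait
     where sid=4933;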
Thursday, 3 March 2016
invoker_rights_clause to the rescue
The issue: we are trying to call, through a synonym, a stored procedure defined in a different schema, in order to truncate a table in our own schema. This fails because, by default, the procedure runs with "definer rights".
The solution: define the procedure/package to run using "invoker rights", as below:
CREATE PACKAGE "TRUNC_TAB" AUTHID CURRENT_USER AS
PROCEDURE TRUNCATE_TABLE (i_table_name in varchar2);
PROCEDURE TRUNCATE_PARTITION (i_table_name in varchar2,i_partition_name in varchar2);
END TRUNC_TAB ;
/
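The package body is not shown in the post; a minimal sketch of what it could look like, assuming the two procedures simply build and run the statements dynamically with EXECUTE IMMEDIATE:
CREATE PACKAGE BODY "TRUNC_TAB" AS
  PROCEDURE TRUNCATE_TABLE (i_table_name in varchar2) IS
  BEGIN
    -- resolves the table name in the caller's schema,
    -- because the spec is declared AUTHID CURRENT_USER
    EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || i_table_name;
  END TRUNCATE_TABLE;
  PROCEDURE TRUNCATE_PARTITION (i_table_name in varchar2, i_partition_name in varchar2) IS
  BEGIN
    EXECUTE IMMEDIATE 'ALTER TABLE ' || i_table_name ||
                      ' TRUNCATE PARTITION ' || i_partition_name;
  END TRUNCATE_PARTITION;
END TRUNC_TAB;
/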
Monday, 22 February 2016
How to get run time statistics for a query?
Step 1:
Add the hint gather_plan_statistics to the select statement:
SELECT /*+ GATHER_PLAN_STATISTICS */
name,address,code
from address_name_table;
Step 2:
Generate the plan and the run time statistics:
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
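A usage note (not from the original post): called with null arguments, dbms_xplan.display_cursor reports on the last statement executed in the current session, so step 2 has to run in the same session, right after the query. If the query ran in another session, one option is to look up its sql_id in v$sql and pass it explicitly (the LIKE pattern and the sql_id placeholder below are just an illustration):
SQL> select sql_id, child_number from v$sql
     where sql_text like 'SELECT /*+ GATHER_PLAN_STATISTICS */%';
SQL> select * from table(dbms_xplan.display_cursor('<sql_id>', null, 'ALLSTATS LAST'));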
Thursday, 12 November 2015
How to export/import a very big compressed table?
The challenge here is to transfer a very big table from one DB to another, both quickly and using as little space as possible. Since the table is compressed in the source DB, we'll need to take advantage of this.
The table will stay compressed throughout the process if we use expdp and impdp with the option compression=ALL.
See example below of par file for expdp:
>cat expdp.par
userid=myuser/mypass
dumpfile=big_dir:big_table_1011.dmp
logfile=big_dir:big_table_1011.exp.log
compression=ALL
To check expdp status:
expdp myuser/mypass attach=SYS_EXPORT_SCHEMA_01
Export>Status
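On the target side, a matching impdp parameter file could look like the sketch below (assuming the same big_dir directory object exists on the target database; table_exists_action is included only as an illustration):
>cat impdp.par
userid=myuser/mypass
dumpfile=big_dir:big_table_1011.dmp
logfile=big_dir:big_table_1011.imp.log
table_exists_action=replace
>impdp parfile=impdp.par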