Server Administration :: NLS_DATABASE_PARAMETERS Versus NLS_INSTANCE_PARAMETERS
Feb 15, 2013
NLS_LANGUAGE and NLS_TERRITORY exist at both the database and the instance level. It makes sense to me to set these parameters for the session and for the instance, but why for the database? As far as I know, the most important database-level parameters are NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET (among others), so why NLS_LANGUAGE and NLS_TERRITORY?
At the beginning I thought that if I did not set NLS_LANGUAGE and NLS_TERRITORY for the instance, they would be taken automatically from NLS_DATABASE_PARAMETERS, but I checked this and it does not behave that way.
So even though these two parameters are null for my instance, after I start the instance they are set to AMERICAN_AMERICA instead of the POLISH_POLAND I have at the database level.
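For reference, a quick way to compare the three levels side by side (a minimal sketch using the standard dictionary views):

select parameter, value from nls_database_parameters where parameter in ('NLS_LANGUAGE', 'NLS_TERRITORY');
select parameter, value from nls_instance_parameters where parameter in ('NLS_LANGUAGE', 'NLS_TERRITORY');
select parameter, value from nls_session_parameters  where parameter in ('NLS_LANGUAGE', 'NLS_TERRITORY');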
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
My OS version is:
Linux damdat01 2.6.18-128.7.1.el5 #1 SMP Wed Aug 19 04:00:49 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
My database is an OLTP system.
My question is: what are the advantages and disadvantages of having one single tablespace versus multiple tablespaces?
A single tablespace is easier to maintain, but it is hard to track down I/O issues when everything sits in one tablespace.
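To illustrate the I/O-tracking point, per-tablespace I/O can be broken out along these lines (a sketch using the standard v$filestat and dba_data_files views); with a single tablespace all the numbers collapse into one row:

select d.tablespace_name,
       sum(f.phyrds)  as physical_reads,
       sum(f.phywrts) as physical_writes
from   v$filestat f
join   dba_data_files d on d.file_id = f.file#
group  by d.tablespace_name;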
I have a bit of an issue with Oracle datapump dump files.
Today, I manage the export and import of Oracle dump files. As part of the batch export process I have a script which essentially says:
For each schema related to my application in THIS instance, export the schema via the SYSTEM user (the SYSTEM user gives me privileges on all schemas).
On the import UI side of things I am able to run a "head -20" command on the dmp file and determine the "export client version", the "date the schema was dumped", and "what schema it was dumped from", all useful info presented in my UI.
This matters because I allow the importation of production schemas into test schemas (contained in a different tablespace). Based on the naming convention I can determine the schema type (production or test). Additionally, and probably most importantly, I am assured of where the data has come from.
Looking at "expdp" and its dump file with the same method as above, it appears the Data Pump dump file DOES NOT carry similar headers. Because of this, I can extract very little useful info from the dump file.
I realize I could run impdp with "sqlfile=myfile.sql" and then interrogate the SQL file for the info, but on large dump files this would be fairly time consuming compared to a "head -20" on a dump file.
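For what it's worth, Data Pump exposes its header information through an API rather than as readable text at the top of the file. A rough sketch, assuming a directory object DPUMP_DIR and a file expdp_schema.dmp (both placeholder names) and that your release provides DBMS_DATAPUMP.GET_DUMPFILE_INFO with the KU$_DUMPFILE_INFO collection type:

set serveroutput on
declare
  l_info     sys.ku$_dumpfile_info;  -- collection of item_code / value pairs
  l_filetype number;                 -- file type indicator
begin
  dbms_datapump.get_dumpfile_info(
    filename   => 'expdp_schema.dmp',   -- placeholder file name
    directory  => 'DPUMP_DIR',          -- placeholder directory object
    info_table => l_info,
    filetype   => l_filetype);
  for i in 1 .. l_info.count loop
    dbms_output.put_line('item ' || l_info(i).item_code || ' = ' || l_info(i).value);
  end loop;
end;
/

The returned items should cover things such as the file version, job name and creation date, which is most of what the old "head -20" on an exp dump provided.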
In the article regarding gathering CBO statistics, it states: QUOTE: When an Oracle database is created, a job will be scheduled that will generate the database statistics for you. You will still need to collect system statistics, however, as these are not collected by the automatic statistics gathering mechanism.
What is the difference between "database statistics" and "system statistics"? In other words, do I need to run this script for each schema owner in my 10g/11g instance?
variable whoami varchar2(20)

begin
  select user into :whoami from dual;
end;
/

exec dbms_stats.gather_schema_stats( -
  ownname          => :whoami, -
  options          => 'GATHER AUTO', -
  estimate_percent => 15, -
  cascade          => true);
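System statistics, by contrast, describe the hardware (CPU speed, I/O throughput) rather than any particular schema, so they are gathered once per database rather than per owner. A minimal sketch using the documented DBMS_STATS call (the 60-minute interval is only an example value):

-- capture system statistics over a representative workload window
exec dbms_stats.gather_system_stats(gathering_mode => 'INTERVAL', interval => 60);

-- or, without capturing a workload at all
exec dbms_stats.gather_system_stats('NOWORKLOAD');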
This is about the functionality of a unique constraint versus the DISTINCT clause. Below is an example which is confusing me a lot.
--Below statement will create the table and the unique constraint
Create Table A (A Varchar2 (10) Unique);
Insert Into A Values (Null);
Insert Into A Values (1);
Insert Into A Values (2);
[code]...
If we are saying that each NULL value counts as unique, then why does Oracle's DISTINCT collapse the NULLs into a single row?
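A small demonstration of the asymmetry, reusing the table above (a sketch): the unique constraint ignores keys that are entirely NULL, while DISTINCT and GROUP BY treat all NULLs as one group.

Insert Into A Values (Null);              -- a second NULL is accepted by the unique constraint
Select Count(*) From A Where A Is Null;   -- two NULL rows are stored
Select Distinct A From A;                 -- but DISTINCT returns only one NULL row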
What's the difference between a dirty buffer and a redo buffer?
My understanding is that a dirty buffer is a changed buffer: whenever data changes in the buffer cache, the buffer is marked dirty. A redo buffer keeps track of the changes that were made to the data, so it also refers to changed data. DBWn writes dirty buffers to disk and LGWR writes redo data to the redo log files. How can we differentiate between the two?
We had an issue with a PL/SQL package taking hours to run as a concurrent program. Database version is 10.2.0.4.0, running on Linux x86 64-bit. A tkprof'd trace file revealed the problem SQL statement to be a cursor. This one SQL statement would run for 3+ hours. I copied the SQL statement and ran it in TOAD and it completed in seconds, returning the exact same result set. To resolve the issue in the PL/SQL package I created a global temp table and ran the exact same SQL statement as an INSERT into the global temp table.
Again, instead of hours, the SQL statement completes in seconds. If I revert the change, it goes back to taking hours. I've attached the relevant sections of the tkprof showing the two SQL statements (identical other than the insert in front of one) and the resulting explain plans and performance data. I've always been under the impression that a cursor was a better option than a temp table and I've never run into a situation where the same SQL statement runs so much longer when executed as a cursor.
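For context, the workaround described above amounts to something like this (a sketch; the table name, columns and source query are placeholders, not the actual statements from the package):

-- created once, outside the package
create global temporary table gtt_results (
  id    number,
  descr varchar2(100)
) on commit preserve rows;

-- inside the package: populate the GTT with the problem SELECT, then read from it
insert into gtt_results (id, descr)
select id, descr from source_table;   -- hypothetical stand-in for the slow cursor query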
Attached File(s): SQL_As_Cursor.jpg, Explain_for_SQL_As_Cursor.jpg, SQL_as_Insert.jpg, Explain_for_SQL_As_Insert.jpg
Oracle Server 11g on HP-UX Oracle Client on Windows
I am using the Swingbench tool to generate load on the DB, and with an OLTP-like benchmark I am comparing the performance of plain data and encrypted data.
I have created two different databases, one for TDE and one for plain data, and populated the same number of rows in both. Then I run the benchmark, using sar to collect disk I/O and vsar for CPU usage.
From the sar report it seems that plain Oracle has faster transactions and uses the least CPU, but when I look into the reads/writes, TDE shows lower figures than plain.
If TDE needs to encrypt the data before storing it on disk, it should occupy more space than the plain data, and then the I/O should be higher for TDE.
Note: the DB parameters are the same, the number of rows in the tables is the same, and the file system and its block size are the same. I run Swingbench separately for each database.
I am attaching the Excel sheet with the sar results. Let me know if you need more information.
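One thing worth checking before comparing the numbers: with tablespace-level TDE, blocks are encrypted in place and stay the same size, so the on-disk footprint (and therefore the read/write volume) is not necessarily expected to grow. A quick sanity check on both databases (a sketch; 'SOE' is the usual Swingbench order-entry schema, adjust to whatever was loaded):

-- confirm which tablespaces are actually encrypted
select tablespace_name, encrypted from dba_tablespaces;

-- compare the segment footprint in the two databases
select sum(bytes)/1024/1024 as mb
from   dba_segments
where  owner = 'SOE';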
I want to move data between two instances, and I recommended we create a local database link to PULL data from the remote database located here (supplier on site), but they want to PUSH data to us. I thought you could only PULL data over a database link, but then I read the link [URL] where PUSH is considered. I was going to use a standard CTAS like create table a as select * from a@<remote_db_link>, which works well and fast (tried and tested), but some are saying they think PUSH is quicker/better?
We do have a data "PUSH" already, but it does not use a db link; effectively it calls a local procedure here and passes one row of data at a time, which is slow, i.e. for a 1000-row table to be pushed to us our local procedure is called 1000 times.
I have always suggested a PULL with a db link as the fastest method. Is there any proof or info on a fast PUSH method (that is quicker than PULL)? Can you REALLY push?
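You can push: ordinary DML over a database link works in the outbound direction too, it just has to run on the side that holds the data. A sketch with placeholder link names (remote_db_link points at the supplier from our side, our_db_link points at us from theirs):

-- PULL, run here, as already tested
create table a as select * from a@remote_db_link;

-- PUSH, run at the supplier site over a link that points at our database
insert into a@our_db_link
select * from a;
commit;

Either way the rows travel across the same link; in practice the set-based CTAS or INSERT ... SELECT usually matters far more than which end initiates the transfer, and both avoid the row-by-row procedure calls described above.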
With a very large database (VLDB) for a data warehouse (DW) using primarily a star-based schema, in an environment in which time (both human and CPU) is orders of magnitude more valuable than storage capacity, is there any significant difference in query performance when tables have all fixed-length (CHAR) columns compared to tables with variable-length (VARCHAR2) columns?
I realize this is one of those "in general" questions, so considering "a given VLDB DW environment" with all other things being equal: what, if any, is the time-based performance difference between a database of tables with all fixed-size columns versus one of tables with variable-length columns?
A database containing inventory data has been migrated from Oracle 10g to Oracle 11g. I have access to both the Oracle 10g and the Oracle 11g database on different client computers. Both databases use the same character set, WE8MSWIN1252 (query shown below). However, the results from the SQL SELECT show incorrectly displayed characters. I would like the "1/2" character and the degree character to show in the text. The ASCIISTR function shows that the underlying ASCII is the same in the two copies of the database.
Is there a setting that needs to be changed in Oracle 11g so that the saved special characters in the database show correctly (as in Oracle 10g)?
Query of the database character set:
SQL> Select value from SYS.NLS_DATABASE_PARAMETERS where PARAMETER = 'NLS_CHARACTERSET';

WE8MSWIN1252
Under Oracle 11g, this is a query on DSI using SQL*Plus 11.2.0.1.0.
SQL> select description from part where id = '57234';
DESCRIPTION
----------------------------------------
KL BRKT PLN 22╜░ ANGLE (AMER BOT RAIL)

SQL> select asciistr(description) from part where id='57234';

ASCIISTR(DESCRIPTION)
--------------------------------------------------------------------------------
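Since ASCIISTR shows the stored data is identical, the difference is most likely on the client side rather than in the database. A sketch of what could be checked next (the NLS_LANG value shown is an assumption about the client code page, not something from the original post):

-- confirm byte-for-byte what is stored, independent of client display
select dump(description, 1016) from part where id = '57234';

-- on the Windows client, the client character set is taken from NLS_LANG
-- before SQL*Plus starts, e.g.:
--   set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252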
We are running a front-end application on classic ASP and we are using the Microsoft OLE DB provider to connect to an Oracle 9i database. Now, as the number of users increases daily, the application performance is degrading day by day.
My question is: will Oracle OLE DB improve the performance of my front-end application? And is it possible for me to migrate from Microsoft OLE DB to Oracle OLE DB without many changes in the application?
I have a particular piece of SQL code which works perfectly fine in SQL Developer. But if I run the same SQL code through a batch file it does not get executed. It does not throw an error either.
SQL code - clean_tables.sql
begin
  execute immediate 'drop table external_tables';
  execute immediate 'drop table security';
exception
  when others then null;
end;
Batch file - Clean.bat
set ORACLE_SID=orcl
set ORACLE_HOME=C:\oracle\product\11.2.0\dbhome_1
set PATH=C:\oracle\product\11.2.0\dbhome_1\BIN
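As posted, the batch file only sets environment variables; to run the script it would also need a SQL*Plus call along these lines (the credentials below are placeholders, not from the original post). Note too that SQL*Plus needs a / on its own line after end; in clean_tables.sql to actually execute the anonymous block, which SQL Developer does not require.

rem hypothetical invocation; replace user/password with real credentials
sqlplus -s user/password@orcl @clean_tables.sql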
First, I'm aware that the equals (=) operator is a "comparison operator compares two values for equality." In other words, in an SQL statement, it won't return true unless both sides of the equation are equal. For example:
SELECT * FROM Store WHERE Quantity = 200;

The LIKE operator "implements a pattern match comparison" that attempts to match "a string value against a pattern string containing wild-card characters."
For example:
SELECT * FROM Employees WHERE Name LIKE 'Chris%';
Here, when querying date-type data in an Oracle database, I found the following. When I write the select statement this way:
SELECT ACCOUNT.ACCOUNT_ID, ACCOUNT.LAST_TRANSACTION_DATE FROM ACCOUNT WHERE ACCOUNT.LAST_TRANSACTION_DATE LIKE '30-JUL-07';
I get all the rows I'm looking for, but when I use the equals sign = instead:
SELECT ACCOUNT.ACCOUNT_ID, ACCOUNT.LAST_TRANSACTION_DATE FROM ACCOUNT WHERE ACCOUNT.LAST_TRANSACTION_DATE = '30-JUL-07';
I get nothing, even though nothing is different except the equals sign.
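The usual explanation is that LAST_TRANSACTION_DATE carries a time component: with =, the string '30-JUL-07' is implicitly converted to midnight on that day and matches nothing, while LIKE implicitly converts the date to a string in the default date format and pattern-matches it. A sketch of a predicate that returns the whole day regardless of time of day (assuming the column is a DATE):

SELECT account.account_id, account.last_transaction_date
FROM   account
WHERE  account.last_transaction_date >= DATE '2007-07-30'
AND    account.last_transaction_date <  DATE '2007-07-31';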
-- Option 1 - Selecting sequence.NEXTVAL from DUAL into a variable
DECLARE
  v_seq_num NUMBER;
BEGIN
  SELECT SEQ_ID.NEXTVAL INTO v_seq_num FROM DUAL;
  INSERT INTO TABLEA (COL1, COL2) VALUES (v_seq_num, 'test');
  INSERT INTO TABLEB (COL3) VALUES (v_seq_num);
END;
-- Option 2 - Using sequence.NEXTVAL in the INSERT with a RETURNING INTO clause
DECLARE
  v_seq_num NUMBER;
BEGIN
  INSERT INTO TABLEA (COL1, COL2) VALUES (SEQ_ID.NEXTVAL, 'test')
  RETURNING COL1 INTO v_seq_num;
  INSERT INTO TABLEB (COL3) VALUES (v_seq_num);
END;
I am currently in the favorable situation of having excess memory available on the database server, a single-node setup. The server serves only the single instance and no other processing. Database size is around 2.3 TB and memory is 50 GB. For the majority of processing, AIX is allocating a significant amount (anywhere from 30-40%) of the memory to the AIX file system cache (persistent pages).
I've been trying to find documentation about this, but have not had any luck yet. My guess is that it would be better to allow Oracle to cache this data, meaning increase the SGA target and max size to allow for a larger buffer cache. However, the nice thing about the AIX cache is that if process memory is needed, the file system cache gives up pages. If the memory is allocated to the SGA, it is pretty much locked in.
I have read several articles stating that a larger buffer cache is not always better, as a larger cache takes more management. But having both caches active seems to be a waste of memory, effectively storing the data twice: once in AIX persistent pages and a second time in the Oracle database buffer cache.
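A common way to resolve the double buffering (a sketch, not a recommendation for this specific system; the 35G figure is only an illustration) is to have Oracle bypass the file system cache with direct I/O and then give the reclaimed memory to the SGA:

alter system set filesystemio_options = setall scope=spfile;  -- direct + asynchronous I/O
alter system set sga_max_size = 35G scope=spfile;             -- illustrative value only
alter system set sga_target   = 35G scope=spfile;
-- both filesystemio_options and sga_max_size require an instance restart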
Suppose you have three tables (yr09, yr10, yr11) containing 2009, 2010 and 2011 data respectively, and a view (vw_yr091011) with a "union all" over all three.
Question: Will the performance be the same for the following two queries?
Question: Will Oracle read all three tables in the view when we search for only one year?
select count(*) from yr09 where year = 2009;
-- vs
select count(*) from vw_yr091011 where year = 2009;
The following link says yes, the performance remains the same.
Link: [URL]
But when I tried it on a volume of 14,000 records, the count came out the same, yet the view took 50 seconds longer and the explain plan shows it accessed all three tables.
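One thing that lets the optimizer skip branches of a UNION ALL view is a check constraint on each base table, so it can prove a branch cannot satisfy the predicate (a sketch, assuming the year column exists in all three tables and is NOT NULL):

alter table yr09 add constraint yr09_year_ck check (year = 2009);
alter table yr10 add constraint yr10_year_ck check (year = 2010);
alter table yr11 add constraint yr11_year_ck check (year = 2011);

select count(*) from vw_yr091011 where year = 2009;
-- the plan should now show filter steps that prune the yr10 and yr11 branches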
Between Hibernate optimistic locking and database isolation levels, which one should be used? Which gives better consistency, concurrency and scalability? I read in a couple of links that isolation levels will suffer if there is a huge load on the application with multiple users accessing it at the same time; moreover, with isolation levels we normally need to look at READ_COMMITTED and NON_REPEATABLE_READ to get better performance. Is that true? Can we use both Hibernate optimistic locking (version and timestamp) and database isolation levels in the same application? What are the implications of using these? Which one is preferred over the other, and when?