Server Administration :: NLS_DATABASE_PARAMETERS Versus NLS_INSTANCE_PARAMETERS

Feb 15, 2013

NLS_LANGUAGE and NLS_TERRITORY exist at both the database and instance level. It makes sense to me to set these parameters for the session and for the instance, but why for the database? As far as I know, the most important database-level parameters are NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET (besides others) - so why NLS_LANGUAGE and NLS_TERRITORY?

At the beginning I thought that maybe, if I don't set NLS_LANGUAGE and NLS_TERRITORY for the instance, they would be taken automatically from NLS_DATABASE_PARAMETERS, but I checked this and it does not behave that way.

So even though I leave these two parameters null for my instance, after I start the instance they are set to AMERICAN_AMERICA instead of the POLISH_POLAND I have at the database level.
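For reference, a minimal sketch of how the three levels can be compared side by side using the standard dictionary views (note that NLS_INSTANCE_PARAMETERS only lists parameters explicitly set for the instance, which matches the behaviour described above):

-- Compare NLS_LANGUAGE / NLS_TERRITORY at database, instance and session level
SELECT d.parameter,
       d.value AS database_value,
       i.value AS instance_value,
       s.value AS session_value
FROM   nls_database_parameters d
       LEFT JOIN nls_instance_parameters i ON i.parameter = d.parameter
       LEFT JOIN nls_session_parameters  s ON s.parameter = d.parameter
WHERE  d.parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY');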

View 12 Replies



Server Administration :: Single Big Tablespace Versus Multiple Tablespace?

Jan 26, 2011

My database version is

Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production

My os version is

Linux damdat01 2.6.18-128.7.1.el5 #1 SMP Wed
Aug 19 04:00:49 EDT 2009 x86_64 x86_64 x86_64
GNU/Linux

My database is an OLTP system.

My question is: what are the advantages and disadvantages of having one single tablespace versus multiple tablespaces?

A single tablespace is easy to maintain, but it makes I/O issues harder to track down.
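To illustrate the I/O-tracking point, a rough sketch of a per-tablespace I/O query against the standard dynamic views (the exact columns chosen here are an assumption to adapt): with one big tablespace all datafile I/O rolls up into a single bucket, whereas with several tablespaces the hot ones stand out.

-- Physical reads/writes aggregated per tablespace
SELECT df.tablespace_name,
       SUM(fs.phyrds)  AS physical_reads,
       SUM(fs.phywrts) AS physical_writes
FROM   v$filestat fs
       JOIN dba_data_files df ON df.file_id = fs.file#
GROUP  BY df.tablespace_name
ORDER  BY physical_reads DESC;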

View 7 Replies View Related

Server Administration :: One Schema Versus Multiple Schema

Feb 1, 2010

Should we use a single-schema setup or a multiple-schema setup for application development? Which option is recommended, and what are the pros and cons of each approach?

View 4 Replies View Related

Server Utilities :: EXP Versus IMPDP Interrogating Dump File

Jul 16, 2010

I have a bit of an issue with Oracle datapump dump files.

Today, I manage the export and import of Oracle dump files. As part of the batch export process I have a script which essentially says:

For each schema related to my application in THIS instance, export the schema via the SYSTEM user (the SYSTEM user gives me privileges on all schemas).

On the import UI side of things I am able to run a "head -20" command on the dmp file and determine the "export client version", "date the schema was dumped", and "what schema it was dumped from". All useful info presented in my UI.

Sample output: Begin

EXPORT:V09.02.00
DSYSTEM
RUSERS
8192
Wed Jun 30 11:51:21 UserXXX.dmp
#C##
#C##

[code]....

Sample output: End

This matters because I allow production schemas to be imported into test schemas (contained in a different tablespace). Based on the naming convention I can determine the schema type (production or test). Additionally, and probably most importantly, I can be sure where the data came from.

Looking at "expdp" and its dump file using the same method as above, the Data Pump dump does NOT appear to carry similar headers. Because of this, I can extract very little useful info from the dump file.

I realize I could run the impdp with the "sqlfile=myfile.sql" and then interrogate the sql file for the info. But on large dump files this would be fairly time consuming compared to a "head -20" on a dump file.
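One possible alternative, sketched below on the assumption that the dump file sits in a directory object such as DATA_PUMP_DIR: DBMS_DATAPUMP.GET_DUMPFILE_INFO reads only the file header, so it should be closer in cost to a "head -20" than to a full sqlfile pass.

SET SERVEROUTPUT ON
DECLARE
  v_info     sys.ku$_dumpfile_info;  -- collection of (item_code, value) header items
  v_filetype NUMBER;                 -- file type code (Data Pump dump vs original export)
BEGIN
  dbms_datapump.get_dumpfile_info(
    filename   => 'UserXXX.dmp',     -- example file name from the post
    directory  => 'DATA_PUMP_DIR',   -- assumed directory object
    info_table => v_info,
    filetype   => v_filetype);
  dbms_output.put_line('File type code: ' || v_filetype);
  FOR i IN 1 .. v_info.COUNT LOOP    -- items include version, creation date and source details
    dbms_output.put_line('Item ' || v_info(i).item_code || ' = ' || v_info(i).value);
  END LOOP;
END;
/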

View 4 Replies View Related

SQL & PL/SQL :: Procedure Versus Function?

Dec 27, 2011

What is the exact difference between a procedure and a function, and when should we choose one over the other?
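A minimal sketch of the core difference (the names here are made up): a function returns a value and can be called inside SQL, while a procedure is invoked as a statement for its side effects.

-- Function: returns a value, usable inside SQL
CREATE OR REPLACE FUNCTION net_price(p_price NUMBER, p_tax_rate NUMBER)
  RETURN NUMBER IS
BEGIN
  RETURN p_price * (1 + p_tax_rate);
END;
/

SELECT net_price(100, 0.23) FROM dual;

-- Procedure: performs an action, called as a statement
CREATE OR REPLACE PROCEDURE log_message(p_text VARCHAR2) IS
BEGIN
  DBMS_OUTPUT.PUT_LINE(p_text);
END;
/

EXEC log_message('hello');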

View 3 Replies View Related

Database Versus System Statistics

Aug 26, 2011

In the article regarding gathering CBO Statistics, it states: QUOTE When an Oracle database is created, a job will be scheduled that will generate the database statistics for you. You will still need to collect system statistics however, as these are not collected by the automatic statistics gathering mechanism.

What is the difference between "database statistics" and "system statistics"? In other words, do I need to run this script for each schema owner in my 10g/11g instance?

variable whoami varchar2(20);
begin
select user into :whoami from dual;
end;
/
exec dbms_stats.gather_schema_stats( -
ownname => :whoami, -
options => 'GATHER AUTO', -
estimate_percent => 15, -
cascade => true);
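As for the other half of the question: system statistics describe the CPU speed and I/O characteristics of the host rather than any schema, so they are gathered once for the whole database, not per schema owner. A hedged sketch (the 60-minute interval is just an example):

-- Gather workload system statistics over a 60-minute window
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 60);

-- Or, when no representative workload is available:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'NOWORKLOAD');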

View 2 Replies View Related

Column Length Versus Size

Oct 6, 2011

If one of the columns is defined as

ABC varchar2(10)

what is the maximum size, in bytes, of the data that this column is going to hold?
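As a rough sketch of the answer space (assuming the default byte-length semantics): VARCHAR2(10) stores up to 10 bytes unless character semantics are declared, in which case multibyte characters can push the byte size higher.

CREATE TABLE length_demo (
  abc_bytes VARCHAR2(10 BYTE),   -- at most 10 bytes of data
  abc_chars VARCHAR2(10 CHAR)    -- at most 10 characters, possibly more than 10 bytes
);

-- Compare character count with the actual storage in bytes
SELECT abc_chars, LENGTH(abc_chars) AS chars, LENGTHB(abc_chars) AS bytes
FROM   length_demo;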

View 5 Replies View Related

SQL & PL/SQL :: Unique Constraint Versus Distinct?

Apr 30, 2013

I have a question about the functionality of a unique constraint versus the DISTINCT clause. Below is the example that is confusing me a lot.

--Below statement will create table and unique constraint
Create Table A (A Varchar2 (10) Unique);
Insert Into A Values (Null);
Insert Into A Values (1);
Insert Into A Values (2);

[code]...

If each NULL value is treated as unique (which is why the constraint accepts several of them), then why does DISTINCT treat the NULL rows as duplicates of each other?
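A short demonstration of the behaviour, assuming the table created above: a single-column unique constraint does not reject additional NULLs (a wholly NULL key is simply not checked), while DISTINCT treats all NULLs as duplicates of one another and returns a single NULL row.

-- Both of these succeed even with the unique constraint in place
Insert Into A Values (Null);
Insert Into A Values (Null);

-- Returns the non-NULL values plus one NULL row: DISTINCT collapses the NULLs
Select Distinct A From A;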

View 3 Replies View Related

Dirty Versus Redo Buffer?

Mar 3, 2010

What's the difference between a dirty buffer and a redo buffer?

My understanding is that a dirty buffer is a changed buffer: whenever data changes in the buffer cache, the buffer is marked as dirty. A redo buffer also keeps track of changes that were made to the data, so it refers to changed data as well... DBWn writes dirty buffers to disk and LGWR writes redo data to the redo log files. How can we differentiate between the two?

View 2 Replies View Related

Incremental Versus Differential Backup?

Jan 18, 2013

What is the difference between an incremental and a differential backup?

View 5 Replies View Related

Server Administration :: Create Tablespace For Administration

Nov 29, 2010

I'm a student currently learning database administration and security.

I need to create a tablespace for database administration, but I don't know what datafile settings are best suited for admin usage.

I have attached the schema that was given to me for this assignment.
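As a starting point, a hedged sketch of a small locally managed tablespace; the name, file path and sizes are placeholders to adapt to the assignment.

CREATE TABLESPACE admin_ts
  DATAFILE '/u01/oradata/ORCL/admin_ts01.dbf' SIZE 100M
  AUTOEXTEND ON NEXT 10M MAXSIZE 1G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;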

View 12 Replies View Related

Cursor Versus Global Temp Table

Jan 16, 2013

We had an issue with a PL/SQL package taking hours to run as a concurrent program. Database version is 10.2.0.4.0, running on Linux x86 64-bit. A tkprof'd trace file revealed the problem SQL statement to be a cursor. This one SQL statement would run for 3+ hours. I copied the SQL statement and ran it in TOAD and it completed in seconds, returning the exact same result set. To resolve the issue in the PL/SQL package I created a global temp table and ran the exact same SQL statement as an INSERT into the global temp table.

Again, instead of hours, the SQL statement completes in seconds. If I revert the change, it goes back to taking hours. I've attached the relevant sections of the tkprof showing the two SQL statements (identical other than the insert in front of one) and the resulting explain plans and performance data. I've always been under the impression that a cursor was a better option than a temp table and I've never run into a situation where the same SQL statement runs so much longer when executed as a cursor.
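For context, a minimal sketch of the workaround described above (table and column names are placeholders): the problem SQL runs once as a set-based INSERT and the package then reads the temporary table instead of fetching from the slow cursor.

-- One-time DDL: rows are private to the session and vanish at session end
CREATE GLOBAL TEMPORARY TABLE gtt_results (
  id    NUMBER,
  value VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- Inside the package, instead of opening the slow cursor:
INSERT INTO gtt_results (id, value)
SELECT id, value FROM some_source_view;   -- the original problem SQL goes here

-- Subsequent processing then loops over gtt_results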

Attached File(s)

SQL_As_Cursor.jpg (274.02K)
Explain_for_SQL_As_Cursor.jpg (189.43K)
SQL_as_Insert.jpg (277.38K)
Explain_for_SQL_As_Insert.jpg (180.66K)

View 2 Replies View Related

Performance Comparison TDE Versus Plain Tablespace

Dec 9, 2008

Environment Setup

Oracle Server 11g on HP-UX
Oracle Client on Windows

I am using the swingbench tool to generate load on the DB and, using an OLTP-like benchmark, I am comparing the performance of plain data versus encrypted data.

I have created two different databases, one for TDE and the other for plain data, and populated the same number of rows in both. Then I run the benchmark and use sar to collect disk I/O and vsar for CPU usage.

From the sar report it seems that Oracle with plain data has faster transactions and uses less CPU. But when I look at the reads/writes, TDE shows lower values than plain.

If TDE needs to encrypt the data before storing it on disk, it should occupy more space than the plain data, and then the I/O should be higher for TDE.

Note: the DB parameters are the same, the number of rows in the tables is the same, and the file system and its block size are the same. I run swingbench separately for each database.

I am attaching an Excel sheet with the sar results. Let me know if you need more information.
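For reference, a sketch of how the encrypted side of such a comparison is typically set up with 11g tablespace encryption (wallet setup omitted; the name, path and algorithm are assumptions). Tablespace-level TDE encrypts whole blocks, and an encrypted block is the same size as a plain one, which would be consistent with seeing similar rather than larger I/O volumes.

-- Requires an open encryption wallet
CREATE TABLESPACE tde_ts
  DATAFILE '/u01/oradata/TDE/tde_ts01.dbf' SIZE 1G
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);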

View 7 Replies View Related

PUSH Versus PULL Tables Between Two Instances?

Oct 19, 2010

I want to move data between two instances, and I recommended we create a local database link to PULL data from the remote database located here (supplier on site), but they want to PUSH data to us. I thought you could only PULL data over a database link, but then I read the link [URL] where PUSH is considered. I was going to use a standard CTAS like create table A as select * from table A@<remote_db_link>, which works well and fast (tried and tested), but some are saying they think PUSH is quicker/better.

We do have a data "PUSH" already, but it does not use a db link - effectively it calls a local procedure here and passes one row of data at a time, and it is slow: for a 1000-row table to be pushed to us, our local procedure is called 1000 times.

I have always suggested a PULL over a db link as the fastest method - is there any proof or info on a fast PUSH method (one that is quicker than a PULL)? Can you REALLY push?
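A minimal sketch of the two set-based options over a database link (link and table names are assumptions), either of which avoids the row-by-row procedure calls:

-- PULL: run here, reading from the remote site
CREATE TABLE a_copy AS SELECT * FROM a@remote_db_link;

-- or, into an existing table, with a direct-path hint
INSERT /*+ APPEND */ INTO a_copy SELECT * FROM a@remote_db_link;

-- PUSH: run at the supplier site, writing into our table over their link to us
INSERT INTO a_copy@our_db_link SELECT * FROM a;
COMMIT;

One commonly cited reason a PULL tends to win is that direct-path (APPEND) loading is not available when the insert target sits on the remote side of the link.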

View 2 Replies View Related

Performance Of CHAR Versus VARCHAR2 In VLDB DW

Jul 20, 2010

With a very large database (VLDB) for a data warehouse (DW), using primarily a star-based schema, in an environment in which time (both human and CPU) is orders of magnitude more valuable than storage capacity, is there any significant difference in query performance when tables have all fixed-length (CHAR) columns compared to tables with variable-length (VARCHAR2) columns?

I realize this is one of those "in general" questions, so considering "a given VLDB DW environment" with all other things being equal: what, if any, is the time-based performance difference between a database of tables with all fixed-size columns versus one of tables with variable-length columns?

View 2 Replies View Related

SQL & PL/SQL :: Different Special Character Display Oracle 10 Versus 11g?

Sep 17, 2012

A database containing inventory data has been migrated from Oracle 10g to Oracle 11g. I have access to both the Oracle 10g and Oracle 11g databases on different client computers. Both databases use the same character set, WE8MSWIN1252 (query shown below). However, the results from the SQL SELECT show incorrectly displayed characters. I would like the "1/2" character and the degree character to show in the text. The ASCIISTR function shows that the underlying character codes are the same in the two copies of the database.

Is there a setting that needs to be changed in Oracle 11g so that the special characters saved in the database show correctly (as they did in Oracle 10g)?

Query of database character set

SQL> Select value from SYS.NLS_DATABASE_PARAMETERS where PARAMETER = 'NLS_CHARACTERSET'
WE8MSWIN1252

Under Oracle 11g, this is a query on DSI using SQLPLUS 11.2.0.1.0.

SQL> select description from part where id = '57234';

DESCRIPTION
----------------------------------------
KL BRKT PLN 22╜░ ANGLE (AMER BOT RAIL)
SQL> select asciistr(description) from part where id='57234';
ASCIISTR(DESCRIPTION)
--------------------------------------------------------------------------------

[code].....

View 6 Replies View Related

SQL & PL/SQL :: Select From Dual Versus Equals Operator?

Mar 14, 2011

I have a package function which is wrapped, so I cannot see the code. The package function raises a user-defined exception when called like this:

SELECT ABC.*
FROM ABC
WHERE ABC.A = PACK.FUNC(ABC.B,ABC.C)

But it does not raise any exception, and the query works absolutely fine and generates the desired results, when written like this:

SELECT ABC.*
FROM ABC
WHERE ABC.A = (SELECT PACK.FUNC(ABC.B,ABC.C) FROM DUAL)

View 6 Replies View Related

Windows :: Oracle Versus Microsoft OLEDB

Sep 2, 2011

We are running a front-end application on Classic ASP, and we are using the Microsoft OLE DB provider to connect to an Oracle 9i database. Now, as the number of users increases daily, the application performance is degrading day by day.

My question is: will the Oracle OLE DB provider increase the performance of my front-end application? And is it possible for me to migrate from Microsoft OLE DB to Oracle OLE DB without many changes to the application?

View 1 Replies View Related

Database Control Versus Enterprise Manager

Mar 11, 2012

I always thought that Database Control and Enterprise Manager were synonyms... But I am reading a mock OCA exam and there it said:

QUOTE You can apply the patch binaries only using Database Control and the OPatch utility, but not with Enterprise Manager...

But to me Database Control and Enterprise Manager are the same thing...

Are there difference between them?

View 1 Replies View Related

Windows :: Batch File Versus SQL Developer

Aug 23, 2012

I have a particular piece of SQL code which works perfectly fine in SQL Developer. But if I run the same SQL code through a batch file, it does not get executed, and it does not throw an error either.

SQL code - clean_tables.sql

begin
execute immediate 'drop table external_tables';
execute immediate 'drop table security';
exception
when others then
null;
end;

Batch file - Clean.bat

set ORACLE_SID=orcl
set ORACLE_HOME=C:\oracle\product\11.2.0\dbhome_1
set PATH=C:\oracle\product\11.2.0\dbhome_1\BIN

sqlplus security/password@orcl < c:\Report\clean_tables.sql

pause

View 2 Replies View Related

PL/SQL :: Equals (=) Versus LIKE For Date Data Type

Sep 2, 2013

First, I'm aware that the equals (=) operator is a comparison operator that "compares two values for equality."  In other words, in an SQL statement, it won't return true unless both sides of the comparison are equal.  For example:

SELECT * FROM Store WHERE Quantity = 200;

The LIKE operator "implements a pattern match comparison" that attempts to match "a string value against a pattern string containing wild-card characters."

For example:

SELECT * FROM Employees WHERE Name LIKE 'Chris%'; 

Here I queried DATE-type data in an Oracle database and found the following. When I write the SELECT statement this way:

SELECT ACCOUNT.ACCOUNT_ID, ACCOUNT.LAST_TRANSACTION_DATE FROM ACCOUNT WHERE ACCOUNT.LAST_TRANSACTION_DATE LIKE '30-JUL-07';

I get all the rows I'm looking for. But when I use the equals sign (=) instead:

SELECT ACCOUNT.ACCOUNT_ID, ACCOUNT.LAST_TRANSACTION_DATE FROM ACCOUNT WHERE ACCOUNT.LAST_TRANSACTION_DATE = '30-JUL-07';

I get nothing even though nothing is different except the equal sign.
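For what it is worth, a sketch of the usual explanation: LIKE forces an implicit TO_CHAR of the DATE using the session's NLS_DATE_FORMAT before the pattern match, while = compares the full DATE value including its time component, so rows stored with a non-midnight time never match a bare date string. An explicit range predicate avoids relying on either behaviour:

SELECT account_id, last_transaction_date
FROM   account
WHERE  last_transaction_date >= TO_DATE('30-JUL-07', 'DD-MON-RR')
AND    last_transaction_date <  TO_DATE('30-JUL-07', 'DD-MON-RR') + 1;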

View 6 Replies View Related

Performance Tuning :: Create Versus Rebuild Index

Jan 27, 2011

I was comparing the cost of rebuilding an index versus creating it... I carried out the following test:

SQL> create table t4 as select * from t1;

Table created.

SQL> create table t5 as select * from t1 where 1=2;

Table created.

SQL> create index i5 on t5(id);

Index created.

SQL> select bytes,extents,blocks from user_segments where segment_name='I5';

BYTES EXTENTS BLOCKS
---------- ---------- ----------
65536 1 8

SQL> alter index i5 unusable;

Index altered.

SQL> alter table t5 nologging;

Table altered.

SQL> Alter session set skip_unusable_indexes=True;

Session altered.

SQL> insert /*+ append */ into t5 select * from t1;

563904 rows created.

SQL> commit;

Commit complete.

Now I compared the cost (elapsed time, logical I/O) of the operations

create index i4 on t4(id);
Vs
alter index i5 rebuild online;

Following is the related trace of the above 2 steps.

create index i4 on t4(id)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 1 1.17 3.38 9497 7869 335 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.17 3.38 9497 7870 335 0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 5
[code]....

So which option should we pick in such cases? {Of course I haven't set 'nologging' for the indexes, but it is the same for both indexes we are comparing}

View 2 Replies View Related

Backup & Recovery :: Tablespace Versus Schema Export

Aug 22, 2012

I have taken a tablespace export and it came to 2.1 GB, and for the same user I have taken a schema export and it came to 5.1 GB.

Why is there such a large difference in size?

View 3 Replies View Related

SQL & PL/SQL :: Using Sequence.NEXTVAL From DUAL Versus In INSERT Statement?

Apr 8, 2013

I am trying to understand the difference between selecting sequence.NEXTVAL from DUAL and using it directly in an INSERT statement.

--Sequence Creation
CREATE SEQUENCE SEQ_ID START WITH 1 MINVALUE 1 NOCYCLE CACHE 500 NOORDER;
--Table1 Creation
Create table TABLEA (COL1 number, COL2 varchar2(10),
constraint COL1_PL primary key (COL1));
--Table2 Creation
Create table TABLEB(COL3 number);
alter table TABLEB add constraint COL1_FK foreign key(COL3) references TABLEA(COL1);

-- Option1 - Using sequence.NEXTVAL from DUAL

DECLARE
v_seq_num NUMBER;
BEGIN
SELECT SEQ_ID.NEXTVAL INTO v_seq_num FROM DUAL;
INSERT INTO TABLEA (COL1, COL2) VALUES (v_seq_num, 'test');
INSERT INTO TABLEB (COL3) VALUES (v_seq_num);
END;

-- Option2 - Using sequence.NEXTVAL in INSERT USING RETURNING INTO clause

DECLARE
v_seq_num NUMBER;
BEGIN
INSERT INTO TABLEA (COL1, COL2) VALUES (SEQ_ID.NEXTVAL, 'test') RETURNING COL1 INTO v_seq_num;
INSERT INTO TABLEB (COL3) VALUES (v_seq_num);
END;

View 9 Replies View Related

Forms :: Post-change Versus Validate Item

Oct 20, 2010

Tell me, in a simple way, the exact difference between WHEN-VALIDATE-ITEM and POST-CHANGE...

View 4 Replies View Related

RMAN :: Backup Sets Versus Image Copies

Feb 5, 2013

I am trying to understand: what is the advantage of RMAN image copy backups over backup sets (image copies going to disk under the FRA location)?

Database version: 11.2.0.3

View 2 Replies View Related

Performance Tuning :: Oracle Buffer Versus AIX Filesystem Cache

May 7, 2011

I am currently in the favorable situation in which I have excess amounts of memory available on the database server - a single node setup. The server only serves the single instance and no other processing. Database size is around 2.3tb and memory is 50gb. For the majority of processing, AIX is allocating a significant amount (anywhere from 30-40%) of the memory to the AIX file system cache (persistent pages).

I've been trying to find documentation about this, but have not had any luck yet. My guess is that it would be better to allow Oracle to cache this data - meaning increase the SGA target and max size to allow for a larger buffer cache. However, the nice thing about the AIX cache is that if process memory is needed, the file system cache gives up pages. If the memory were allocated to the SGA, it's pretty much locked in.

I have read several articles stating that a larger buffer cache is not always better, as a larger cache takes more management. But having both of the caches active seems to be a waste of memory, effectively storing the data twice - once in AIX persistent pages and a second time in the Oracle database buffer cache.

View 4 Replies View Related

Performance Tuning :: Access Direct Table Versus View

Dec 9, 2011

Suppose you have 3 tables (yr09, yr10, yr11) holding 2009, 2010 and 2011 data respectively, and a view (vw_yr091011) doing a "union all" over all three.

Question: Will the performance be the same for the following two queries?

Question: Will Oracle read all 3 tables in the view when we search for only one year?

select count(*) from yr09
where year = 2009;

-- vs

select count(*) from vw_yr091011
where year = 2009;

The following link says yes, the performance remains the same.

Link: [URL]..........

When I tried this on a volume of 14,000 records, the count came out the same, but the view took 50 seconds more, and the explain plan shows it accessed all three tables.
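For reference, a minimal sketch of the kind of view being discussed (names taken from the post, column list assumed). Whether the optimizer can skip the branches that cannot match usually depends on it being able to prove which years each table holds, for example via a CHECK constraint on the year column:

CREATE OR REPLACE VIEW vw_yr091011 AS
  SELECT * FROM yr09
  UNION ALL
  SELECT * FROM yr10
  UNION ALL
  SELECT * FROM yr11;

-- A check constraint per table can let the optimizer prune branches that cannot match
ALTER TABLE yr09 ADD CONSTRAINT yr09_year_chk CHECK (year = 2009);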

View 9 Replies View Related

Hibernate Optimistic Locking Versus Database Isolation Levels

Jan 26, 2013

Between Hibernate optimistic locking and database isolation levels, which one should be used? Which gives better consistency, concurrency and scalability? I read in a couple of links that isolation levels will suffer if there is a huge load on the application with multiple users accessing it at the same time, and moreover that with isolation levels we normally need to look at READ_COMMITTED and NON_REPEATABLE_READ to get better performance. Is this true? Can we use both Hibernate optimistic locking (version & timestamp) and database isolation levels in the same application? What are the implications of using these? Which one is preferred over the other, and when?

View 2 Replies View Related

Updating Table In Session - Shared Versus Exclusive Lock

Jan 28, 2013

I have a question. If we have two SCOTT sessions and I update table EMP in session 1, the updated rows are exclusively locked, so they cannot be modified by session 2. But can session 2 still run a SELECT against EMP? According to my understanding this should not work, but it does.
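A small demonstration of the behaviour being asked about, assuming the standard SCOTT.EMP table: in Oracle, readers are never blocked by writers, so the SELECT in session 2 succeeds and simply shows the pre-update, read-consistent data until session 1 commits.

-- Session 1: take row locks, do not commit yet
UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10;

-- Session 2: succeeds immediately, returning the unmodified (read-consistent) rows
SELECT empno, sal FROM emp WHERE deptno = 10;

-- Session 2: this, by contrast, would wait on session 1's row locks
UPDATE emp SET sal = sal + 100 WHERE deptno = 10;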

View 14 Replies View Related






