Server Utilities :: SELECT On External Is Very Slow?

Aug 10, 2013

I just did a 112 GB migration of production data using the ORACLE_DATAPUMP driver, so I know this works in principle. But when I tried it on my test instance I am seeing behaviour like this:

[oracle@aggs00.test for_test]$ ls -l aggs_day_conversion_agg_2419
-rw-r----- 1 oracle oracle 15917056 Aug 10 09:06 aggs_day_conversion_agg_2419
CREATE TABLE IMP_3251198_2419(
PARTITION_DATE DATE,
USER_ID NUMBER,
SID NUMBER,

[code]....

Executed in 1800.642 seconds

Why could it be taking 1,800 seconds to select one record from a not very big table? File corruption? Disk fragmentation? Oracle instance configuration?
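For reference, a minimal sketch of how such a dump-file-backed external table is typically defined (the directory path and the ORGANIZATION EXTERNAL clause below are my assumptions, not taken from the post). Every SELECT re-reads the dump file from disk, so file-system I/O problems show up directly as slow queries:

CREATE OR REPLACE DIRECTORY for_test AS '/u01/oracle/for_test';

CREATE TABLE imp_3251198_2419 (
  partition_date DATE,
  user_id        NUMBER,
  sid            NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY for_test
  LOCATION ('aggs_day_conversion_agg_2419')
);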


Server Utilities :: Slow Expdp After Upgrade

Oct 1, 2012

We had AIX on a 570 machine and database 10.2.0.4. We took an expdp export every night and it took about two hours to complete.

Now we have upgraded to 10.2.0.5 on a 770 machine, and the same command takes 6 hours to complete, even though both the database and the hardware have been upgraded.

Command is

expdp T24SILK/oracle directory=backup dumpfile=exp_beod_T24_%U_$dt.dmp logfile=exp_T24_$dt.log EXCLUDE=TABLE:"LIKE '%TRACE'" parallel=6
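One way to see where the extra hours go (a diagnostic sketch of my own, not from the original post) is to watch the Data Pump worker sessions and the long-running operations while the export runs:

-- Which Data Pump sessions exist and what they are currently waiting on
SELECT s.sid, s.serial#, s.event, s.state
FROM   v$session s
JOIN   dba_datapump_sessions d ON d.saddr = s.saddr;

-- Progress of long-running steps such as individual table exports
SELECT opname, target, sofar, totalwork, time_remaining
FROM   v$session_longops
WHERE  totalwork > 0 AND sofar <> totalwork;

If the workers spend most of their time on I/O waits, the new storage is the first suspect; if they sit on CPU or library cache waits, the problem is more likely inside the database.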


Server Utilities :: Slow Exports Using Exp Command Due To Open Sessions

Nov 11, 2011

I have had the following problem open with Oracle support since March 2011 (8 months), and still no resolution.

When I export all our schemas on Sunday night it takes about 1 hour 50 minutes. When I export the same schemas on any other night it takes 7 hours. The only difference is that on Sunday at 4:00 am we drop all connections in the connection pools and establish new ones. Then, 19 hours later, at 23:00 on Sunday, we perform the exports, and they take only about 2 hours to complete.

I have also tried recreating the connections in the connection pools during the week, and the exports have then taken only 2 hours to complete. But the following night, after the connections have been used during the day, the exports again take 7 hours. So it appears that export speed degrades significantly when there are many open connections that have been used and never closed.

From the Statspack report I found two SQL statements internal to the export command whose elapsed execution times differ by an order of magnitude between the fast export and the slow export (see below).

How can I speed up the exports without having to drop and recreate the database connections in the connection pools each night?

FAST:
elapsed_time: 430.90
executions: 161,388
Module: exp@Oracle1 (TNS V1-V3)
SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL
WHERE CNO = :1 ORDER BY COLNO

elapsed_time: 264.29
executions: 50,349
Module: exp@Oracle1 (TNS V1-V3)
SELECT TOWNER, TNAME, NAME, LENGTH, PRECISION, SCALE, TYPE, ISNULL, CONNAME, COLID, INTCOLID, SEGCOLID, COMMENT$, DEFAULT$, DFLTLEN, ENABLED, DEFER, FLAGS, COLPROP, ADTNAME, ADTOWNER, CHARSETID, CHARSETFORM, FSPRECISION, LFPRECISION, CHARLEN, TFLAGS, 100 FROM SYS.EXU8COL
WHERE TOBJID = :1 ORDER BY INTCOLID

SLOW:
elapsed_time: 8264.16
executions: 124,662
Module: exp@Oracle1 (TNS V1-V3)
SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL
WHERE CNO = :1 ORDER BY COLNO

elapsed_time: 3877.78
executions: 38,813
Module: exp@Oracle1 (TNS V1-V3)
SELECT TOWNER, TNAME, NAME, LENGTH, PRECISION, SCALE, TYPE, ISNULL, CONNAME, COLID, INTCOLID, SEGCOLID, COMMENT$, DEFAULT$, DFLTLEN, ENABLED, DEFER, FLAGS, COLPROP, ADTNAME, ADTOWNER, CHARSETID, CHARSETFORM, FSPRECISION, LFPRECISION, CHARLEN, TFLAGS, 100 FROM SYS.EXU8COL
WHERE TOBJID = :1 ORDER BY INTCOLID

I use the following export command for each schema:
$ORACLE_HOME/bin/exp user/pass file=somefile.dmp owner=$SCHEMA log=somelog.log buffer=9000000

I have an Oracle Standard Edition 11.1.0.7 database on 64-bit Linux with a 7 GB SGA. I currently export approx. 200 schemas each night (I use exp rather than Data Pump because Data Pump is a lot slower for us, and we can't use the parallel features of Data Pump on a Standard Edition database). The export normally takes 1 hour 50 minutes, which is approximately 2 schemas exported every minute. When the exports run slowly, each export takes almost 2 minutes to complete.

The database has about 20 GB of data and 50 GB of indexes. It also has approx. 500 connections via TopLink connection pools from 8 application servers.
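To confirm the correlation with long-lived pooled connections, one simple check (my own sketch, not something from the original thread) is to count the open sessions and how old they are just before the export starts:

SELECT program, machine, COUNT(*) AS sessions,
       ROUND(MAX(SYSDATE - logon_time), 1) AS oldest_days
FROM   v$session
WHERE  type = 'USER'
GROUP  BY program, machine
ORDER  BY sessions DESC;

If the slow nights always coincide with hundreds of day-old TopLink sessions, that supports the pattern described above.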


Server Utilities :: How To Get Proper Value In External Table

May 3, 2012

I am having trouble getting the proper value from the file into the external table.

How can I get the whole status into the STATUS column, e.g. Completed, InProgress, Incompleted?
Right now, if I give a position like (38:9), the full status doesn't show; if I give (38:11), then '|1' from the flat file gets appended to the status.

BATCH_NO FILE_DATE EMP_ID COMPANY_ID TRANSACTION_ID FILE_NAME STATUS DOC_NO
10000104252012100001***4252012**1:35:57***D100001***04252012***10:35:57***Diverified

[Code].....
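For illustration only (the positions and field names below are hypothetical, since the full record layout isn't reproduced here), a fixed-position field in the ORACLE_LOADER access parameters looks like this. The (start:end) range must cover exactly the characters that belong to the status; if it runs a position or two long it picks up the neighbouring '|1' delimiter, and if it is too short the status is truncated:

ACCESS PARAMETERS (
  RECORDS DELIMITED BY NEWLINE
  FIELDS (
    batch_no POSITION(1:8)   CHAR,
    status   POSITION(38:48) CHAR   -- 11 characters, positions 38 through 48 inclusive
  )
)

Alternatively, if the fields are really separated by '***' rather than being fixed width, defining them with a terminator instead of positions avoids the problem entirely.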


Server Utilities :: How To Create External Table

Aug 10, 2012

I'm trying to create an external table. I can load my data without any problem and everything is fine, but I get some behaviour with one column that I would like to understand. Here is the example:

[*] Sample Data
Line 1:333 1111111112009100000000000080000000013450.33
Line 2:11111111111220091016000000004.48
Line 3:222222222 220091016000000004.48
Line 4:(This is a blank line left)

And this is my External Table Create Query:

CREATE TABLE EXT_TABLE_TEMP
(COL_A VARCHAR2(11),
COL_B VARCHAR2(1),
COL_C DATE,
COL_D NUMBER(12,2))
ORGANIZATION EXTERNAL

[code]....

As you can see, I can load my table with no problem, but I always get 3 rows (counting the last blank line) if I use LOAD WHEN COL_A != BLANKS. I don't know if it's caused by the blank space left over from the fixed field lengths, but if I use LOAD WHEN COL_B != BLANKS I get the correct result of 2 rows instead of 3. I want to know why the (missing fields ...) and (reject rows ...) clauses are not working.

Note: COL_A can be 9-11 characters long; if its length is 9, then 2 spaces are left before the next field.
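A sketch of the kind of access parameters involved (the directory name, file name and exact positions are my assumptions, since the real definition is truncated above). LOAD WHEN is evaluated per record against the named field, so an empty last line can give different results depending on which field is tested and on whether that field is missing, null, or blank in that record:

ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_tab_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    LOAD WHEN (col_b != BLANKS)          -- reject records whose COL_B is blank
    FIELDS
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
    (
      col_a POSITION(1:11)  CHAR,
      col_b POSITION(12:12) CHAR,
      col_c POSITION(13:20) CHAR(8) DATE_FORMAT DATE MASK "yyyymmdd",
      col_d POSITION(21:43) CHAR
    )
  )
  LOCATION ('sample.dat')
)
REJECT LIMIT UNLIMITED;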


Server Utilities :: Skip Last Line In External Table

Jun 15, 2012

How do I skip the last record when loading through an external table?

In the example below I don't want to load the 000000005 record into the AAA external table.

My Data is

100001***04252012***06:02:40***CignaGlobalHealthBenefits
201441424_7076551_OLC_1234567899.aaa
201441424_7075703_OLC_3456789134.aaa
201442669_7075775_RIE_5432167891.aaa
700223567_7077646_ECS_2345678912.aaa
700331352_7078197_RIE_5678901234.aaa
000000005

[Code]...
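One common approach (a sketch based on my own assumptions about the layout) is to filter the trailer out with a LOAD WHEN condition in the access parameters. The literal below matches this particular sample file; in practice you would key the condition off something stable, such as a field that is always blank in the trailer record:

ACCESS PARAMETERS (
  RECORDS DELIMITED BY NEWLINE
  LOAD WHEN ((1:9) != "000000005")
  FIELDS TERMINATED BY '_'
  MISSING FIELD VALUES ARE NULL
)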


Server Utilities :: SQL Loader Or External Table With A Trigger?

Aug 14, 2013

I have to add a sequence value to each row of a large (288 million row) file when I load the file into the table. If I use SQL*Loader I can't use direct path, because I have a row-level trigger that assigns the sequence, and I am not sure whether an external table will be any faster, since the trigger will fire for every row there too. In this scenario, is one better than the other?
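One alternative worth considering (my sketch; the table, column and sequence names are invented) is to take the trigger out of the picture entirely: load through an external table and fetch the sequence in the INSERT itself, which keeps the statement eligible for direct-path (APPEND) loading:

-- Assumed objects: target table BIG_TABLE(id, col1, col2), sequence BIG_TABLE_SEQ,
-- and an external table BIG_TABLE_EXT defined over the 288-million-row flat file.
INSERT /*+ APPEND */ INTO big_table (id, col1, col2)
SELECT big_table_seq.NEXTVAL, e.col1, e.col2
FROM   big_table_ext e;
COMMIT;

SQL*Loader has a similar option in the control file (the SEQUENCE keyword in the column list), which also avoids a per-row trigger.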


Server Utilities :: Difference Between Sqlloader And External Tables?

Feb 9, 2011

I would like to know which of the above is faster for the same conditions.

i.e. if I am loading 1 million rows under the same conditions, which will perform faster?


Server Utilities :: Views Linked To External Tables

Mar 28, 2011

I just posted another topic where I heard about external tables, and I have a few questions concerning them. I thought it was better to create a new topic than to continue on the other one.

I noticed that to create an external table the CTL is like this:
CREATE TABLE emp_load (FIELDS description)
ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
ACCESS PARAMETERS (RECORDS FIXED 62 FIELDS (employee_number CHAR(2),

[Code]...

1) This creates an external table, but is it possible to create a normal table in a CTL file? For physical tables, the table has to exist already, right?

2) If you create a view linked to 2 external tables and the CSV files are updated each day, will the external tables reflect the new data automatically, and will the view as well? (See the sketch after this list.)

3) Can there be any synchronisation problems?

4) What happens if a SELECT runs against the tables (or against the view) while a CSV file is being updated?

5) Is there any way to protect access to those tables/views while the CSVs are being updated?

6) Is it possible to create an index on these sorts of tables?

7) Is it possible to index a view?

8) Are external tables visible in a tool like SQL Developer?
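To make question 2 concrete, here is a minimal sketch (all names are invented). The external tables are only metadata over the CSV files, so every query, including one through the view, re-reads the files as they are at that moment; there is no refresh step, but a file being rewritten mid-query can produce inconsistent or rejected rows:

CREATE TABLE sales_ext (item VARCHAR2(30), amount NUMBER)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY ',')
  LOCATION ('sales.csv'));

CREATE TABLE costs_ext (item VARCHAR2(30), cost NUMBER)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY ',')
  LOCATION ('costs.csv'));

-- The view simply joins the two external tables.
CREATE OR REPLACE VIEW sales_vs_costs AS
SELECT s.item, s.amount, c.cost
FROM sales_ext s JOIN costs_ext c ON c.item = s.item;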


Server Utilities :: External Table - Data Cartridge Error

Aug 6, 2010

I created the external table using the script below, but I get the following errors when I use it:

CREATE TABLE EXT_ST_FINANCEIRO_REAL (
DT_DATA NUMBER,
TIPO NUMBER,
ENTIDADE NUMBER,
VALOR Varchar2(40))
[code]....

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "missing" expecting one of: "column, exit,("
KUP-01007: at line 6 column 1
ORA-06512: at "SYS.ORACLE_LOADER", line 19
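KUP-00554/KUP-01005 generally mean the ACCESS PARAMETERS clause has a keyword in an unexpected place; here the parser hit the word "missing" where it expected a column name or an opening parenthesis. A shape that parses cleanly looks like the sketch below (the directory, delimiter and file name are my assumptions, since the real access parameters are truncated above):

CREATE TABLE ext_st_financeiro_real (
  dt_data  NUMBER,
  tipo     NUMBER,
  entidade NUMBER,
  valor    VARCHAR2(40)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ';'
    MISSING FIELD VALUES ARE NULL        -- belongs inside the FIELDS clause,
    (dt_data, tipo, entidade, valor)     -- immediately before the field list
  )
  LOCATION ('financeiro.txt')
)
REJECT LIMIT UNLIMITED;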


Server Utilities :: Getting Localconfig Is Not Recognized As Internal Or External Command

Dec 30, 2010

When I try to start the CSS service to create a new ASM instance on my own PC for testing purposes, I get the error below: "'localconfig' is not recognized as an internal or external command, operable program or batch file."


Server Utilities :: External Table Definition And Content Of CSV File

May 26, 2010

Below are the external table definition and the contents of the CSV file. Why does the first record fail?

0546-0*LB-CRP*16*"Tech", ZAO*29-DEC-2009*29-DEC-2010***A051453*RU*29-DEC-2009*0***21-MAY-2010*21-MAY-2010 --FAILED
0546-0*LB-CRP*16*ID"Tech", ZAO*29-DEC-2009*29-DEC-2010***A051453*RU*29-DEC-2009*0***21-MAY-2010*21-MAY-2010 -- SUCCESS
0546-0*LB-CRP*16*"Tech, ZAO"*29-DEC-2009*29-DEC-2010***A051453*RU*29-DEC-2009*0***21-MAY-2010*21-MAY-2010 -- SUCCESS
[code]....
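My reading of this (a guess, since the actual definition is in the truncated code above): with fields terminated by '*' and optionally enclosed by '"', a field whose value starts with a quote is treated as an enclosed field, and the parser then expects the closing quote to be followed immediately by the terminator. In "Tech", ZAO the closing quote is followed by a comma, so the record is rejected; ID"Tech", ZAO does not start with a quote and "Tech, ZAO" is properly enclosed, so both load. The relevant access parameters would look something like:

ACCESS PARAMETERS (
  RECORDS DELIMITED BY NEWLINE
  FIELDS TERMINATED BY '*' OPTIONALLY ENCLOSED BY '"'
  MISSING FIELD VALUES ARE NULL
)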


Server Utilities :: Possible To Configure Auditing To Get Number Of Rows Returned By Select Statements

Aug 22, 2011

I am importing some data from Oracle into another database on a regular basis. It works fine for most of the queries, but a couple of queries sometimes (at random) don't work. I don't get any errors, and no data either.

We switched on Oracle auditing to find out which queries are being sent to the Oracle DB, and we can see all the queries in the audit log. Is it possible to configure auditing to record the number of rows returned by SELECT statements, so that we can be sure some data was returned?


Slow Locally On Supersized Server

Apr 27, 2012

I have a nightly import (about 20 tables) and it takes up to 5 hours. We have one table of about 800,000 rows and the rest are between 1,000 and 200,000 rows, so this is very slow. When I monitor the import I see a very large amount of wait on SQL*Net message from client.

I run the import on the database server itself. If I check the current statement I see it moving from one statement to the next; for instance I have:

SELECT /*+ all_rows ordered */
"A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001329497'
FROM "REPORT"."CONTRACT_LVL" "A"
WHERE NOT (LENGTH (bonus_nat) <= 31)
then
SELECT /*+ all_rows ordered */
"A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001684584'
FROM "REPORT"."CONTRACT_LVL" "A"
WHERE NOT (LENGTH (outcome_cd) <= 1)

etc., and it takes hours. The DB is on Windows 2003 running Oracle RDBMS 9.2.0.7, while the import screen shows 185,000 rows imported. I also see the consistent gets for this session rising sharply at that time. Would it be better to export/import without statistics?

I also need to mention that the dump file comes from a Linux-hosted database, though I don't think that makes a difference for exp/imp. It's a PeopleSoft database with a lot of tables (more than 15,000), and if I take the table mentioned above and want to check its constraints it takes ages before Toad can display them. I have seen that we have an incredible number of constraints on those tables, which might be the reason.

I just wonder if the system catalog needs to be tuned. /* Update */ Now the bulk of the waits show up as "library cache lock".
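The statements shown above look like the recursive SQL Oracle runs to validate check constraints as they are enabled during the import. One thing commonly tried in this situation (a sketch, not a guaranteed fix; the file and user names are placeholders) is to skip constraints and statistics in the imp run and deal with the constraints separately afterwards, for example by enabling them NOVALIDATE from a DDL script of the source database:

imp system/password file=nightly.dmp fromuser=report touser=report constraints=n statistics=none buffer=10485760 log=imp_report.log

That avoids imp re-validating every check constraint row by row while the tables are loading.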


DB Performance Is Slow Using Oracle 11gR2 On Windows Server

May 25, 2013

I am using 11gR2 on Windows Server. This is a query that runs many times a day and badly affects the performance of the database. I don't have much idea about this query.

SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp,
       COUNT(username) AS failed_count
FROM   sys.dba_audit_session
WHERE  returncode != 0
AND    TO_CHAR(timestamp, 'YYYY-MM-DD HH24:MI:SS') >=
       TO_CHAR(current_timestamp - TO_DSINTERVAL('0 0:30:00'), 'YYYY-MM-DD HH24:MI:SS')


Server Administration :: SLOW Login As User To DB Via Sqlplus?

Mar 2, 2012

I have a situation where, when I log in as a normal user to my DB via SQL*Plus (no service name), it takes about 20 seconds to connect. Yet when I log in as a user with DBA privileges it connects immediately.

Is there something I can do to trace what is happening behind the scenes to determine what the login delay may be?
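One way to get a trace (a sketch; the user name is a placeholder, and this only captures work done after the session is created, such as logon triggers or auditing, not time lost in the listener or name resolution) is a temporary AFTER LOGON trigger that switches on SQL trace for the slow account:

CREATE OR REPLACE TRIGGER trace_slow_login
AFTER LOGON ON DATABASE
BEGIN
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APPUSER' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 12''';
  END IF;
END;
/

The resulting trace file in the diagnostic destination shows where the 20 seconds go; remember to drop the trigger afterwards.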


Performance Tuning :: DB Import Too Slow After Server Reboot

Jul 4, 2011

A few days ago my database server lost access to its storage box, so I rebooted it, and afterwards it worked fine. But now the DB import process is too slow. Previously a 100 GB DB import completed within 10 hours when the server was running normally; now it has been running for 2 days and still has not completed.

How should I investigate this issue? Maybe I missed increasing some parameters on the server or in Oracle?

Here is some brief info about my server:

RAM is 16GB,
SWAP size is 16GB,
CPU 12 cores

SQL> show sga;

Total System Global Area 4294967296 bytes
Fixed Size 1984144 bytes
Variable Size 369105264 bytes
Database Buffers 3909091328 bytes
Redo Buffers 14786560 bytes
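A quick first check (my own sketch; adjust the filter to however the import shows up on your system) is to look at what the import session is actually waiting on, since after a storage problem the usual suspects are I/O related waits:

SELECT sid, event, state, seconds_in_wait
FROM   v$session
WHERE  program LIKE 'imp%' OR module LIKE '%imp%';

Comparing the dominant wait event with what it was before the reboot should show whether the slowdown is in the storage path or somewhere else. Note also that the SGA above is only about 4 GB on a 16 GB machine; that is not changed by a reboot, but it is worth reviewing separately.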


CPU Usage Is Ok On The Server - But Database Is Slow With High CPU Wait?

Jan 26, 2013

I have a RAC with 48 cores on Solaris. I check DB Console when application performance is very slow and everyone complains, and I see that the main wait is CPU; the same shows in the AWR report. However, when I check the server CPU I see it is about 80% idle! So how can I make Oracle use more CPU power instead of waiting for it? I don't think parallelism is an option here because I can't change the application code.


Server Administration :: Partitioned Tables For 30+ GB Table Which Load Very Slow In Front End GUI

May 13, 2013

Those are my biggest tables.

Users use a front end (called ESS Console), and when they try to open one of those tables they wait a very long time (really bad performance). Sometimes the GUI even hangs without displaying results.

Would the partitioned tables feature give better performance here?


Server Utilities :: Finding Versions Of Exp And Imp Utilities Of Database Server?

Feb 23, 2012

How do I find the versions of the exp and imp utilities of the database server from the Windows command prompt?

Note: currently I have 10.2.0.10 Oracle software installed on my local machine.
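Both utilities print their release in the banner whenever they are invoked, so a quick check from the command prompt (standard exp/imp behaviour, not something specific to this post) is:

exp help=y
imp help=y

The first banner line (for example "Export: Release 10.2.0.1.0 - Production on ...", where the number shown here is only an example) is the version of the client utility you are running; when you actually connect to a database, the banner also reports the release of the server it connects to.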


Server Administration :: Remote External Jobs On Windows?

Jun 14, 2011

I cannot run a Scheduler remote external job on Windows. Here's what I'm doing:

--Check that XDB is installed:
desc resource_view
--Check that MTS is running
sho parameters dispatcher
sho parameters shared_Server
--Setup an http listening port:
exec dbms_xdb.sethttpport(8080)

[code].....

This is the error:

orcl> select job_name, status, error#, actual_start_date, additional_info
2 from user_scheduler_job_run_details where job_name='TRYIT';
TRYIT
FAILED -1.074E+09
15-JUN-11 02.16.54.583000 EUROPE/LISBON

[code].....

I am certain that the communications and the credentials are correct; I've tried variants and get different errors. I think the problem is the job action. I've tried running batch files as well as OS commands, with the same result. There is nothing useful in the core dump. Is there perhaps a Windows-specific technique for running external jobs? Some way of nominating the batch file, or specifying a command interpreter?
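For comparison, the pattern that generally works on Windows (a sketch with made-up names; the key point is that the job action is cmd.exe itself, with the batch file passed as an argument) looks like this:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name            => 'TRYIT',
    job_type            => 'EXECUTABLE',
    job_action          => 'C:\Windows\System32\cmd.exe',
    number_of_arguments => 2,
    enabled             => FALSE);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('TRYIT', 1, '/c');
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('TRYIT', 2, 'C:\scripts\myjob.bat');
  -- For a remote external job, also point the job at the agent and credential:
  -- DBMS_SCHEDULER.SET_ATTRIBUTE('TRYIT', 'destination', 'host:port');
  -- DBMS_SCHEDULER.SET_ATTRIBUTE('TRYIT', 'credential_name', 'MY_CRED');
  DBMS_SCHEDULER.ENABLE('TRYIT');
END;
/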


SQL & PL/SQL :: External Table Query (compare Number Records In File With External Table)

Jan 23, 2013

I have got a procedure that successfully creates an Oracle external table and populates it with the contents of a file. This works fine until I hit a situation where one of the fields is a VARCHAR2(2) and I try to insert, say, a 5-character value. When this happens the record in question does not get loaded into the external table (and rightly so), but I need a way to detect a discrepancy between the number of records in the file and the number of records that actually make it into the table, so I can inform the user that there is a problem.

I have attached the code that creates the external table and populates it.
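One common way to get that discrepancy (a sketch; the directory, file names and table name are placeholders, since the attached code is not reproduced here) is to name a BADFILE and LOGFILE in the access parameters and compare counts after the load:

-- Inside the ACCESS PARAMETERS of the external table:
--   RECORDS DELIMITED BY NEWLINE
--   BADFILE  ext_dir:'load.bad'
--   LOGFILE  ext_dir:'load.log'
--   ...

SELECT COUNT(*) FROM my_external_table;   -- records that were accepted

Rejected records are written to the bad file, so the number of lines in the data file minus the count above (allowing for any header or trailer lines) is the number of rejects, and the log file states the reason for each one.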


SQL & PL/SQL :: Change External Directory From Server To Local Machine Path

Nov 15, 2012

How do I change the default directory path used by an external table from a path on the server to a directory on our local system while loading data from a CSV file into the table? Currently my default directory 'abc' points to a location on the machine where the Oracle server is installed; I now want to change that default directory to a path on my local machine (c:\Sm, where no Oracle software is installed).


Server Utilities :: Do EXPDP Utilities Does Backup At Block Level As What RMAN Is Doing

May 29, 2013

I have one doubt about expdp and RMAN. Does the expdp utility back up at the block level, the way RMAN does? Which one is faster, expdp or RMAN?


Server Utilities :: Intermediate COMMITS When Using Sqlldr Or Imp Utilities

Oct 29, 2013

I want to load lakhs (hundreds of thousands) of records into a table. My problem is that after loading about a quarter of the records my process abends because of the size of my rollback segment area, and I don't have the option of increasing it. So, is there any way to get intermediate commits when I am using the imp or sqlldr utilities, so that the entire load completes without abending?
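Both utilities have standard parameters for this (shown here as a sketch with made-up file names): imp can commit after each buffer of rows instead of once per table, and conventional-path SQL*Loader commits after every bind array of ROWS rows:

imp scott/tiger file=data.dmp fromuser=scott touser=scott commit=y buffer=1048576 log=imp.log

sqlldr scott/tiger control=load.ctl data=data.dat rows=50000 log=load.log

Smaller, more frequent commits keep the undo/rollback usage bounded, at the cost of a somewhat slower load.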


Server Utilities :: Netca And Netmgr Utilities?

Feb 20, 2012

I am familiar with the netca tool. However, there is one more utility that exists for the same functionality, which is netmgr. I checked with many DBAs for the exact difference, but did not get a good answer from them. I have also searched Google without finding the exact difference. Please list the exact difference between these two tools (netca, netmgr).


Server Utilities :: How To Execute SQLLDR / Where Data File Reside In Another Server

Mar 24, 2010

How do I execute SQL*Loader when the data file resides on another server?
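One approach (a sketch; the mount point and file names are assumptions) is to make the remote file visible to the machine where sqlldr runs, for example via an NFS mount or a network share, and point the DATA parameter at that path:

sqlldr scott/tiger control=load.ctl data=/mnt/remote_server/export/data.dat log=load.log

The alternative is simply to copy or FTP the file across before the load, since sqlldr itself only reads local (or locally mounted) files.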


Server Utilities :: Exported Data Of One User Importing Into Another Schema At Another Server

Sep 15, 2010

I have exported the data of one user and am importing it into another schema on another server. When I try to import, it works fine for quite a number of tables, but after some time it starts giving me the errors mentioned below:

IMP-00008: unrecognized statement in the export file:
<
IMP-00008: unrecognized statement in the export file:
<
IMP-00008: unrecognized statement in the export file:
<‏ے
IMP-00008: unrecognized statement in the export file:
+A
IMP-00008: unrecognized statement in the export file:
[code]...


Server Utilities :: How To Access Local File Residing At Client From DB Server

Nov 23, 2011

I have a requirement to read a flat text file (around 15,000 lines) residing at a client location from the DB server and write it into one cell of a table.

I tried UTL_FILE and DBMS_LOB, but I am not able to access the client location to read the file, as these read their path from an Oracle directory object.

e.g.
My client machine is 192.168.1.1 and my DB server is on Unix, say 192.168.1.10.
The file location is: \\192.168.1.1\share\abc.txt
So I created one Oracle directory, MY_DIR, with DIRECTORY_PATH '\\192.168.1.1\share'.
But neither UTL_FILE nor DBMS_LOB is able to access the file.

Error Message:
-------------
Unable to process CLOB -22288 ~ ORA-22288: file or LOB operation FILEOPEN
failed
No such file or directory

Few details for reference:
-------------------------
File location: \\192.168.1.1\share\abc.txt
Unix DB server location: 192.168.1.10
Table: Test (filename VARCHAR2(30), content CLOB)
Oracle directory: MY_DIR
DIRECTORY_PATH: \\192.168.1.1\share


Server Utilities :: Difference In Character Sets Of Server And Client?

Mar 16, 2011

I have a question regarding a difference in character sets when taking an export (logical backup) of the database directly on the server (Linux RHEL 2.1 AS) versus on a client (a Windows XP Professional machine where only an Oracle 9i client is installed). On the server it seems fine, but on the client node I'm getting the following error for almost all tables:

EXP-00091: Exporting questionable statistics.

My question is :

[1] Will it create any sort of problem if I later import the data that was exported from the client node?

[2] Why is there a (marginal) difference in the dump (.dmp) file size?

[3] Is there any way to overcome it, or is this just its natural behaviour and not a problem?

[4] If I'm using LONG or BLOB as the datatype for some of my tables, will they have any problem if I keep exporting like this?

Additional information about character sets. On the server node:

Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

On the client node:

Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses US7ASCII character set (possible charset conversion)
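EXP-00091 is exp's warning that the exporting client's character set (taken from NLS_LANG) differs from the database character set, so the optimizer statistics it writes are flagged as questionable. A commonly used workaround (a sketch; substitute your database's actual character set and connect string) is to set NLS_LANG on the Windows client to match the database before running exp, or to export without statistics:

set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
exp system/password@mydb file=full.dmp full=y log=full.log

(or add statistics=none to the exp command). The warning concerns only the exported optimizer statistics, not the table data itself.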







