Server Utilities :: SELECT On External Is Very Slow?
Aug 10, 2013
I just did a 112G file migration of production data using oracle_datapump, so I know this works in principle. When I tried it on my test instance, though, I am seeing behaviour like this:
Why could it be taking 1800 seconds to select one record from a not-very-big table? File corruption? Disk fragmentation? Oracle instance configuration?
I have had the following problem open with Oracle support since March 2011 (8 months), and still no resolution.
When I export all our schemas on Sunday night it takes about 1 hour 50 minutes. When I export the same schemas on any other night it takes 7 hours. The only difference is that on Sunday at 4:00am we drop all connections in the connection pools and re-establish new connections. Then 19 hours later, on Sunday at 23:00, we perform the exports, which take only 2 hours to complete.
I have also tried recreating the connections in the connection pools during the week, and the exports have then only taken 2 hours to complete. But the following night after the connections have been used during the day, the exports again take 7 hours. So it appears the export speed gets significantly slower when there are many open connections that have been used and not closed.
From the Statspack report I found 2 SQL statements internal to the export command whose elapsed execution times differ by an order of magnitude between the fast export and the slow export (see below).
How can I speed up the exports without having to drop and recreate the database connections in the connection pools each night?
FAST: elapsed_time: 430.90  executions: 161,388  Module: exp@Oracle1 (TNS V1-V3)
SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL WHERE CNO = :1 ORDER BY COLNO
I use the following export command for each schema: $ORACLE_HOME/bin/exp user/pass file=somefile.dmp owner=$SCHEMA log=somelog.log buffer=9000000
I have an Oracle Standard Edition 11.1.0.7 database on 64-bit Linux with a 7GB SGA. I currently export (I use exp, not Data Pump, because Data Pump is a lot slower and we can't use its parallel processing features on a Standard Edition database) approx 200 schemas each night. The export normally takes 1 hour 50 minutes, which is approximately 2 schemas exported every minute. When the exports run slowly, each export takes almost 2 minutes to complete.
The database has about 20 GB data and 50 GB indexes. The database also has approx 500 connections via TopLink connection pools from 8 application servers.
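One thing that may be worth checking before resorting to dropping the pool connections each night: the statements above are exp-internal queries against data dictionary views, and their plans depend on dictionary and fixed-object statistics. A minimal sketch of refreshing those (run as SYS or another suitably privileged user); this is a commonly suggested check rather than a confirmed fix for this particular case:

-- refresh optimizer statistics on the dictionary and the fixed (X$) objects,
-- which the exp-internal views such as SYS.EXU10CCL are built on
BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS;
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/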
Getting the proper value from the file into an external table.
How can I get the whole status into the STATUS column, like 'completed', 'Inprogress', 'incompleted'? Right now, if I give a position like (38:9) the full status doesn't show; if I give (38:11) then '|1' is added to the status from the flat file.
BATCH_NO FILE_DATE EMP_ID COMPANY_ID TRANSACTIN_ID FILE_NAME STATUS DOC_NO
10000104252012100001***4252012**1:35:57***D100001***04252012***10:35:57***Diverified
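If the flat file really uses '***' as a field separator, as the sample row suggests, one option is to describe the fields as delimited rather than by fixed position, so STATUS is picked up whole whatever its length. A minimal sketch only; the directory, file name, column widths and the '***' delimiter itself are assumptions to adapt to the real layout:

CREATE TABLE ext_status_file (
  batch_no       VARCHAR2(20),
  file_date      VARCHAR2(20),
  emp_id         VARCHAR2(20),
  company_id     VARCHAR2(20),
  transactin_id  VARCHAR2(30),
  file_name      VARCHAR2(60),
  status         VARCHAR2(15),
  doc_no         VARCHAR2(20)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_tab_dir              -- placeholder directory object
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '***'               -- assumption: '***' is the real separator
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('status_file.txt')               -- placeholder file name
)
REJECT LIMIT UNLIMITED;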
I'm trying to create an external table. I load my data with no problem and everything is fine, but I get some behaviour with one column and I would like to know what's going on behind the scenes. OK, let's look at the example:
Sample data:
Line 1: 333 1111111112009100000000000080000000013450.33
Line 2: 11111111111220091016000000004.48
Line 3: 222222222 220091016000000004.48
Line 4: (this is a blank line left at the end)
As you can see I can load my table with no problem, but I always get 3 rows, counting the last blank line, if I try LOAD WHEN COL_A != BLANKS. I don't know if it's a problem with the blank space left inside the fixed field lengths, but if I do LOAD WHEN COL_B != BLANKS I get the correct result, 2 rows instead of 3. I want to know why (missing fields...) and (reject rows...) are not working...
Note: COL_A can be 9-11 characters long; if the length is 9 then 2 spaces are left before the next field...
I have to add a sequence to a large (288 million row) file when I load it into the table. If I use SQL*Loader I can't use direct path, since I have a row-level trigger that assigns the sequence, but I am not sure whether an external table will be any faster, since the trigger will fire for each row there too. In this scenario is one better than the other?
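One pattern that avoids the trigger altogether, hedged and with placeholder names (big_file_ext as the external table over the flat file, target_tab as the destination, row_seq as the sequence): expose the file as an external table and assign the sequence in a direct-path INSERT ... SELECT, so no row-level trigger has to fire at all.

-- placeholder objects: adjust names, columns and cache size to the real case
CREATE SEQUENCE row_seq CACHE 10000;

ALTER TABLE target_tab DISABLE ALL TRIGGERS;   -- keep the row trigger out of the load

INSERT /*+ APPEND */ INTO target_tab (id, col1, col2)
SELECT row_seq.NEXTVAL, e.col1, e.col2
FROM   big_file_ext e;

COMMIT;

ALTER TABLE target_tab ENABLE ALL TRIGGERS;

A large sequence cache matters here; with the default CACHE 20 the sequence itself can become the bottleneck on a 288-million-row insert.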
I just posted another topic where I heard about external tables, and I had a few questions concerning them. I thought it was better to create a new topic than to continue on the other one...
I noticed that to create an external table the CTL is like this:
CREATE TABLE emp_load (FIELDS description)
ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY ext_tab_dir
   ACCESS PARAMETERS
     (RECORDS FIXED 62
      FIELDS (employee_number CHAR(2),
[Code]...
1) This creates an external table, but is it possible to create a normal table in a CTL file? For physical tables, the table has to exist first, right?
2) If you create a view linked to 2 external tables and the CSV files are updated each day, will the external tables be updated automatically, and will the view be updated as well? (See the sketch after this list.)
3) Couldn't there be synchronisation problems?
4) What happens if a SELECT runs against the external tables (or someone queries the view) while the CSV file is being updated?
5) Is there any way to protect access to those tables/views while the CSVs are being updated?
6) Is it possible to create an index on this sort of table?
7) Is it possible to index a view?
8) Are external tables visible in a tool like SQL Developer?
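Regarding question 2 (the sketch referenced above): external tables are only a window onto the files, so every query re-reads whatever is currently in the CSVs, and a view over them sees the same thing; nothing is refreshed inside the database. A minimal sketch with placeholder names, reusing the ext_tab_dir directory from the example above:

CREATE TABLE ext_sales_a (
  sale_id NUMBER,
  amount  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_tab_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('sales_a.csv')
)
REJECT LIMIT UNLIMITED;

CREATE TABLE ext_sales_b (
  sale_id NUMBER,
  amount  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_tab_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('sales_b.csv')
)
REJECT LIMIT UNLIMITED;

-- the view simply stacks the two external tables; each query against it
-- reads the CSV files as they are at that moment
CREATE OR REPLACE VIEW v_all_sales AS
SELECT sale_id, amount FROM ext_sales_a
UNION ALL
SELECT sale_id, amount FROM ext_sales_b;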
I created the external table using the script below.
CREATE TABLE EXT_ST_FINANCEIRO_REAL (
  DT_DATA  NUMBER,
  TIPO     NUMBER,
  ENTIDADE NUMBER,
  VALOR    VARCHAR2(40))
[code]....
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "missing" expecting one of: "column, exit,("
KUP-01007: at line 6 column 1
ORA-06512: at "SYS.ORACLE_LOADER", line 19
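The KUP-01005 message says the parser hit the word "missing" where it expected a column definition, which usually means a clause such as MISSING FIELD VALUES ARE NULL is sitting where the field list should start. Since the full script is hidden behind the [code] tag, here is only a sketch of an access-parameters layout that does parse, with placeholder directory, delimiter and file names:

CREATE TABLE ext_st_financeiro_real (
  dt_data  NUMBER,
  tipo     NUMBER,
  entidade NUMBER,
  valor    VARCHAR2(40)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_tab_dir                  -- placeholder directory object
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    BADFILE ext_tab_dir:'financeiro.bad'
    LOGFILE ext_tab_dir:'financeiro.log'
    FIELDS TERMINATED BY ';'                     -- assumption: use the real delimiter
    MISSING FIELD VALUES ARE NULL                -- part of the FIELDS clause, before the column list
    (dt_data, tipo, entidade, valor)
  )
  LOCATION ('financeiro.txt')                    -- placeholder file name
)
REJECT LIMIT UNLIMITED;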
While trying to start the CSS service to create a new ASM instance on my own PC for testing purposes, I am getting the error below: "'localconfig' is not recognized as an internal or external command, operable program or batch file."
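That particular error just means the command is not on the PATH: localconfig lives in the Oracle home's bin directory. A hedged sketch of the usual invocation, from an Administrator command prompt and assuming ORACLE_HOME is set (otherwise substitute the full path to the Oracle home):

cd /d %ORACLE_HOME%\bin
localconfig add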
I am importing some data from Oracle into another database on a regular basis. It works fine for most of the queries, but a couple of queries sometimes (randomly) don't work: I don't get any errors, but I get no data either.
We switched on Oracle auditing to find out which queries are being sent to the Oracle DB, and we can see all the queries in the audit log. Is it possible to configure auditing to capture the number of rows returned by SELECT statements, so that we can be sure some data was returned?
I have a nightly import (about 20 tables) and it takes up to 5 hours. We have one table of about 800,000 rows and the rest are between 1,000 and 200,000 rows, so this is very slow. When I monitor the import I see a very large amount of wait on SQL*Net message from client,
and I run the import on the database server itself. If I check the current statement I see it moving from one statement to the next; for instance I have
SELECT /*+ all_rows ordered */ "A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001329497' FROM "REPORT"."CONTRACT_LVL" "A" WHERE NOT (LENGTH (bonus_nat) <= 31)
then
SELECT /*+ all_rows ordered */ "A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001684584' FROM "REPORT"."CONTRACT_LVL" "A" WHERE NOT (LENGTH (outcome_cd) <= 1)
etc., and it takes hours. The DB is on Windows 2003 running Oracle RDBMS 9.2.0.7, and meanwhile the import screen shows 185,000 rows imported. I also see the consistent gets for this session rising a lot at that time. Would it be better to export/import without statistics?
I also need to mention that the dump file comes from a Linux-hosted database, though I don't think that makes a difference for an exp/imp. It's a PeopleSoft database with a lot of tables (more than 15,000), and if I take the table mentioned above and want to check its constraints, it takes ages before Toad can display them. I have seen that we have an incredible number of constraints on those tables, which might be the reason.
I just wonder if the system catalog needs to be tuned? /* Update */ Not sure why, but now the huge number of waits shows up as "Library cache lock".
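The statements shown look like the queries imp runs to validate check constraints as it re-enables them, so much of the time is probably going into that rather than into the row inserts. Two low-risk things to look at (a sketch, not a confirmed diagnosis): imp accepts STATISTICS=NONE if you want to skip importing statistics, and while the import is running you can see what its session is actually waiting on:

-- replace 123 with the SID of the imp session (look it up in v$session first)
SELECT sid, event, state, seconds_in_wait
FROM   v$session_wait
WHERE  sid = 123;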
I am using 11gR2 on Windows Server. This query runs many times a day and badly affects the performance of the database. I don't have much idea about this query.
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp,
       COUNT(username) AS failed_count
FROM   sys.dba_audit_session
WHERE  returncode != 0
AND    TO_CHAR(timestamp, 'YYYY-MM-DD HH24:MI:SS') >= TO_CHAR(current_timestamp - TO_DSINTERVAL('0 0:30:00'), 'YYYY-MM-DD HH24:MI:SS')
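This looks like the failed-login metric that Enterprise Manager / DB Console collects from DBA_AUDIT_SESSION, which sits on top of SYS.AUD$, and it usually only becomes slow once AUD$ has grown very large. A hedged sketch of checking the size and purging old records with the 11gR2 audit management package (run as a DBA and test first; the 90-day retention is just an example):

-- how big is the underlying audit table?
SELECT COUNT(*) FROM sys.aud$;

-- purge standard audit records older than 90 days (example retention)
BEGIN
  IF NOT DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD) THEN
    DBMS_AUDIT_MGMT.INIT_CLEANUP(
      audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
      default_cleanup_interval => 24);
  END IF;

  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);

  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => TRUE);
END;
/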
I have a situation where, when I log in to my DB as an ordinary user via SQL*Plus with no service name, it takes about 20 seconds to connect. Yet when I log in as a user with DBA privileges it connects immediately.
Is there something I can do to trace what is happening behind the scenes, to determine what the cause of the login delay may be?
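Two common culprits that are cheap to rule out first (a sketch of the checks only; neither is confirmed as the cause here): a database logon trigger that effectively runs only for non-privileged users, and session auditing writing into a very large SYS.AUD$.

-- any logon triggers that fire for ordinary users?
SELECT owner, trigger_name, status
FROM   dba_triggers
WHERE  triggering_event LIKE '%LOGON%';

-- is session auditing on, and how big is the audit table?
SELECT value FROM v$parameter WHERE name = 'audit_trail';
SELECT COUNT(*) FROM sys.aud$;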
A few days ago my database server lost access to its storage box, so I rebooted it, and afterwards it worked fine. But now the DB import process is too slow. Before, a 100GB DB import completed within 10 hours when the server was running normally. Now it has been working for 2 days and is still not complete.
How can I investigate this issue? Maybe I missed increasing some parameters on the server or in Oracle?
Here is some brief info about my server:
RAM is 16GB, SWAP size is 16GB, CPU 12 cores
SQL> show sga;
Total System Global Area 4294967296 bytes
Fixed Size                  1984144 bytes
Variable Size             369105264 bytes
Database Buffers         3909091328 bytes
Redo Buffers               14786560 bytes
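To see whether the import is actually making progress and what it is waiting on, a generic diagnostic sketch (it assumes 10g or later for the wait columns on V$SESSION and is not specific to the storage incident):

-- long-running operations, with how far they have got
SELECT sid, opname, target, sofar, totalwork, time_remaining
FROM   v$session_longops
WHERE  sofar < totalwork;

-- what active user sessions are currently waiting on
SELECT sid, event, wait_class, seconds_in_wait
FROM   v$session
WHERE  type = 'USER' AND status = 'ACTIVE';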
I have a RAC with 48 cores on Solaris. I check DB Console when application performance is very slow and everyone complains, and I see that the main wait is CPU, and the same in the AWR report. However, when I check the server CPU I see it is about 80% idle! So how can I make Oracle use more CPU power instead of waiting for it? I don't think parallel execution is an option here because I can't change the application code.
Users use a front end (called ESS Console), and when they try to open one of those tables they wait a very long time (really bad performance). Sometimes the GUI even hangs without displaying results.
Would the partitioned tables feature give better performance?
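Partitioning can help this kind of access pattern, but only when the queries filter on the partition key so partitions can be pruned (and note it is a separately licensed option on Enterprise Edition). A minimal sketch of a range-partitioned layout with a local index; the table, columns and boundaries are placeholders:

CREATE TABLE sales_hist (
  sale_date DATE,
  cust_id   NUMBER,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

CREATE INDEX sales_hist_cust_ix ON sales_hist (cust_id) LOCAL;

-- a query that filters on the partition key only touches the matching partition(s)
SELECT SUM(amount)
FROM   sales_hist
WHERE  sale_date >= DATE '2013-01-01'
AND    sale_date <  DATE '2013-02-01';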
I cannot run a Scheduler remote external job on Windows. Here's what I'm doing:
--Check that XDB is installed:
desc resource_view
--Check that MTS is running:
sho parameters dispatcher
sho parameters shared_server
--Set up an HTTP listening port:
exec dbms_xdb.sethttpport(8080)
[code].....
This is the error:
orcl> select job_name, status, error#, actual_start_date, additional_info
  2   from user_scheduler_job_run_details where job_name='TRYIT';

TRYIT  FAILED  -1.074E+09  15-JUN-11 02.16.54.583000 EUROPE/LISBON
[code].....
I am certain that the communications and the credentials are correct; I've tried variants and get different errors. I think the problem is the job action. I've tried running batch files as well as OS commands, with the same result, and there is nothing useful in the core dump. Is there perhaps a Windows-specific technique for running external jobs? Some way of nominating the batch file, or specifying a command interpreter?
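On Windows the thing that most often trips this up is indeed the job action: a .bat file is not directly executable by the scheduler, so the action usually has to be cmd.exe with /c and the batch file passed as arguments. A hedged sketch only; the paths, credential name and destination are placeholders, and it assumes the remote scheduler agent and credential from the hidden [code] section are already registered:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name            => 'TRYIT',
    job_type            => 'EXECUTABLE',
    job_action          => 'C:\Windows\System32\cmd.exe',
    number_of_arguments => 2,
    enabled             => FALSE);

  -- run the batch file via the command interpreter
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('TRYIT', 1, '/c');
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('TRYIT', 2, 'C:\scripts\myjob.bat');

  -- placeholders: the credential and agent destination created earlier
  DBMS_SCHEDULER.SET_ATTRIBUTE('TRYIT', 'credential_name', 'WIN_CRED');
  DBMS_SCHEDULER.SET_ATTRIBUTE('TRYIT', 'destination',     'winhost.example.com:1500');

  DBMS_SCHEDULER.ENABLE('TRYIT');
END;
/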
I have a procedure that successfully creates an Oracle external table and populates it with the contents of a file. This works fine until I hit a situation where one of the fields is a VARCHAR2(2) and I try to insert, say, a 5-character value. When this happens the record in question does not get populated in the external table (and rightly so), but I need to work out whether there is a discrepancy between the number of records in the file and the number of records that actually make it into the table, so I can inform the user that there is a problem.
I have attached the code that creates the external table and populates it.
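Since the original code is in the attachment, here is only a sketch of one way to do this, with placeholder names: point the ACCESS PARAMETERS at an explicit BADFILE (and LOGFILE), then read the bad file back through a second one-column external table, so the rejected-record count can be compared with the loaded count and reported to the user.

-- in the existing external table's ACCESS PARAMETERS (placeholder names):
--   BADFILE ext_dir:'mydata.bad'
--   LOGFILE ext_dir:'mydata.log'

-- a one-column external table over the bad file; each rejected record is one row
CREATE TABLE ext_rejects (
  rec VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '|~|'   -- any string that will not appear, so the whole line lands in REC
    MISSING FIELD VALUES ARE NULL
    (rec CHAR(4000))
  )
  LOCATION ('mydata.bad')
)
REJECT LIMIT UNLIMITED;

-- loaded rows versus rejected rows
SELECT (SELECT COUNT(*) FROM my_ext_table) AS loaded,
       (SELECT COUNT(*) FROM ext_rejects)  AS rejected
FROM   dual;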
How can I change the default directory path in an external table from the server to our local system directory while loading the data from a CSV file into the table? Currently my external tables use the default directory 'abc' (a directory on the machine where the Oracle server is installed); now I want to change that default directory to my local machine (C:\Sm, where the Oracle software is not installed).
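As far as I know this is not possible as asked: the directory object behind an external table always has to point at a path the database server itself can read, never at a client PC's local drive. The usual options are either to copy or share the CSV so it is visible from the server (and point the directory object there), or to load from the client with SQL*Loader instead. A sketch of the directory side, with placeholder path and grantee:

-- run as a DBA; the path must exist on the database server (or be a share
-- mounted so the server's OS user can read it), not on the client PC
CREATE OR REPLACE DIRECTORY ext_csv_dir AS '/u01/app/loads/csv';
GRANT READ, WRITE ON DIRECTORY ext_csv_dir TO loader_user;

-- the external table then uses: DEFAULT DIRECTORY ext_csv_dir ... LOCATION ('data.csv')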
I want to load lakhs of records into a table. My problem is that after loading about a quarter of the records my process abends because of the size of my rollback segment area, and I don't have the option to increase it. So, is there any way to get intermediate commits when I am using the imp or sqlldr utilities, so the entire data set can be loaded without abending?
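Both utilities do have documented switches for this, though the right values for your data are a guess: imp can commit after each buffer-sized array insert with COMMIT=Y (sized by BUFFER), and conventional-path SQL*Loader commits after every ROWS rows. Something along these lines, with placeholder connect strings and file names:

imp user/pass file=somefile.dmp fromuser=src_schema touser=dest_schema buffer=1000000 commit=y log=imp.log

sqlldr user/pass control=load.ctl rows=50000 bindsize=20000000 log=load.log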
I am familiar with the Netca tool. However, there is one more utility that exists for the same functionality, which is netmgr. I have checked with many DBAs for the exact difference, but I did not get a good answer from them. I have also checked Google but did not find the exact difference. Please list the exact difference between those 2 tools (netca, netmgr).
I have exported the data of one user and am importing it into another schema on another server. When I try to import, it works fine for quite a number of the tables, but after some time it starts giving me the errors mentioned below...
IMP-00008: unrecognized statement in the export file: <
IMP-00008: unrecognized statement in the export file: <
IMP-00008: unrecognized statement in the export file: <ے
IMP-00008: unrecognized statement in the export file: +A
IMP-00008: unrecognized statement in the export file:
[code]...
I have a requirement to read a flat text file (around 15,000 lines) residing at a client location from the DB server and write it into a single cell of a table.
I tried UTL_FILE and DBMS_LOB, but I am not able to access the client location to read the file, as the path is read from an Oracle directory object.
E.g. my client path is 198.168.1.1 and my DB server is on Unix, say 192.168.1.10. The file location is \\192.168.1.1\share\abc.txt, so I created one Oracle directory MY_DIR with DIRECTORY_PATH '\\192.168.1.1\share'. But neither UTL_FILE nor DBMS_LOB is able to access the file.
Error message:
-------------
Unable to process CLOB -22288 ~ ORA-22288: file or LOB operation FILEOPEN failed
No such file or directory
A few details for reference:
-------------------------
File location: \\192.168.1.1\share\abc.txt
Unix DB server location: 192.168.1.10
Table: Test (filename VARCHAR2(30), content CLOB)
Oracle dir: MYDIR
Directory_Path: \\192.168.1.1\share
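The directory path is opened by the database server's own OS user, so a Windows-style UNC path will not resolve from a Unix-hosted database; the share has to be made visible on the Unix server (for example via an NFS or Samba/CIFS mount) and the directory object pointed at that mount. A quick hedged check once that is in place; the mount point and file name are placeholders:

-- directory must point at a path the Unix DB server can read, e.g. the mount point of the share
CREATE OR REPLACE DIRECTORY my_dir AS '/mnt/clientshare';

-- verify the server can actually see the file before trying to load it
SET SERVEROUTPUT ON
DECLARE
  f BFILE := BFILENAME('MY_DIR', 'abc.txt');
BEGIN
  IF DBMS_LOB.FILEEXISTS(f) = 1 THEN
    DBMS_OUTPUT.PUT_LINE('file is visible to the server');
  ELSE
    DBMS_OUTPUT.PUT_LINE('file is NOT visible to the server');
  END IF;
END;
/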
I have a question regarding the difference in character sets when taking an export (logical backup) of the database directly on the server (Linux RHEL 2.1 AS) versus exporting on a client (a Windows XP Professional machine where only an Oracle 9i client is installed). On the server it seems to be fine, but on the client node I'm getting the following error for almost all tables.
EXP-00091: Exporting questionable statistics.
My questions are:
[1] Will it create any sort of problem if I later import the data that was exported from the client node?
[2] Why is there a (marginal) difference in the dump (.dmp) file size?
[3] Is there any way to overcome it, or is it just natural behaviour and therefore not a problem?
[4] If I'm using a LONG or BLOB as the datatype for some of my tables, will they have any problem if I carry on like this?
Additional information about character sets:
On the server node:
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
On the client node:
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses US7ASCII character set (possible charset conversion)
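EXP-00091 is typically raised when the exporting session's character set (taken from NLS_LANG on the client) does not match the database character set, so the optimizer statistics in the dump are flagged as questionable; that also explains the marginal size difference. A hedged way to avoid it on the client: check the database character set and set NLS_LANG to match before running exp. The value shown is only an example; use whatever the query returns for your database:

-- find the database character set
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

-- then, in the client's command window before running exp (Windows syntax, example value):
set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
exp user/pass file=somefile.dmp owner=SOME_SCHEMA log=somelog.log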