I am working on an SAP application migration project using Oracle 10.2.0.2 database. We are migrating the application from Windows to Solaris.
During the process we are facing a problem with a very slow insert operation on a particular table. The server's capacity is very good, so there is no resource bottleneck.
The table contains around 270,000 rows, and inserts are proceeding at around 100 rows per 10 seconds.
The table contains the following data types.
SQL> desc SAPDATDB.CAF_GP_VALDEF;
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
VAL_UUID                                  NOT NULL NVARCHAR2(34)
VAL_GUID                                  NOT NULL NUMBER(10)
VAL_CLOB                                           NCLOB
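A first diagnostic step that may be worth trying (a sketch, not SAP-specific; the username filter is an assumption) is to see what the inserting session is actually waiting on while the rows trickle in:

-- find the session doing the inserts (filter on the owning schema is a guess)
SELECT sid, serial#, program, sql_id
FROM   v$session
WHERE  status = 'ACTIVE'
AND    username = 'SAPDATDB';

-- then look at its current wait
SELECT sid, event, state, seconds_in_wait
FROM   v$session_wait
WHERE  sid = 123;   -- replace 123 with the SID found above

A wait like "db file sequential read" versus "log file sync" versus a LOB-related event would point at very different causes for the slow NCLOB inserts.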
I am migrating an Oracle 9i database to 11g R3. I can only use imp. As the database is huge, I have split the exp dump by schemas. In my recent test, I split the schemas into 4 separate threads to be imported into the new Oracle 11g database. The 4 imp threads cover schemas of roughly similar total size (e.g. thread 1: schemas 1, 2, 3; thread 2: schemas 4, 5, 6; etc.).
All the dump files are on the same mount point. When I execute the 4 import threads together, each thread takes between 2.5 and 3.5 days.
When I then tried only 1 thread, it took only 2 hours. So could this be an I/O issue or an Oracle memory problem?
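One hedged way to tell the two apart (a sketch; the program filter assumes the classic imp client) is to compare where the four import sessions spend their wait time while they run concurrently:

SELECT se.sid, se.event, se.total_waits, se.time_waited
FROM   v$session_event se
WHERE  se.sid IN (SELECT sid FROM v$session WHERE program LIKE 'imp%')
ORDER  BY se.time_waited DESC;

If events such as "db file sequential read" or "log file sync" dominate, the four streams are most likely contending for the same disks behind that one mount point; buffer-related waits would point more toward memory pressure.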
A few days ago my database server lost access to its storage box; I rebooted it and after that it worked fine. But now the DB import process is too slow. Previously, when the server was running normally, a 100 GB DB import completed within 10 hours. Now it has been running for 2 days and is still not complete.
How can I investigate this issue? Maybe I missed increasing some parameters on the server or in Oracle?
Here is brief info about my server:
RAM 16 GB, swap 16 GB, 12 CPU cores
SQL> show sga;
Total System Global Area 4294967296 bytes
Fixed Size                  1984144 bytes
Variable Size             369105264 bytes
Database Buffers         3909091328 bytes
Redo Buffers               14786560 bytes
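To see whether the import is actually progressing, and roughly how fast, something like the following can be run while it works (a sketch using standard views, nothing specific to this system):

SELECT sid, opname, target, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar <> totalwork;

Comparing pct_done over a few minutes shows whether the job is crawling or completely stuck, which narrows down whether to look at storage, parameters, or the job itself.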
We are using Oracle 9i on an AIX server. While customers were accessing the database, the power was accidentally shut off. We restarted the server and the Oracle database, and everything resumed successfully.
However, while doing "Payments by the customer" it takes a lot of time to insert even a single payment record into the database. The database is live and our customers are very frustrated.
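Since this started right after a crash, one hedged first check (a sketch written for 9i, where v$session.blocking_session is not available) is whether the payment insert is simply stuck behind a lock left over from an interrupted transaction:

-- sessions waiting on a lock and the sessions holding it
SELECT w.sid AS waiting_sid, h.sid AS holding_sid, w.type, w.id1, w.id2
FROM   v$lock w, v$lock h
WHERE  w.request > 0
AND    h.block   > 0
AND    w.id1 = h.id1
AND    w.id2 = h.id2;

If no blockers show up, the next suspects would be ongoing instance recovery activity or a resource the insert is waiting on, which the session's wait event would reveal.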
I am receiving the following error while inserting records.
"Oracle Error:ORA-12899: value too large for column "MFG_ADMIN"."GAGE_RESULTS"."COMP
whereas I am checking the length of all values before inserting and am sure that none of them is larger than the column length. I did some research and found this error might be due to the character set.
select * from nls_database_parameters where parameter like '%CHARACTERSET';
PARAMETER                      VALUE
------------------------------ --------------------
NLS_CHARACTERSET               UTF8
NLS_NCHAR_CHARACTERSET         UTF8
How can this be related to my problem, or is something else causing this error?
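It can be related. With NLS_CHARACTERSET = UTF8 and the default BYTE length semantics, a column such as VARCHAR2(10) holds 10 bytes, not 10 characters, and any non-ASCII character needs more than one byte. A small sketch of the check (the sample string is made up):

-- character count vs. byte count in a UTF8 database
SELECT LENGTH('résumé')  AS char_count,
       LENGTHB('résumé') AS byte_count
FROM   dual;
-- char_count = 6 but byte_count = 8 here, so a VARCHAR2(6) column with BYTE
-- semantics would raise ORA-12899 even though the pre-insert length check passed.

Comparing LENGTHB of the failing values against the column definition would confirm or rule out this cause.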
Insert statements from Java are processing very slowly, i.e. 100,000 (1 lakh) records take 15 minutes with a commit on every row (batch insert is not applicable, which is why they are committing on every row). There is only 1 index.
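If the commits really cannot be batched, a hedged check (a sketch; nothing application-specific) is whether the time is going into commit overhead rather than the insert itself:

SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write')
ORDER  BY time_waited DESC;

A large "log file sync" time relative to the number of commits would suggest the per-row commit, not the single index, is where the 15 minutes are going.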
I've been using datapump for a long time now but I have not come across this problem before.
Importing just two tables:
Table1: data = 100 MB = 11 million rows
Table2: data = 4.2 GB = 19.6 million rows
Table1 ran for approx. 5 hours; Table2 ran for approx. 15 hours.
If I run impdp importing both tables with the same par file, the default tablespace of the user the import is running as runs out of space due to ORA-01691: unable to extend lob segment <owner>.SYS_LOG0001175799C00045$$ by 512 in tablespace USERS. I do not understand why it is creating objects in order to import tables into someone else's schema.
The environment is Red Hat Linux 4.1.2-51 running Oracle 11.2.0.1 (11gR2). This is a 9-node RAC using ASM.
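One possible explanation (an assumption, not confirmed by the log) is that the segments landing in USERS belong to the Data Pump master table, which is created in the default tablespace of the user running the job, not in the target schema, and which includes LOB columns. If that is the case, either give that tablespace room to grow or run the job as a user whose default tablespace has space; a sketch (file name and sizes are placeholders):

-- let the USERS datafile grow (adjust the name to the real ASM file)
ALTER DATABASE DATAFILE '+DATA/mydb/datafile/users.263.123456789'
  AUTOEXTEND ON NEXT 256M MAXSIZE 20G;

-- or add another datafile (OMF/ASM style, assuming db_create_file_dest is set)
ALTER TABLESPACE users ADD DATAFILE SIZE 4G AUTOEXTEND ON MAXSIZE 20G;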
The import is still running, but it is not showing any count of rows imported.
I have already created the tablespace the table was in previously, before it was dropped, but when I check the space in that tablespace it is not being consumed either. One error I got previously while performing this task is:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "CDR"."SYS_IMPORT_TABLE_03" successfully loaded/unloaded
Starting "CDR"."SYS_IMPORT_TABLE_03": cdr/********@tsiindia directory=TEST_DIR dumpfile=CAT_IN_DATA_042012.DMP tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log
...
I checked streams_pool_size; it showed zero, so I set it to 48M, and after that:
SQL> show parameter streams_pool_size;

NAME               TYPE        VALUE
------------------ ----------- -----
streams_pool_size  big integer 48M
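To see whether the job is actually doing anything, its state can be checked from the dictionary, or the job can be attached to from another window (a sketch; the job name is the one shown in the log above):

SELECT owner_name, job_name, operation, job_mode, state, degree
FROM   dba_datapump_jobs;

impdp cdr/********@tsiindia attach=SYS_IMPORT_TABLE_03
Import> status

The STATUS output in the attached session shows the current object and percent done, which distinguishes a job that is slowly working from one that is hung.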
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

OS version: Windows 7 64bit
I have a schema-level export of schema SCOTT and have imported it under a different name (SCOTT1). At regular intervals I need to import SCOTT into SCOTT1 without affecting existing records, such that:
1. Newly created records need to be appended.
2. Updated records need to be appended.
For the above requirement I did it the following way. Export:
expdp xxxx/******** schemas=SCOTT directory=dumpdir dumpfile=SCOTT_28-SEP-2012.dmp logfile=exp_SCOTT_28-SEP-2012.log
Import:
impdp xxxx/******** AS SYSDBA REMAP_SCHEMA=SCOTT:SCOTT1 directory=DUMPDIR dumpfile=SCOTT_28-SEP-2012.dmp logfile=imp_SCOTT2_28-09-2012.log TRANSFORM=SEGMENT_ATTRIBUTES:n TABLE_EXISTS_ACTION=APPEND
The problem is I couldn't append the records to the existing tables; the log shows errors such as:
ORA-31684: Object type USER:"SCOTT1" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
...
Import: Release 11.2.0.3.0 - Production on Tue Apr 23 13:10:51 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "IMPDB"."SYS_IMPORT_TABLE_01": abc/******** directory=DATA_PUMP_DIR network_link=TESTAR logfile=net_import_proddev.log TABLES=impdb.abc parallel=12 REMAP_SCHEMA=IMPDB:ABC
AIX 6.1, 11.2.0.3. I have an expdp dump from prod to be imported into our test database. I have imported it using impdp, but to my surprise the tables were imported while lots of indexes were not created, even though I used TRANSFORM=SEGMENT_ATTRIBUTES:N just to use the default USERS tablespace. How do I import the indexes separately, skipping the tables and other objects?
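A hedged way to bring in just the indexes from the same dump (a sketch; directory and dump file names are placeholders) is either to run a second import restricted to index objects, or to spool their DDL to a script first and review it:

-- import only the index definitions, skipping tables and other objects
impdp system DIRECTORY=dp_dir DUMPFILE=prod_schema.dmp INCLUDE=INDEX TRANSFORM=SEGMENT_ATTRIBUTES:N LOGFILE=imp_indexes.log

-- or just extract the CREATE INDEX statements into a file to run manually
impdp system DIRECTORY=dp_dir DUMPFILE=prod_schema.dmp INCLUDE=INDEX SQLFILE=create_indexes.sql

The SQLFILE variant writes DDL without executing anything, which also makes it easy to see which indexes were skipped in the first run.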
I am trying to transfer data from one database to another through Data Pump via SQL Developer (the amount of data is quite large), exporting several tables. The table export works fine, but I encounter the following error when I import the file (I have tried data only and data + DDL).
"Exception: ORA-39001: argument value invalid dbms_datapump.get_status(64...= ORA-39001: argument value invalid ORA-39000: .... ORA-31619: ...
The file is in the right place, the data pump folder of the new database. The user is the same on both databases, and the database versions are similar.
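ORA-39001 together with ORA-31619 ("invalid dump file") often comes down to the directory object or the file itself; two hedged checks before digging deeper (directory and user names below are placeholders):

-- does the directory object point where you think, and can the importing user use it?
SELECT directory_name, directory_path FROM dba_directories;

GRANT READ, WRITE ON DIRECTORY data_pump_dir TO app_user;

Since ORA-31619 means the dump file itself is considered invalid, it is also worth re-copying the dump in binary mode and comparing file sizes, in case it was corrupted in transfer.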
I am trying to migrate a table to a new table that has the field order changed and also has a new field added. My main question is whether it is possible to have datapump add values for the new field in the target table. For example:
- original table has fields a, b, d, c
- new table has fields b, c, d, a, e
I want to load the new table and also include adding values for field e. In this case, field e is a year field, so it should be loaded with '2012'. Does datapump have the ability to do this? Is reordering the fields going to cause me any problems? We are on Oracle version 11.2.0.3.
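Data Pump matches columns by name when loading into a pre-created table, so the different field order should not by itself be a problem (worth verifying on 11.2.0.3). For the new field, one sketch is to give e a default and backfill after the load; the table structure and types below are made up for illustration:

CREATE TABLE new_table (
  b VARCHAR2(30),
  c VARCHAR2(30),
  d DATE,
  a NUMBER,
  e NUMBER(4) DEFAULT 2012   -- new year field; the type is a guess
);

Then load it with something like impdp ... TABLES=owner.old_table REMAP_TABLE=old_table:new_table TABLE_EXISTS_ACTION=APPEND, and backfill anything the load left NULL:

UPDATE new_table SET e = 2012 WHERE e IS NULL;
COMMIT;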
When I import each succeeding dump, I drop the existing schema ("SQL> drop user username cascade;") and import the dump with "impdp system ....". I would like to import a dump into an existing instance, but import only the data, leaving the current packages and other metadata untouched and unchanged on that existing instance.
1. Do I need to drop the user before the import if my requirements are the above?
2. If I need to drop the user, what should the script be?
3. For the import itself, what parameters should I use? (see the sketch after this list)
4. What do I need to consider before doing the import?
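A data-only refresh that leaves packages and other metadata alone does not require dropping the user at all; one hedged sketch (directory, dump file and schema names are placeholders):

impdp system DIRECTORY=dp_dir DUMPFILE=prod_schema.dmp SCHEMAS=app_owner CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=TRUNCATE LOGFILE=data_only_refresh.log

TRUNCATE replaces the existing table data, whereas APPEND would keep it and add the dump's rows on top. Referential constraints and triggers on the target can still get in the way of either action, so this is a sketch to test first, not a guaranteed recipe.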
Imagine you have 100 schemas backed up (expdp) in a dumpfile and you want to import just one schema from that dumpfile into a DB. You can specify just the one schema you want using the SCHEMAS parameter in impdp. But things are not straightforward when you want to use REMAP_SCHEMA.
Here is my scenario: ===================
I took the expdp dump of schemas A and B in one go, so the dumpfile has objects from both A and B. The dumpfile name is schemas_AandB.dmp. Now I want to create schema C from A using the REMAP_SCHEMA parameter:
-- Putting each parameter on a separate line for readability
impdp PSTREF/PSTREF_123
  DIRECTORY=ADET_EFX_DIR
  DUMPFILE=schemas_AandB.dmp
  LOGFILE=CreatingCfromA-Impdp.log
  REMAP_SCHEMA=A:C
Everything goes fine. Schema C is created from schema A in the dumpfile.
But impdp is trying to create schema B as well, because schema B was present in the dumpfile. Since schema B and its objects are already in the DB, I get the following errors.
ORA-31684: Object type USER:"B" already exists ORA-31684: Object type PROCEDURE:"B"."SP_CLEAREXPIREDSESSIONDATA" already exists ORA-31684: Object type PROCEDURE:"B"."SP_DELETESESSIONDATA" already exists ORA-31684: Object type PROCEDURE:"B"."SP_DELETESTATECONTEXTINFO" already exists
[code]...
I tried to prevent schema B in the dumpfile from being imported by specifying SCHEMAS, but I got the following errors:
ORA-39065: unexpected master process exception in MAIN
ORA-12801: error signaled in parallel query server PZ99, instance oracth214:HEWRAC1 (1)
ORA-01460: unimplemented or unreasonable conversion requested
Maybe the REMAP_SCHEMA and SCHEMAS parameters won't work together.
Is there any way to prevent impdp from importing user B and its objects?
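For reference, the combination being attempted would normally be written as below (a sketch; SCHEMAS and REMAP_SCHEMA are generally documented to work together, so the ORA-39065/ORA-01460 above may be specific to this environment rather than a rule):

impdp PSTREF/PSTREF_123 DIRECTORY=ADET_EFX_DIR DUMPFILE=schemas_AandB.dmp LOGFILE=CreatingCfromA-Impdp.log SCHEMAS=A REMAP_SCHEMA=A:C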
I have taken an expdp dump from an 11g database running in the development environment. Now I want to import this dump into a 10g database running in the QA environment.
While taking the export from the 11g database I used this command, and the backup was successful:
expdp system TABLES=sss_exp_test.EXP_SB_HEADER_TMP VERSION=10.2 DIRECTORY=RMSDEV_IMP_DIR DUMPFILE=EXP_SB_HEADER_TMP.dmp LOGFILE=EXP_SB_HEADER_TMP_expdp.log
When I try to import this dump into 10g I get an error:
impdp system TABLES=sss_exp.EXP_SB_HEADER_TMP DIRECTORY=RMS_DATA_PUMP DUMPFILE=EXP_SB_HEADER_TMP.dmp LOGFILE=EXP_SB_HEADER_TMP_impdp.log
Import: Release 10.2.0.5.0 - 64bit Production on Sunday, 14 October, 2012 19:58:53
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39040: Schema expression "SCHEMAS" must identify exactly one schema.
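One detail visible in the two commands above (an observation, not a confirmed diagnosis): the export was taken from schema sss_exp_test, while the import asks for sss_exp. A sketch that keeps the source schema name in TABLES and remaps it instead:

impdp system TABLES=sss_exp_test.EXP_SB_HEADER_TMP REMAP_SCHEMA=sss_exp_test:sss_exp DIRECTORY=RMS_DATA_PUMP DUMPFILE=EXP_SB_HEADER_TMP.dmp LOGFILE=EXP_SB_HEADER_TMP_impdp.log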
I would like a rundown of the implications of doing a MERGE by ROWID in the following fashion:
MERGE INTO XXWT_AP_ACCRUALS_RECEIPT_F EXT
USING ( SELECT PO_DISTRIBUTION_ID,
...
Can this lead to an "unstable set of rows"? Is it possible for the ROWIDs to change during the execution of this statement, meaning certain ROWIDs identified in the SELECT will not actually be updated when it comes to the MERGE operation?
Basically, is it sound practice to merge on ROWID in cases where you don't have a WHEN NOT MATCHED clause?
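For concreteness, a minimal self-contained sketch of the pattern being asked about (table and column names are invented, not the poster's):

MERGE INTO accruals_fact t
USING (SELECT f.ROWID AS rid, s.accrual_amount
       FROM   accruals_fact f
       JOIN   receipt_staging s ON s.po_distribution_id = f.po_distribution_id) src
ON (t.ROWID = src.rid)
WHEN MATCHED THEN
  UPDATE SET t.accrual_amount = src.accrual_amount;

The question is whether a row identified by f.ROWID in the USING subquery can move (for example through row movement or a concurrent rebuild) before the UPDATE branch touches it.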
The issue is slow insertion into one particular table (table A). Insertion into all the other tables (B, C, D) in the same schema works properly, but when I try to insert into this one particular table (table A) in the same schema it takes a long time to complete. Daily insertion is 6,000 rows.
I have checked all the details such as tablespace size, analyzing the table, analyzing the indexes, and so on. There is no error in the alert log file.
We have two database instances on the same server. One was left at 9.2.0.7 and one was upgraded to 10.2.0.3. Connecting externally (sqlplus '/as sysdba') to the 9.2.0.7 database is lightning fast. Connecting externally to the 10.2.0.3 database is very slow, comparatively speaking. This is on an IBM AIX-5L (64-bit) machine. We are using "tnsnames".
We have cloned our Oracle Applications (11i) environment; since then the ERP's performance has become slow (e.g. fetching data). What can we do to increase the performance?
I am using a dblink to merge the data, with the following merge statement:
merge into APP_USER.USR_NEW_RIGHTS@NEW_RIGHTS t
using (select 'test' GRANTEE, 'TESTxxx' ROLE from dual) s
on (t.GRANTEE = s.GRANTEE and t.ROLE = s.ROLE)
when not matched then
  insert (ID, GRANTEE, ROLE, XRIGHT, COMPANY, OWNER, TABLENAME)
  values ('', 'test', 'TESTxxx', null, null, null, null);
I know that I have to commit, and it works when I insert data with a normal insert statement via the database link, but it seems that the merge doesn't work.
These values are present as substrings in a particular column of the source view, so I need to flag those records. For every record, I need to check whether all the values present in the reference table match or not. If they match, then the record should be flagged.
I cannot use the IN operator, as we are not checking for an exact match; we are checking whether the value is present anywhere in that column's value.
Looping results in a performance issue. We could use PL/SQL for this, but the source view is fed into an ETL internal file.
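A set-based sketch of the flagging, avoiding a PL/SQL loop (view, table and column names are assumptions, and "all values must match" is read as: every reference value appears somewhere in the column):

SELECT v.*,
       CASE
         WHEN NOT EXISTS (SELECT 1
                          FROM   ref_values r
                          WHERE  INSTR(v.text_col, r.ref_val) = 0)
         THEN 'Y'
         ELSE 'N'
       END AS match_flag
FROM   source_view v;

If the requirement is instead "any reference value matches", the NOT EXISTS becomes a plain EXISTS with INSTR(...) > 0.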