Performance Tuning :: Importing Partitioning Rows From Many Export Files
Aug 30, 2011
1) I have 5 exported dump files.
2) Each of the 5 dump files was taken at a different time.
3) Several of the dump files contain records for the same partitions.
eg:-
Dump 1:- 01-06-2010 to 30-11-2010
Dump 2:- 01-09-2010 to 31-12-2010
4) Now I want to import all of that partitioned data into a single table, without any duplicates.
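One way to do this, sketched below with hypothetical object names (the staging table, key column, and directory object are assumptions, not from the post): import each dump into a staging table, then insert only the rows whose keys are not already in the target. Note REMAP_TABLE needs 11g impdp; on 10g, import into a separate schema instead.

impdp scott/tiger directory=dp_dir dumpfile=dump1.dmp remap_table=sales:sales_stage table_exists_action=replace

-- copy across only the rows whose key is not already present
INSERT /*+ APPEND */ INTO sales t
SELECT * FROM sales_stage s
WHERE NOT EXISTS (SELECT 1 FROM sales x WHERE x.sale_id = s.sale_id);
COMMIT;

Repeating this for each of the 5 dumps means the overlapping date ranges produce no duplicates.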
View 2 Replies
Jun 16, 2011
How many records could I have in a single table without performance degradation, on Standard Edition without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Is 300 million rows in one table, with 500K transactions per day, too much?
Simple database with simple schema.
How many records begin to be too many?
View 2 Replies
Mar 28, 2012
What could the strategy be for deciding which columns to create partitions on? I understand that, to decide this, we first need to know which columns we use in the WHERE clause.
Consider the following scenario; assume that the emp table is very large.
(1)
Query - select * from emp where empno=<pk_value>
What could the partitioning column be here?
This is confusing: we access with a quite selective criterion, yet across all executions we touch a lot of data, and there is no particular date range, flag, or value to partition by. Would a hash partition on the PK column be useful here?
(2)
Query - select * from emp where empno=<pk_value> and deptno=<some value>
What could the partitioning column be here? I assume deptno. Right?
In general, what are the considerations in deciding the partitioning columns? That the column is not a unique key column? That the column is used in joins? That the column is not updatable?
Finally, will pruning take place if the query spans multiple partitions, though not all of them?
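For case (1), hash partitioning on the primary key is the usual answer when there is no natural range or list key; a minimal sketch (the partition count and columns are illustrative only):

CREATE TABLE emp (
  empno  NUMBER PRIMARY KEY,
  deptno NUMBER,
  ename  VARCHAR2(50)
)
PARTITION BY HASH (empno) PARTITIONS 16;

With empno=<pk_value> the optimizer prunes to a single hash partition, and pruning also works when a query touches several, but not all, partitions.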
View 21 Replies
May 4, 2013
I am partitioning the ap_invoices_all table and have gone through the whole process. I am writing a script for the table; the process is:
1. Create the new partitioned table with the same column structure as the original and with the partitions.
2. Insert data from the original table to the partitioned. Use parallel DML.
3. Rename the indexes of the original table
4. Create indexes to the partition table with the same columns as the original indexes.
5. Save the source code of the original table triggers
6. Rename the triggers of the original table to OLD
7. Do the table renaming. Rename original table to OLD and the partitioned table to original.
8. Drop the synonyms for the OLD table and recreate them to point to the new partitioned table
9. Grant the appropriate privileges to new partitioned table.
10. Create the triggers to the partitioned table
I want to know: do I need to copy the constraints of the original table to the partitioned table?
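Yes - a table created with CREATE TABLE ... AS SELECT (or created empty and filled with INSERT) does not inherit the original constraints other than NOT NULL, so they have to be added before the rename. A hedged sketch of steps 1-2 plus a constraint, with an assumed partition key and names:

CREATE TABLE ap_invoices_all_part
PARTITION BY RANGE (invoice_date)  -- assumed partition key
( PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE) )
AS SELECT * FROM ap_invoices_all WHERE 1 = 0;

ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t, 8) */ INTO ap_invoices_all_part t
SELECT /*+ PARALLEL(s, 8) */ * FROM ap_invoices_all s;
COMMIT;

-- constraints do not come across; add them explicitly
ALTER TABLE ap_invoices_all_part ADD CONSTRAINT ap_inv_part_pk PRIMARY KEY (invoice_id);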
View 10 Replies
Dec 6, 2011
I have an issue with export(expdp).
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
View 1 Replies
Jul 13, 2013
I need to create a script that can regularly (maybe twice a day) fetch the error-related details from trace files into a custom table in the DB.
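One possible approach, sketched with assumed paths and names: expose the trace (or alert) file through an external table, then copy the ORA- lines into the custom table on a schedule.

CREATE DIRECTORY trace_dir AS '/u01/app/oracle/diag/rdbms/orcl/orcl/trace';  -- assumed path

CREATE TABLE trace_lines_ext (line VARCHAR2(4000))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY trace_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    NOBADFILE NOLOGFILE
    FIELDS REJECT ROWS WITH ALL NULL FIELDS
  )
  LOCATION ('alert_orcl.log')  -- swap in the trace file of interest
);

INSERT INTO my_trace_errors (captured_at, msg)  -- my_trace_errors is the hypothetical custom table
SELECT SYSDATE, line FROM trace_lines_ext WHERE line LIKE '%ORA-%';
COMMIT;

A DBMS_SCHEDULER job can then run the insert twice a day.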
View 3 Replies
Oct 14, 2010
I am doing an import and export of a database. Before loading the data, I drop all the tables and then import. Is there any issue if we drop tables and import data frequently?
View 2 Replies
Oct 14, 2010
I'm trying to write a query that counts how many sessions are active during each 1-second interval, then returns the maximum number of sessions active during any interval, along with all the intervals that hit that max.
Here's a sample of the inner query results:
"INTERVAL_VALUE""SESSIONS"
"13:14:47" 13
"13:14:52" 13
"13:14:54" 13
"13:19:05" 4
"13:19:28" 4
[code]....
The max(sessions) is 13, so what I want the final output to be is:
"INTERVAL_VALUE""SESSIONS"
"13:14:47" 13
"13:14:52" 13
"13:14:54" 13
Here is the create sql for the test data:
CREATE TABLE "SESSION_TABLE"
(
"SESSIONKEY" NUMBER,
"SESSION_START_TIME" TIMESTAMP,
"SESSION_END_TIME" TIMESTAMP,
CONSTRAINT "PK_SESSIONKEY" PRIMARY KEY ("SESSIONKEY")
);
[code]....
Here is my query that works:
SELECT
maxval.interval_value,
allval.sessions,
licenselimit
FROM
(SELECT
[code]....
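A more compact form uses an analytic MAX over the inner query; a sketch, where session_counts stands in for the inner per-second counts (the name is invented here):

SELECT interval_value, sessions
FROM (SELECT interval_value,
             sessions,
             MAX(sessions) OVER () AS max_sessions  -- highest concurrency in the whole set
      FROM session_counts)
WHERE sessions = max_sessions;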
View 2 Replies
Sep 4, 2011
If a table (with a primary key) is empty (after a truncate), DML (insert, update) against it is very quick; but if the table has many rows, around 10,000,000, the DML is very slow. Why?
View 6 Replies
Jan 17, 2011
There is a column called DATA, with the LONG RAW datatype, in a table. We are facing more than 60% chained rows in this table because of this LONG RAW column.
It is very difficult to clean up these chained rows periodically, since an application using this table is business-critical in terms of high availability. Hence, is there any other way in Oracle to avoid chained rows permanently in the future?
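One option worth testing (a suggestion, not from the thread): migrate the LONG RAW column to a BLOB, which Oracle can store out of line so that wide values no longer chain the rows. A minimal sketch with a hypothetical table name; this needs an outage window and a full test first:

ALTER TABLE my_table MODIFY (data BLOB);
-- repack the existing chained rows
ALTER TABLE my_table MOVE;
-- a move leaves indexes unusable; rebuild them
ALTER INDEX my_table_pk REBUILD;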
View 5 Replies
Feb 11, 2011
Our application servers run a SELECT that returns zero rows all the time. This SELECT is in a package that the application servers call very frequently, which is causing unnecessary CPU usage.
Original query and plan
SQL> SELECT SEGMENT_JOB_ID, SEGMENT_SET_JOB_ID, SEGMENT_ID, TARGET_VERSION
FROM AIMUSER.SEGMENT_JOBS
WHERE SEGMENT_JOB_ID NOT IN
(SELECT SEGMENT_JOB_ID
FROM AIMUSER.SEGMENT_JOBS);
[code]....
Which option would be better, or do we have other options? They need to pass the columns, with zero rows, to a ref cursor.
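Since a NOT IN against the very same table can never return rows, the ref cursor can be opened on a predicate that is false by construction; a sketch (the cursor variable name is hypothetical):

OPEN p_result FOR
  SELECT segment_job_id, segment_set_job_id, segment_id, target_version
  FROM   aimuser.segment_jobs
  WHERE  1 = 0;  -- same column shape, zero rows, no anti-join work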
View 6 Replies
Feb 15, 2011
I am trying to update a million rows in one table with values from another table.
Table being updated: CI_ADJ_CHAR, column CHAR_VAL_FK1
Table from which values will be used: CK_ADJ, columns (cx_id, ci_id)
The CI_ADJ_CHAR.CHAR_VAL_FK1 values match CK_ADJ.CX_ID and should be updated with the value of CK_ADJ.CI_ID.
The CK_ADJ table has 1.3 million rows, and both columns have indexes defined. The table definition is mentioned below.
The CI_ADJ_CHAR table has 14 million rows, of which 1 million will be updated; it has an index on the ADJ_ID column but not on the CHAR_VAL_FK1 column.
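A sketch of a correlated update (a MERGE would raise ORA-38104 here, because the column being updated also appears in the join condition):

UPDATE ci_adj_char t
SET    t.char_val_fk1 = (SELECT s.ci_id FROM ck_adj s WHERE s.cx_id = t.char_val_fk1)
WHERE  EXISTS (SELECT 1 FROM ck_adj s WHERE s.cx_id = t.char_val_fk1);
COMMIT;

With 1 million of 14 million rows qualifying and no index on CHAR_VAL_FK1, expect a full scan of CI_ADJ_CHAR; the index on CK_ADJ.CX_ID at least keeps each lookup cheap.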
View 1 Replies
Aug 25, 2010
Is there any way I can see how many rows an UPDATE statement has processed while the UPDATE is still running?
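One common trick: USED_UREC in v$transaction counts the undo records generated so far, which roughly tracks the rows (plus index entries) already changed. A sketch, run from another session:

SELECT s.sid, s.serial#, t.used_urec
FROM   v$transaction t
JOIN   v$session     s ON s.taddr = t.addr;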
View 2 Replies
Apr 25, 2010
I have a big problem that came up lately: importing XML files into an Oracle database. The point is that I have extracted a whole PostgreSQL database into XML files - 236 tables, one XML file per table - and now I'm about to import them into Oracle tables. First of all, I would like to point out that I already have the structure of all the tables in the Oracle database; the files only carry the data (records) that need to be imported into Oracle.
I've been trying to make it work, and I haven't been able to get anywhere with it for over a week. Here is an example:
insert into ps_sprawozdania(miesiac, umowa_rok, umowa_nr, nr_korekty, nazwa, data_potw, data_exp, status_potw)
select
extractvalue(column_value,'/NewDataSet/Cust/miesiac'),
extractvalue(column_value,'/NewDataSet/Cust/umowa_rok'),
extractvalue(column_value,'/NewDataSet/Cust/umowa_nr'),
extractvalue(column_value,'/NewDataSet/Cust/nr_korekty'),
extractvalue(column_value,'/NewDataSet/Cust/nazwa'),
extractvalue(column_value,'/NewDataSet/Cust/data_potw'),
extractvalue(column_value,'/NewDataSet/Cust/data_exp'),
extractvalue(column_value,'/NewDataSet/Cust/status_potw')
from table(xmlsequence(xmltype(bfilename('c:\test','ps_sprawozdania.xml'))));
That was one of my attempts to import data from the file "ps_sprawozdania.xml" into the table "ps_sprawozdania" in Oracle. Here are 2 records from the XML file to show you its structure:
<NewDataSet>
<Cust>
<miesiac>7</miesiac>
<umowa_rok>2008</umowa_rok>
<umowa_nr>051/210412/01/000/08</umowa_nr>
<nr_korekty>0</nr_korekty>
<nazwa>Sprawozdanie z realizacji umowy nr 051/210412/01/000/08 za miesiąc Lipiec</nazwa>
[code]....
Maybe I need to handle it as XML data, not XML files?
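A sketch of an XMLTABLE-based alternative, which usually copes better with this than xmlsequence/extractvalue; the directory object name and the character-set id are assumptions:

INSERT INTO ps_sprawozdania (miesiac, umowa_rok, umowa_nr, nr_korekty, nazwa, data_potw, data_exp, status_potw)
SELECT x.miesiac, x.umowa_rok, x.umowa_nr, x.nr_korekty, x.nazwa, x.data_potw, x.data_exp, x.status_potw
FROM XMLTABLE('/NewDataSet/Cust'
       PASSING xmltype(bfilename('XML_DIR', 'ps_sprawozdania.xml'), nls_charset_id('AL32UTF8'))
       COLUMNS miesiac     NUMBER        PATH 'miesiac',
               umowa_rok   NUMBER        PATH 'umowa_rok',
               umowa_nr    VARCHAR2(30)  PATH 'umowa_nr',
               nr_korekty  NUMBER        PATH 'nr_korekty',
               nazwa       VARCHAR2(200) PATH 'nazwa',
               data_potw   VARCHAR2(30)  PATH 'data_potw',
               data_exp    VARCHAR2(30)  PATH 'data_exp',
               status_potw VARCHAR2(30)  PATH 'status_potw') x;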
View 39 Replies
Jul 7, 2012
Where are the intermediate rows stored before a join and GROUP BY operation?
Are the rows held in the PGA (the private SQL work area), or as blocks in the SGA buffer cache?
View 11 Replies
Sep 16, 2011
I have a rather complicated process for importing text files into my DB. I'm given thousands of files every day, comma-separated, with 80 fields each. With a bash script, I take the 45 fields I need and then split each file into a number of files, grouping the rows by three fields. Then I use SQL*Loader to insert them into the DB.
The problem is that now I must insert into two tables, and the WHEN clause doesn't allow the use of > and <.
To make things a little clearer, take this text file (already split, grouped, and ready to be inserted):
...
1,1,135,1900,0,12,114,2011/08/25 17:19:00,135,...
1,1,135,1900,0,13,119,2011/08/25 17:19:00,136,...
1,1,135,1900,0,14,117,2011/08/25 17:19:00,137,...
1,1,135,1900,0,15,113,2011/08/25 17:19:00,138,...
1,1,135,1900,0,16,119,2011/08/25 17:19:00,139,...
...
When field 6 is greater than or equal to 14, the row must go to table A. When field 6 is lower than 14, it must go to table B. I can't use external tables, as I'm on a different server.
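One workaround, sketched with hypothetical table and column names: keep SQL*Loader loading everything into a single staging table, then route the rows with a multi-table INSERT, where arbitrary predicates are allowed:

INSERT FIRST
  WHEN field6 >= 14 THEN
    INTO table_a (c1, c2, c3, field6, loaded_at) VALUES (c1, c2, c3, field6, loaded_at)
  ELSE
    INTO table_b (c1, c2, c3, field6, loaded_at) VALUES (c1, c2, c3, field6, loaded_at)
SELECT c1, c2, c3, field6, loaded_at FROM staging_tbl;
COMMIT;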
View 1 Replies
Dec 15, 2012
There is a report, generated every day in .csv format, that I want to load into the database. I want to completely automate this process, and I don't want to use the load function available in APEX. I use APEX version 3.2.
What I mean by automation is that I will create the CSV file and automatically move it to a location from which APEX can access the data. I would then write a procedure to fetch the content directly from the CSV file and write it to the database. What I need is a location from which APEX can access the data directly.
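Since APEX procedures run inside the database, the usual answer is a directory object on the database server's file system; a sketch with assumed names, paths, and columns, using an external table over the CSV:

CREATE DIRECTORY csv_dir AS '/u01/app/reports';  -- path on the DB server, assumed

CREATE TABLE daily_report_ext (col1 VARCHAR2(100), col2 NUMBER)  -- columns assumed
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
                     FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"')
  LOCATION ('report.csv')
);

INSERT INTO daily_report SELECT * FROM daily_report_ext;

The procedure that does the INSERT can then be scheduled with DBMS_SCHEDULER.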
View 1 Replies
Jul 16, 2013
An SQL query is taking much longer than usual and not completing, even after hours! The query joins a table with a quite complex view.
The same query completes in less than 2 minutes in a test database.
I would like to export the SQL plan from the test database to the prod database.
How do I export/import the execution plan of a particular SQL statement in version 10.2.0.4?
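In 10.2 there are no SQL plan baselines, but a SQL profile (for example, one accepted from the SQL Tuning Advisor on test) can be staged and moved with DBMS_SQLTUNE; a sketch, assuming a profile named my_sql_profile already exists and SCOTT owns the staging table:

-- on test
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'PROF_STG', schema_name => 'SCOTT');
EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(profile_name => 'my_sql_profile', staging_table_name => 'PROF_STG', staging_schema_owner => 'SCOTT');
-- move PROF_STG to prod with exp/imp or Data Pump, then on prod
EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(replace => TRUE, staging_table_name => 'PROF_STG', staging_schema_owner => 'SCOTT');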
View 2 Replies
Apr 22, 2013
I am trying to import a full DB export using Data Pump, and I get many errors for objects that already exist. Attached is the log file. The steps I did so far:
1- created the database.
2- imported the full db backup using
impdp system/xxxxxxx full=yes directory=datapump dumpfile=palbe_full_20130322.dp log=palbe_full_22042013.log
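The "already exists" errors typically come from objects seeded when the database was created (or left behind by an earlier import attempt). For tables, the table_exists_action parameter tells impdp what to do with pre-existing ones; a sketch of the same command with it added (skip, append, truncate, and replace are the possible values):

impdp system/xxxxxxx full=yes directory=datapump dumpfile=palbe_full_20130322.dp logfile=palbe_full_22042013.log table_exists_action=skip

Errors against SYS/SYSTEM-owned dictionary objects in a full import are generally safe to ignore.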
View 5 Replies
Sep 26, 2012
I'm importing a dump using these parameters:
impdp system schemas=schemaname directory=DIR transform=segment_attributes:n:table dumpfile=FILE.DMP logfile=FILE.log
Upon import, I get this error:
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'NAME' does not exist
Failing sql is:
GRANT SELECT ON "schemaname"."tablename" TO "NAME"
I know that "NAME" was created on the previous instance either role or user where the dump came. My question is, how can i remove this error since this role/user is not needed to the new instance and what parameter should i include to my import script?
View 2 Replies
Jun 6, 2012
I am using Data Pump to import a database from 10g to 11g. All the tables and users - everything - got transferred, but some grants (e.g. CREATE SESSION) on users are not being imported into 11g. The same process imports the grants if I use Data Pump into another 10g DB.
Do I need to import the grants separately for 11g?
View 18 Replies
Sep 18, 2012
I have a query regarding importing data into a partitioned table. Let me make myself clearer with an example:
I have a one-month table with 30 partitions, a single partition for each day, on one machine, say machine A. On another machine, say machine B, I create the same table with the same script used on machine A. I loaded data for days 1-15 of the month into the table on machine A, and the data for days 15-30 into the table on machine B. At the end, I want to import the data from the partitioned table on machine B, that is days 15-30, into the table on machine A. I just want to know whether the data will be imported properly, or whether I need to specify something.
I take the export partition-wise (dumps of the 15 partitions for days 15-30) and import them into the machine A table. Is it possible to import the day-wise partitions for days 15-30 into a partitioned table that already contains the data in the day 1-15 partitions?
I know this is possible
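A sketch of the partition-level round trip, with names assumed; since the later-day partitions on machine A are empty, appending is safe, and table_exists_action=append keeps impdp from stopping at the pre-existing table:

expdp scott/tiger directory=dp_dir dumpfile=p16_30.dmp tables=month_tbl:p16,month_tbl:p17
impdp scott/tiger directory=dp_dir dumpfile=p16_30.dmp table_exists_action=append

Listing each partition as table_name:partition_name in the TABLES clause is what makes the export partition-wise.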
View 2 Replies
Oct 25, 2012
I'm trying to import a DMP file through Toad, but I get the error below while importing. My DMP file is from 11g, and I am importing into a 10g server.
ORA-39000: bad dump file specification
ORA-39143: dump file "D:\oracle\product\10.2.0\admin\orcl1\dpdump\dumpfile1.dmp" may be an original export dump file
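ORA-39143 means impdp does not recognize the file as a Data Pump dump - typically because it was written by the original exp utility, in which case the classic imp tool must read it. Also, a dump taken with 11g defaults cannot be read by a 10g impdp; the usual fix is to re-export on the 11g side with a downlevel VERSION, a sketch with assumed names:

expdp scott/tiger directory=dp_dir dumpfile=dumpfile1.dmp version=10.2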
View 21 Replies
Jul 22, 2012
OS: RHEL
DB: 11.2.0.2
Every time I try to refresh my production DB with an old expdp dump file using Data Pump, I face the issue of grants and synonym creation. I would like to mention that my DB has three schemas with lots of dependencies among them, and before refreshing them I drop the schemas and recreate them:
Drop user user_name cascade;
So I want to know: is there a script with which I can capture all the grants in the DB before dropping the schemas, so that after the import I can re-grant them, and also a query with which I can get all the synonyms in the DB?
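A sketch using DBMS_METADATA and the dictionary; the schema names below are placeholders:

-- DDL for the system, role, and object grants made TO a schema (repeat per schema)
SELECT dbms_metadata.get_granted_ddl('SYSTEM_GRANT', 'SCHEMA1') FROM dual;
SELECT dbms_metadata.get_granted_ddl('ROLE_GRANT',   'SCHEMA1') FROM dual;
SELECT dbms_metadata.get_granted_ddl('OBJECT_GRANT', 'SCHEMA1') FROM dual;

-- regenerate the synonyms that point at the three schemas
SELECT 'CREATE OR REPLACE ' ||
       DECODE(owner, 'PUBLIC', 'PUBLIC SYNONYM ' || synonym_name,
                     'SYNONYM ' || owner || '.' || synonym_name) ||
       ' FOR ' || table_owner || '.' || table_name || ';'
FROM   dba_synonyms
WHERE  table_owner IN ('SCHEMA1', 'SCHEMA2', 'SCHEMA3');

Spool both outputs before the drop and replay them after the import.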
View 8 Replies
Nov 20, 2012
I have a big table into which we load about 37M records. We have an Informatica ETL that loads the data in bulk mode and creates the index after completion. The data load takes about 1 hour and the index creation about half an hour; in total it takes about 90 to 95 minutes.
I thought that if I partition and load in parallel, it would improve performance. We created 4 partitions, each holding about 9M records. The data load in bulk mode now completes in 25 minutes. But when I create the index over it, it takes about 40 minutes, so the total load time is 65 minutes.
Is there a way to get better performance and complete the load in half an hour?
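One lever to try (an assumption about your index DDL, not from the post): build the index LOCAL, PARALLEL, and NOLOGGING, so the four partition indexes are built concurrently with minimal redo, then restore the defaults:

CREATE INDEX big_tbl_ix ON big_tbl (load_key) LOCAL PARALLEL 8 NOLOGGING;
-- once built, put the index back to normal settings
ALTER INDEX big_tbl_ix NOPARALLEL LOGGING;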
View 2 Replies
May 10, 2013
I did an import from 9i to 11gR2: 1. I created the 11gR2 DB; 2. I created the tablespaces with an 8 KB block size; 3. I imported the 9i dump into the 11gR2 DB.
Now I am getting some errors in the IMP log:
1. ORA-29339: tablespace block size 4096 does not match configured block sizes - for all the tablespaces. (But I created the tablespaces with an 8 KB block size before the import.)
2. ORA-23327: imported deferred rpc data does not match platform of importing db
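ORA-29339 here suggests the dump contains CREATE TABLESPACE statements carrying the 9i source's 4 KB block size (likely under names that differ from the pre-created 8 KB tablespaces), and the 11g instance has no 4 KB buffer cache to honour them. A hedged sketch of one way to let those statements succeed:

ALTER SYSTEM SET db_4k_cache_size = 64M SCOPE = BOTH;

Alternatively, pre-create the tablespaces with exactly the names the dump expects, so imp skips its own CREATE TABLESPACE statements.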
View 4 Replies
Sep 24, 2010
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether exporting to tape is possible, and if so, whether the data would be accessible if needed later.
View 4 Replies
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database's physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
View 1 Replies
Mar 25, 2013
I want to export my Oracle table to Excel format. I am using Oracle SQL Developer 3.2. I know the option to export a table through the GUI, but I want to achieve it at the script or procedure level (SQL Developer 3.2). I have tried several methods but have not gotten the proper output.
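One scriptable route in SQL Developer's script runner (F5) is the /*csv*/ output hint plus spool; a sketch, with the path and table name as placeholders (the resulting .csv opens directly in Excel):

spool c:\temp\my_table.csv
SELECT /*csv*/ * FROM my_table;
spool off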
View 4 Replies
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE query, which runs for 13 hours to update the records - quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
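One thing to verify first (an assumption, since the statement is cut off): a PARALLEL hint on a MERGE only parallelizes the DML side when parallel DML is enabled in the session; otherwise only the query side runs in parallel and the update itself stays serial.

ALTER SESSION ENABLE PARALLEL DML;
-- then run the MERGE and confirm the plan actually shows parallel DML
SELECT * FROM TABLE(dbms_xplan.display_cursor);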
View 39 Replies