We have an Oracle 9i database named A with two schemas, A1 and A2. Users enter data into the A1 schema, and the incremental data is moved to the A2 schema after some verifications performed by a scheduled job on the A1 schema.
We want to move this incremental data to the B1 schema of another Oracle 9i database, B, in real time as the data arrives in the A2 schema of database A. We have access to the A2 schema of database A and the B1 schema of database B. What is the best way or best practice to do this, and is there any third-party tool or Oracle utility available to perform it?
Note: the A2 schema of database A and the B1 schema of database B have a one-to-one mapping. We want to avoid using a trigger on the A2 schema because of the way data gets populated from the A1 to the A2 schema.
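One approach that might fit, given the no-trigger constraint on A2 (a minimal sketch, not a verified recommendation): a fast-refreshable materialized view in B1 over a database link to A2 can pull only the incremental changes on a short refresh interval. The table name some_table, the link name a2_link, the credentials, and the one-minute interval are all placeholders; Oracle 9i Streams and Change Data Capture are the other built-in options worth evaluating if true real-time propagation is required.

-- On database A, in the A2 schema: a materialized view log records incremental changes
CREATE MATERIALIZED VIEW LOG ON some_table WITH PRIMARY KEY;

-- On database B, in the B1 schema: a database link back to A2 (credentials and TNS alias are placeholders)
CREATE DATABASE LINK a2_link CONNECT TO a2 IDENTIFIED BY a2_password USING 'A';

-- Fast-refreshing materialized view that applies only the changes recorded in the log
CREATE MATERIALIZED VIEW some_table_mv
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1/1440   -- roughly every minute
  AS SELECT * FROM some_table@a2_link;

Note that the materialized view log is an object created in the A2 schema, so it is worth confirming that creating objects there (as opposed to triggers) is acceptable.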
I have upgraded an Oracle database from 9i to 11g using the export and import utility. After the migration we are facing a performance issue in report generation: the first execution of a report takes a very long time, and when we generate the same report two or three times the execution time improves considerably compared with the first execution.
Two days back I restarted the database and found the same issue. There are around 300 reports, and it is not possible to generate all of them two or three times every time we restart the database.
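A commonly recommended step after an exp/imp upgrade, offered here only as a sketch and not as a diagnosis, is to gather fresh optimizer statistics (dictionary, fixed objects, and the application schema), since stale or missing statistics can make the first executions parse into poor plans; the remaining first-run cost is usually just physical I/O into a cold buffer cache, which repeated runs avoid. APP_SCHEMA is a placeholder.

-- Run as a DBA after the upgrade; APP_SCHEMA is a placeholder
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_SCHEMA', cascade => TRUE);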
With 11gR2, by default, on STARTUP the standby database is opened in READ ONLY mode with Intended State: APPLY-ON, so the Active Data Guard option is in use.
Is there a way to deactivate real-time query permanently, so that whether I issue STARTUP or STARTUP MOUNT, the standby stays only mounted with Intended State: APPLY-ON?
The only way I found is to do the following:
SQL> startup
DGMGRL> edit database 'PHNXENT' set state='APPLY-OFF';
then
SQL> startup mount
DGMGRL> edit database 'PHNXENT' set state='APPLY-ON';
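If the standby is managed by Oracle Restart or Grid Infrastructure, another possibility (an assumption to verify in your environment, not something confirmed here) is to change the registered start option so that a startup through the clusterware resource only mounts the database:

srvctl modify database -d PHNXENT -s mount
srvctl config database -d PHNXENT

The second command shows the registered start option so the change can be confirmed. This only affects startups driven through srvctl/CRS; a manual STARTUP from SQL*Plus would still open the database read only unless STARTUP MOUNT is used.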
What is the need for a temporary table in Oracle? What is the advantage of a temporary table over a normal table? Is a temporary table a logged operation? What is a real-time scenario for using a temporary table?
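For context, Oracle's temporary tables are global temporary tables: the table definition is permanent, but the rows are visible only to the session (or transaction) that inserted them, live in the temporary tablespace, and generate less redo than a normal table (their undo is still logged). A minimal sketch with made-up names:

-- Definition is created once; rows are private per transaction
CREATE GLOBAL TEMPORARY TABLE gtt_order_staging (
  order_id  NUMBER,
  order_amt NUMBER
) ON COMMIT DELETE ROWS;   -- use ON COMMIT PRESERVE ROWS to keep rows for the whole session

INSERT INTO gtt_order_staging VALUES (1, 100);
SELECT COUNT(*) FROM gtt_order_staging;   -- other sessions see zero rows here

A typical real-time use is staging intermediate results inside a batch job or report, so each session works with its own scratch data without interfering with others.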
I have a primary database orcl and logical standby database orcl_std.
Real-time apply is enabled. I have standby redo logs on both the primary and standby sides, and I've started recovery with the command below:
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
When I create a new table in the primary database, I am unable to see it on the standby database (although real-time apply is enabled). However, when I perform a log switch on the primary, I can see the new table in the standby database.
My question is: why is real-time apply not working in my scenario? I was expecting to see the new table in the standby database immediately after it is created in the primary database. Why am I supposed to wait for a log switch with real-time apply?
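A couple of standard checks that may help narrow this down (nothing here is site-specific; the views are standard): real-time apply only works if redo is actually being shipped into standby redo logs by LGWR transport, so it is worth confirming on the standby that a standby redo log is ACTIVE and looking at the reported lag. If the primary's log_archive_dest_n for this standby uses ARCH transport, redo only ships at log switch, which would match the behaviour described.

-- On the standby: is RFS writing current redo into a standby redo log?
SELECT group#, thread#, sequence#, status FROM v$standby_log;

-- Reported transport and apply lag
SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');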
CREATE OR REPLACE TYPE sample_object IS OBJECT (id NUMBER, name VARCHAR2(30));
/
CREATE OR REPLACE TYPE sample_table IS TABLE OF sample_object;
/
I have read some docs, but I didn't get any information on where exactly we use this. Provide one real-time scenario with an example. How is this different from a record?
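One difference worth showing: because sample_table is a schema-level SQL type, a collection of that type can be filled in bulk and then queried with the TABLE() operator inside SQL, whereas a PL/SQL record (and a table of records) exists only inside PL/SQL and cannot appear in a SQL statement. A small sketch reusing the types above; the employees table and its columns are only an illustration:

DECLARE
  l_people sample_table;
BEGIN
  -- Bulk-load the collection from a query
  SELECT sample_object(employee_id, first_name)
    BULK COLLECT INTO l_people
    FROM employees
   WHERE ROWNUM <= 5;

  -- Because sample_table is a SQL type, the collection can be queried like a table
  FOR r IN (SELECT id, name FROM TABLE(l_people) ORDER BY id) LOOP
    DBMS_OUTPUT.PUT_LINE(r.id || ' ' || r.name);
  END LOOP;
END;
/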
I've got a query running a SELECT COUNT(*) over a table. The default plan takes on the order of 15 minutes to return; a plan hinted to use a different index takes 3 minutes to return.
Unfortunately I can't get at the index stats and a few other areas which I suspect may be key here. When running AUTOTRACE against the two queries I see fairly different values, as one would expect.
Query
select count(*)
  from fulfilmentitem bfi
 where created >= sysdate-30
   AND bfi.status = 'FA'
   AND bfi.fulfilmentmethod = 'D'

Slow run

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |     1 |    15 | 33119   (1)|
|   1 |  SORT AGGREGATE              |                         |     1 |    15 |            |
|*  2 |   TABLE ACCESS BY INDEX ROWID| FULFILMENTITEM          | 12525 |   183K| 33119   (1)|
|*  3 |    INDEX RANGE SCAN          | IDX_FULFIL_METHODSTATUS |   250K|       |  1786   (1)|
----------------------------------------------------------------------------------------------
[code]....
IDX_FULFIL_METHODSTATUS is across FULFILMENTMETHOD and STATUS, in that order. IDX_BFI_CREATED is on CREATED and is approximately 70% of the size of the other index.
The row counts estimated in the explain plan are off; the COUNT(*) comes in at 32.8k rows. As you will have seen, the fast run shows a pretty significant increase in consistent gets compared to the slow run, and a decent though not dramatic drop in physical reads.
My uncertainty is around whether these changes in consistent gets/physical reads would typically be enough to explain the real-time improvements I'm observing, or whether other (albeit perhaps temporary) factors are involved. It is a production OLTP environment, so the data will be rapidly changing and that may be a factor.
I know it can never be an exact science without intimately knowing the hardware, current loads, etc., but I also know that there's enough experience on these boards to have a loose handle on whether the time shifts between the queries are likely (or not) to be reflective of the stat changes, or whether those differences alone shouldn't (or typically wouldn't) account for it.
I'm thinking about instructing the query to ignore its original plan, but I am hesitant to do so without being a little more confident that it isn't just a timing thing, or something other than the change of index approach, that is causing the improvement. From the AUTOTRACE stat changes observed, I couldn't put my hand on my heart and say "yup - that change is good, ignore the default index all the time for this job".
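For reference, a hedged sketch of the kind of hint being weighed up, assuming the faster plan is the one driven by IDX_BFI_CREATED (that assumption is not confirmed by the output posted):

SELECT /*+ INDEX(bfi idx_bfi_created) */ COUNT(*)
  FROM fulfilmentitem bfi
 WHERE created >= SYSDATE - 30
   AND bfi.status = 'FA'
   AND bfi.fulfilmentmethod = 'D';

If editing the application SQL is not an option, a stored outline or SQL plan baseline is the usual way to pin a plan, but that is a separate decision from whether the plan change is actually justified.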
We have our current production running on 9i and eventually want to migrate to 11gR2, but the challenges are as follows:
The 9i production is running in a San Francisco data center. The 11gR2 production is already set up and running in an Atlanta data center.
The database is around 2 TB running on 9i. We are looking to transfer this to 11gR2 and are wondering what options we have at our disposal. I was looking into EXP/IMP, but somebody said that database links would be much faster and more reliable than EXP/IMP.
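For reference, the classic route from a 9i source is original exp/imp (Data Pump cannot pull directly from a 9i database over a network link, so exp/imp or a dblink-based copy are the realistic built-in options). A minimal sketch; credentials, file names, and the choice of a full export are assumptions, and for 2 TB the elapsed time will be dominated by network and disk I/O either way, so a trial run on a subset is worthwhile. On the 9i source in San Francisco:

exp system/password FULL=Y FILE=full_9i.dmp LOG=exp_full.log

After shipping the dump file to Atlanta, on the 11gR2 target:

imp system/password FULL=Y FILE=full_9i.dmp LOG=imp_full.log IGNORE=Y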
We're planning a data migration from one application (Oracle-based) to another (also with an Oracle DB).
The origin is roughly an 80 GB database, so many millions of records are to be migrated. (Before being loaded into the destination tables, the records have to be transformed.)
The current concept is to receive all origin data in XML files, load them into a staging area (a dedicated migration schema in Oracle), then transform and load them into the destination tables.
We have three days for the whole migration (including the extract from the origin database, transform, load, and backup after completion...).
My question is whether a migration with XML files is a good concept. I think XML processing is much slower than doing the same with CSV files. My proposal to migrate an Oracle dump (so that we would have the original data in our staging area) was declined.
Is migrating mass data with XML files a good approach, or are there performance or other issues?
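As a point of comparison for the CSV alternative: flat files can be exposed to the staging schema through an external table with very little parsing overhead, after which the transform and load are plain set-based SQL. A minimal sketch; the directory path, file name, columns, and target table are placeholders:

CREATE DIRECTORY mig_dir AS '/staging/files';

CREATE TABLE stg_customers_ext (
  customer_id   NUMBER,
  customer_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY mig_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('customers.csv')
);

INSERT /*+ APPEND */ INTO customers_target
SELECT customer_id, UPPER(customer_name)
  FROM stg_customers_ext;

XML can also be loaded set-based (for example with XMLTABLE), but the parsing cost per row is typically higher than for delimited files, which is why the data volume and the three-day window matter for this decision.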
I have created a new table so that, instead of having 26 columns for the payment amount for each week, I have one column for the pay amount and one column to represent the week, as below:
employee_id payroll_pay payroll_pay_week
How do I migrate the data from the old table to the new table?
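A hedged sketch of one way to do it, assuming the old table is called payroll_old with weekly columns week_1 through week_26 and the new table matches the layout above (all of these names are guesses, adjust to the real ones). On 11g the UNPIVOT clause does this directly; on earlier versions a UNION ALL of 26 single-week selects achieves the same result.

INSERT INTO payroll_new (employee_id, payroll_pay, payroll_pay_week)
SELECT employee_id, payroll_pay, payroll_pay_week
  FROM payroll_old
  UNPIVOT (payroll_pay FOR payroll_pay_week IN
            (week_1 AS 1, week_2 AS 2, week_3 AS 3 /* ...continue through week_26 AS 26 */));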
I need to migrate data from a MySQL database to an Oracle 11g database.
a) Is there any method available to export all the SQL, such as the table scripts, constraint scripts, and data (INSERT) scripts, from MySQL, so that we can apply the SQL directly to the Oracle schema after making the necessary changes (such as datatypes)?
b) Is there any free tool available for the migration?
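On (a), a hedged sketch of what mysqldump can produce as plain SQL scripts (options as documented for MySQL 5.x; database and table names are placeholders, and the generated DDL and datatypes will still need manual conversion for Oracle):

mysqldump --compatible=oracle --skip-extended-insert mydb > mydb_full.sql
mysqldump --no-create-info --skip-extended-insert mydb mytable > mytable_inserts.sql

On (b), Oracle SQL Developer is free to download and includes a migration facility for MySQL, which may be worth evaluating alongside the script-based approach.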
I need to generate a report, using PL/SQL code, that counts the number of rows of all the tables in the source and target databases. The report should consist of the following columns:
table name | source table row count | target table row count | mismatch
Can you provide the PL/SQL code?
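A minimal sketch of the idea, assuming the block runs on the source database, reaches the target through a database link named target_link, and that DBMS_OUTPUT is an acceptable way to render the report (the link name and output format are assumptions):

SET SERVEROUTPUT ON

DECLARE
  l_src_cnt NUMBER;
  l_tgt_cnt NUMBER;
BEGIN
  DBMS_OUTPUT.PUT_LINE('TABLE_NAME | SOURCE_COUNT | TARGET_COUNT | MISMATCH');
  FOR t IN (SELECT table_name FROM user_tables ORDER BY table_name) LOOP
    -- Count the rows locally and on the target over the database link
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM "' || t.table_name || '"' INTO l_src_cnt;
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM "' || t.table_name || '"@target_link' INTO l_tgt_cnt;
    DBMS_OUTPUT.PUT_LINE(t.table_name || ' | ' || l_src_cnt || ' | ' || l_tgt_cnt || ' | ' ||
                         CASE WHEN l_src_cnt = l_tgt_cnt THEN 'NO' ELSE 'YES' END);
  END LOOP;
END;
/

This assumes every table exists under the same name on both sides; tables missing on the target would raise an error and may need an exception handler.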
I got an assignment to create an Oracle 11g database. I will be provided with a full Data Pump export dump of an Oracle 10g database on Linux, and I need to import it into an 11g database on Windows. I have no information about the tablespaces, users, etc. I have created the database with the SYSTEM, SYSAUX, UNDOTBS, TEMP and USERS tablespaces.
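One hedged way to find out what the dump expects before importing (the directory object, file name, and credentials are placeholders): ask impdp to write the DDL it would execute into a script instead of running it, then read the CREATE TABLESPACE and CREATE USER statements from that script.

impdp system/password DIRECTORY=dp_dir DUMPFILE=full_10g.dmp FULL=Y SQLFILE=full_ddl.sql

After reviewing full_ddl.sql, the missing tablespaces can either be created in advance or redirected with REMAP_TABLESPACE parameters on the real import.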
We are planning to migrate data from an application called Clintrace to another application called Argus Safety. Both applications are related to pharmacovigilance safety operations and their functionality is similar, so both databases hold the same data even though the table structures might be different. Both databases are Oracle: the Clintrace DB is 9i and the Argus DB is 11g.
We have data migration scripts written for Oracle. The data is not huge, but we are observing that the migration is faster in the development lab and about 5x slower at the production site.
The development Oracle setup is on Windows and the production setup is on Solaris. I have attached the AWR report generated for a period where the migration ran for 3 hours and was stopped due to slow performance.
Here is my initial analysis:
1) The first timed event is DB CPU. Hence I feel the migration scripts can be modified to run in parallel so that they finish faster (see the sketch after this list). However, the question arises: why does it run faster in the development environment if this is the issue?
2) I tried increasing large_pool_size to 512M, sga_max_size to 8G, and sga_target to 8G (from 0, 4G, and 4G respectively).
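On point 1, a hedged sketch of what parallelising a CPU-bound insert-as-select step could look like (the table names and degree are placeholders, and whether the migration scripts are actually structured as insert-as-selects is an assumption):

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(tgt, 4) */ INTO target_table tgt
SELECT /*+ PARALLEL(src, 4) */ *
  FROM source_table src;

COMMIT;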
I have attached the AWR report, and below are the /etc/system contents for the Solaris settings.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,1,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
set noexec_user_stack_log=1
* IBMdpo vpath_START (do not remove)
* default SCSI timeout is 60 seconds
* uncomment to change SCSI timeout
* set sd:sd_io_time=0x1e
forceload: drv/vpathdd
* IBMdpo vpath_END (do not remove)
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
P.S. The awr report is renamed to .txt from .html to be able to upload the file.
I have Arabic data stored in the two encodings below in an Oracle AL32UTF8 database:
1 million rows in WE8MSWIN1252
0.5 million rows in AR8MSWIN1256
In all cases I would like to convert the 1 million WE8MSWIN1252 rows into AR8MSWIN1256. I could convert the data encoding from 1252 to 1256 using SQL Developer, but I had no luck using the Oracle export/import utilities (both exp and expdp). I'm thinking maybe a certain locale is required for export/import to work.
Also, my company said that SQL Developer is a free utility that may not be supported by Oracle, so I should use export and import for this. I need to convert only one table.
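Before settling on an export/import strategy, it may help to confirm what bytes are actually stored, since the answer changes depending on whether the rows were inserted with correct conversion or passed through untouched. A hedged diagnostic sketch; the table and column names are placeholders:

SELECT arabic_text, DUMP(arabic_text, 1016) AS stored_bytes
  FROM my_table
 WHERE ROWNUM <= 5;

The 1016 format shows the character set tag and the hex bytes of each value, which makes it possible to tell whether the stored data really is WE8MSWIN1252-encoded bytes sitting inside the AL32UTF8 database.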
We need to migrate our 10gR2 single-instance database, which uses a conventional file system, to a two-node 11gR2 RAC on ASM (on the same Windows Server platform).
How can I migrate my production database using Data Pump? I have a full Data Pump export from the target, but I don't know how to import it: should I import schema by schema or do a full import, do I need to first create the tablespaces manually on the destination, and should I exclude indexes, constraints, and statistics?
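A hedged sketch of the kind of full import being asked about (the directory object, file names, and the tablespace names in the remap are placeholders; whether to exclude statistics depends on whether they will be regathered afterwards). A FULL import creates the users itself, but tablespaces are created with their original datafile paths, so on a different storage layout they are usually pre-created manually or handled with REMAP_TABLESPACE or REMAP_DATAFILE:

impdp system/password DIRECTORY=dp_dir DUMPFILE=prod_full.dmp FULL=Y LOGFILE=imp_prod.log EXCLUDE=STATISTICS REMAP_TABLESPACE=old_ts:new_ts

Importing schema by schema from the same dump (with SCHEMAS=...) is also possible if a full import turns out not to be wanted.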