I need to move everything from database 'X' to database 'Y' (assume both are Oracle 11.2). Which would be the most appropriate way to achieve that? I thought of Transportable Tablespaces - URL..... but I'm worried about its limitations, especially this one: SYSTEM Tablespace Objects - You cannot transport the SYSTEM tablespace or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences. This means all PL/SQL code (packages/procedures/functions) will be affected.
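If the SYS-owned objects and PL/SQL are the concern, a full Data Pump export/import may be the simpler route for a same-version 11.2 move, since it carries users, privileges, synonyms, and PL/SQL along with the data. A minimal sketch (the directory object and file names are assumptions):

    expdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full_x.dmp LOGFILE=full_x_exp.log
    impdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full_x.dmp LOGFILE=full_x_imp.log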
I'm trying to run a report that has a moving date, and I need to find data that's within 12 months of a given date.
So for example... customers come in every day, all year long. I wanted to find the number of unique customers in a year. But the year is moving... so 1 year from 1/15/2011 is 1/14/2012, and 1 year from 1/16/2011 is 1/15/2012. So I had something like this, but it doesn't quite work:
SELECT ...
       NVL(COUNT(DISTINCT
             CASE
               WHEN TX.DATE_OF_FIRST_VISIT
                    BETWEEN TO_DATE(TX.DATE_OF_FIRST_VISIT, 'MM-YYYY')
                        AND ADD_MONTHS(TO_DATE(TX.DATE_OF_FIRST_VISIT, 'MM-YYYY'), 12)
               THEN TX.CLINIC_ID || TX.PATIENT_UNIQUE_ID
             END), 0) AS "YEAR_1"
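For comparison, a hedged sketch of a rolling 12-month window (the anchor bind :start_date and the '-' separator are assumptions; note that applying TO_DATE to a column that is already a DATE silently converts it to a string and back, which is often the source of "doesn't quite work"):

    SELECT NVL(COUNT(DISTINCT tx.clinic_id || '-' || tx.patient_unique_id), 0) AS year_1
    FROM   tx
    WHERE  tx.date_of_first_visit >= :start_date
    AND    tx.date_of_first_visit <  ADD_MONTHS(:start_date, 12);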
Does it work when copying datafiles from 10gR2 to 11gR2? I want to move one huge tablespace (which contains one table) from 10gR2 to 11gR2; what is the best method to do that?
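Transporting a tablespace from a lower release into a higher one is supported, so a 10gR2 to 11gR2 move is a reasonable fit here. A minimal sketch, assuming the tablespace is named BIG_TBS, the platforms share endianness, and dp_dir is an existing directory object:

    -- on the 10gR2 source:
    ALTER TABLESPACE big_tbs READ ONLY;
    expdp system/password TRANSPORT_TABLESPACES=big_tbs DIRECTORY=dp_dir DUMPFILE=big_tbs.dmp
    -- copy the datafile(s) and dump file to the target server, then on 11gR2:
    impdp system/password TRANSPORT_DATAFILES='/u01/oradata/big_tbs01.dbf' DIRECTORY=dp_dir DUMPFILE=big_tbs.dmp
    ALTER TABLESPACE big_tbs READ WRITE;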
In my production DB, 5 datafiles were created in the same tablespace, each 25GB in size. Data is stored across all 5 datafiles, but it amounts to just 5GB in total. I want to move the data from the 5 datafiles into a single datafile or a couple of datafiles.
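One hedged approach (tablespace, object, and file names are assumptions): create a right-sized tablespace, move the segments into it, and drop the old one. Simply resizing the existing files only works if the 5GB of extents happens to sit at the start of each file.

    CREATE TABLESPACE data_new DATAFILE '/u01/oradata/data_new01.dbf' SIZE 6G;
    ALTER TABLE my_schema.my_table MOVE TABLESPACE data_new;
    ALTER INDEX my_schema.my_index REBUILD TABLESPACE data_new;  -- indexes go unusable after a table move
    DROP TABLESPACE data_old INCLUDING CONTENTS AND DATAFILES;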
I need to prepare a script to move all objects from one tablespace to another. Should I move all the objects individually using the ALTER TABLE command? I got all the object information from the DBA_SEGMENTS view.
I have a large number of tables and indexes in that tablespace.
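A hedged sketch of a statement generator built on DBA_SEGMENTS (tablespace names are assumptions); note that tables take ALTER TABLE ... MOVE while indexes take ALTER INDEX ... REBUILD, and LOB or partitioned segments need their own clauses:

    SELECT 'ALTER TABLE ' || owner || '.' || segment_name || ' MOVE TABLESPACE new_tbs;'
    FROM   dba_segments
    WHERE  tablespace_name = 'OLD_TBS' AND segment_type = 'TABLE'
    UNION ALL
    SELECT 'ALTER INDEX ' || owner || '.' || segment_name || ' REBUILD TABLESPACE new_tbs;'
    FROM   dba_segments
    WHERE  tablespace_name = 'OLD_TBS' AND segment_type = 'INDEX';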
I have imported data into the database using SQL*Loader into a flat table. Now I need to move the data from this table to another table. This is a production system and I must keep it online, so I decided to write a script that moves the data in small chunks and commits frequently to avoid waits and table locks.
Regarding the script, I have a question. I can do the bulk collect of rowids. Is it possible to optimize the insert and delete in a similar way, instead of doing the insert/delete in a loop for each rowid?
declare
  type t_rowids is table of rowid;
  rowids t_rowids;
begin
  loop
    select rowid bulk collect into rowids
    from ims_old.values_f2
    where rownum < 1000;
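Yes - FORALL can drive both statements straight from the collection. A hedged sketch along those lines (the target table ims_new.values_f2 is an assumption):

    DECLARE
      TYPE t_rowids IS TABLE OF ROWID;
      rowids t_rowids;
    BEGIN
      LOOP
        SELECT rowid BULK COLLECT INTO rowids
        FROM   ims_old.values_f2
        WHERE  ROWNUM < 1000;

        EXIT WHEN rowids.COUNT = 0;   -- nothing left to move

        FORALL i IN 1 .. rowids.COUNT
          INSERT INTO ims_new.values_f2
          SELECT * FROM ims_old.values_f2 WHERE rowid = rowids(i);

        FORALL i IN 1 .. rowids.COUNT
          DELETE FROM ims_old.values_f2 WHERE rowid = rowids(i);

        COMMIT;
      END LOOP;
    END;
    /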
One of our auditing recommendations is to move the table AUD$ to a tablespace separate from SYSTEM. Why is this recommendation important, and how do we do it?
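The audit trail grows continuously, and letting it fill SYSTEM can bring the whole database down, which is why it is usually moved out. On 11g the supported way is the DBMS_AUDIT_MGMT package; a minimal sketch (the tablespace name AUDIT_TBS is an assumption):

    BEGIN
      DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
        audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,  -- the AUD$ table
        audit_trail_location_value => 'AUDIT_TBS');
    END;
    /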
I've moved a package to a new schema, and all the packages in the original schema that reference the moved package now fail to compile. The moved package has had a public synonym created, and the execute privilege was assigned to the original schema by role. What am I missing? Using 11gR2, version 11.2.0.3.0.
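One likely culprit: privileges granted through a role are not visible inside definer's-rights stored PL/SQL, so the referencing packages need a direct grant. A hedged sketch (schema and package names are assumptions):

    GRANT EXECUTE ON new_schema.my_pkg TO orig_schema;  -- direct grant, not via a role
    ALTER PACKAGE orig_schema.dependent_pkg COMPILE;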
I have looked at the code you pointed me to and have attempted to get it to work using a package, but I can't even get the package to compile.
CREATE OR REPLACE PACKAGE BODY trigger_api AS
  PROCEDURE tab1_row_change (p_numass IN varchar2, p_datcre IN date) IS
  BEGIN
    INSERT INTO tempjob (numass, datecre) VALUES (p_numass, p_datcre);
  END tab1_row_change;
[code]....
Doing this process from code is not an option and MUST happen automatically via triggers. The mutating trigger error can sometimes be avoided: URL....
I am trying to create a trigger which does the following: a flag in the initial table is set to Y. When this happens, the record needs to be inserted into a history table and then DELETED from the calling table.
It must happen in triggers, but I keep getting the mutating table error. I have tried to use a compound trigger, but with no luck, and I just don't really understand how to get this to work.
Doing this process from code is not an option and MUST happen automatically via triggers.
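A hedged sketch of the compound-trigger pattern (table, column, and key names are assumptions): collect the keys of the flagged rows in the AFTER EACH ROW section, then do the history insert and the delete in AFTER STATEMENT, where the table is no longer mutating.

    CREATE OR REPLACE TRIGGER trg_move_to_history
      FOR UPDATE OF flag ON initial_table
      COMPOUND TRIGGER

      TYPE t_ids IS TABLE OF initial_table.id%TYPE;
      g_ids t_ids := t_ids();

      AFTER EACH ROW IS
      BEGIN
        IF :NEW.flag = 'Y' THEN
          g_ids.EXTEND;
          g_ids(g_ids.COUNT) := :NEW.id;
        END IF;
      END AFTER EACH ROW;

      AFTER STATEMENT IS
      BEGIN
        FORALL i IN 1 .. g_ids.COUNT
          INSERT INTO history_table
          SELECT * FROM initial_table WHERE id = g_ids(i);

        FORALL i IN 1 .. g_ids.COUNT
          DELETE FROM initial_table WHERE id = g_ids(i);

        g_ids.DELETE;   -- reset state for the next statement
      END AFTER STATEMENT;

    END trg_move_to_history;
    /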
I need to calculate the average salary for the three days prior, leaving out the current row. That should happen for every row, moving backwards. I have given all the details.
create table Employee(
  ID VARCHAR2(4 BYTE) NOT NULL,
  name varchar(20),
  Start_Date DATE,
  Salary Number(8,2),
  mv_avg number(8,2)
[code]....
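A hedged sketch using an analytic window that spans the three preceding rows and excludes the current one (this assumes one row per day, ordered by Start_Date):

    SELECT id, name, start_date, salary,
           AVG(salary) OVER (ORDER BY start_date
                             ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING) AS mv_avg
    FROM   employee
    ORDER  BY start_date;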
I need to move database ORCL into our existing central database CNTR (both are on the same OS and Oracle version). I started doing an exp of each schema from ORCL and an imp into CNTR.
But there is one schema, EXMP, in database ORCL which also exists in the CNTR database with the same tables and indexes. The data under schema EXMP in ORCL should be added to schema EXMP in CNTR.
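With the legacy exp/imp tools, the usual way to append into pre-existing tables is IGNORE=Y, which suppresses the "table already exists" errors and loads the rows anyway (watch out for unique-constraint collisions on duplicate keys). A hedged sketch, file names assumed:

    exp system/password OWNER=EXMP FILE=exmp.dmp LOG=exmp_exp.log
    imp system/password FROMUSER=EXMP TOUSER=EXMP FILE=exmp.dmp IGNORE=Y LOG=exmp_imp.log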
What are the steps to move OMF files in ASM? I tried the following and was not successful.
RMAN> switch database to copy;
datafile 1 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/system.357.809972853"
datafile 2 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/sysaux.363.809972837"
datafile 3 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs1.365.809972737"
datafile 4 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/users.361.809972859"
datafile 5 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs2.360.809972761"
datafile 6 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs3.359.809972787"
datafile 7 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs4.358.809972811"
RMAN> alter database open resetlogs;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 03/13/2013 16:30:05
ORA-01139: RESETLOGS option only valid after an incomplete database recovery
RMAN> alter database open;
So the switch worked; RESETLOGS says it can't be used there, so I just try a plain open, and it just hangs.
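The step that looks to be missing is a RECOVER DATABASE between the switch and the open: the copies were taken earlier, so they need media recovery to catch up, but since that is a complete recovery, no RESETLOGS is involved. A hedged sketch of the full sequence for moving the datafiles into ASM:

    RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA01';
    RMAN> SHUTDOWN IMMEDIATE;
    RMAN> STARTUP MOUNT;
    RMAN> SWITCH DATABASE TO COPY;
    RMAN> RECOVER DATABASE;
    RMAN> ALTER DATABASE OPEN;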
We have a PL/SQL procedure that reads a *.txt file using the UTL_FILE package. The contents of the file are then inserted into a database table.
At the end of the procedure we close the open file using UTL_FILE.FCLOSE.
There is a program (non-Oracle) that attempts to move the file to a new location after it has been read into Oracle. The problem is that the application cannot move the file, as the file is locked; i.e., a message displays that the file is open and cannot be moved to a new location.
Is there anything else that we are missing besides the UTL_FILE.FCLOSE?
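One common gap is an exception path that bypasses the FCLOSE, leaving the handle (and the OS-level lock) in place until the session ends. A hedged sketch of a close-safe pattern (directory and file names are assumptions):

    DECLARE
      f    UTL_FILE.FILE_TYPE;
      line VARCHAR2(4000);
    BEGIN
      f := UTL_FILE.FOPEN('DATA_DIR', 'input.txt', 'r');
      LOOP
        UTL_FILE.GET_LINE(f, line);   -- raises NO_DATA_FOUND at end of file
        -- insert the line into the table here
      END LOOP;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        UTL_FILE.FCLOSE(f);
      WHEN OTHERS THEN                -- close on any error path too
        IF UTL_FILE.IS_OPEN(f) THEN
          UTL_FILE.FCLOSE(f);
        END IF;
        RAISE;
    END;
    /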
I've inherited a DB where they are going to do a restore this weekend.
The current DB admin is using a DB restore (not a duplicate), RMAN with no catalog. The current issue is that the DB restores fine, but when we do a DELETE OBSOLETE after our backups, it asks if we want to delete the datafiles.
"All data you create in this tablespace will be encrypted using an AES256 encryption key. You cannot encrypt an existing tablespace. To encrypt data, first create an encrypted tablespace, then use alter table move, CTAS or datapump import to move your data into the encrypted space. Remember to drop the old tablespace BUT not including datafiles. Use an OS schred program to remove the old datafile. If you are on ASM you may use the including datafiles option since you can’t schred files from the OS inside an ASM instance."
But I want to know why we should NOT drop the datafiles along with the tablespace (i.e. not use 'drop tablespace my_tbs including contents and datafiles'). So what option should we use when dropping the tablespace?
Why should we use OS capabilities to remove the datafiles?
What happens if I remove the datafiles when I drop the tablespace?
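A hedged sketch of the sequence the quote describes (tablespace, table, and file names are assumptions). The reason for keeping the files at drop time is that dropping with AND DATAFILES merely unlinks them, leaving the cleartext blocks recoverable on disk; shredding overwrites them first, so the unencrypted copy of the data is actually destroyed.

    CREATE TABLESPACE enc_tbs DATAFILE '/u01/oradata/enc_tbs01.dbf' SIZE 1G
      ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);
    ALTER TABLE my_schema.my_table MOVE TABLESPACE enc_tbs;
    DROP TABLESPACE old_tbs INCLUDING CONTENTS;   -- note: no AND DATAFILES
    -- then shred and remove the old datafile at the OS level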
We're currently in a situation where the primary database server's filesystem size does not match the standby database server's filesystem size.
The standby database filesystem is almost 100% utilized, and we suggested moving some of the datafiles first to avoid threshold alerts and archive gaps.
Now, if we're going to move datafiles on the physical standby, I believe the process would be: stop managed redo apply -> shutdown standby -> OS move -> startup mount -> alter rename -> start managed redo apply. Is this correct? If not, how?
Also, would it have an effect if the controlfiles of primary and standby do not match because of the movement?
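The outlined order looks reasonable; the detail that usually bites is STANDBY_FILE_MANAGEMENT, which must be MANUAL for the rename to be allowed on a standby. A hedged sketch (paths are assumptions). The rename updates only the standby's own controlfile, so the primary is not affected by the mismatch:

    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SHUTDOWN IMMEDIATE
    -- at the OS level: mv /u01/oradata/users01.dbf /u02/oradata/users01.dbf
    STARTUP MOUNT
    ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
    ALTER DATABASE RENAME FILE '/u01/oradata/users01.dbf' TO '/u02/oradata/users01.dbf';
    ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;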
We have a root disk on all the Unix servers, mirrored with SVM using the internal disks. VxVM 5.0 MP3 is used as the volume manager to manage the SAN space allocated to the servers. In some instances, SVM is also used for allocating filesystems for applications and databases.
We are required to move all the files from SVM to VxVM.
In the process we are copying our Oracle binaries from SVM to VxVM. I am planning to use the cpio command to do the same. Once I move to VxVM (Veritas Volume Manager), I will bring down the SVM.
I have created a Master block and a Detail block. When I click on the Master block, the detail records are displayed. But when I move from the 1st record of the detail block to the 2nd record of the master block, the detail block's data is not displayed; it just fires the validation of the detail block's first field.
(When clicking on the 2nd record, i.e. Ord.No. = 2, from the detail block with the cursor on the Mapping Code field, the validation gets fired.)
What I really want is that if I click on the 2nd record of the Master, the records should automatically display in the detail block.
My form has two list items and two buttons, Add and Remove. When I click the Add button, the selected value from the left-hand list item should get populated into the right-hand list item. And when I click the Remove button, it should do the reverse.
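A hedged sketch for the Add button's WHEN-BUTTON-PRESSED trigger using the standard Forms list built-ins (block and item names such as CTRL.LEFT_LIST are assumptions; the Remove button is the mirror image with the two lists swapped):

    DECLARE
      v_val VARCHAR2(100) := :ctrl.left_list;   -- value currently selected on the left
      v_cnt PLS_INTEGER   := GET_LIST_ELEMENT_COUNT('CTRL.LEFT_LIST');
    BEGIN
      FOR i IN 1 .. v_cnt LOOP
        IF GET_LIST_ELEMENT_VALUE('CTRL.LEFT_LIST', i) = v_val THEN
          -- append to the right-hand list, then drop from the left
          ADD_LIST_ELEMENT('CTRL.RIGHT_LIST',
                           GET_LIST_ELEMENT_COUNT('CTRL.RIGHT_LIST') + 1,
                           GET_LIST_ELEMENT_LABEL('CTRL.LEFT_LIST', i),
                           v_val);
          DELETE_LIST_ELEMENT('CTRL.LEFT_LIST', i);
          EXIT;
        END IF;
      END LOOP;
    END;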