I need to prepare a script to move all objects from one tablespace to another. Should I move each object individually with an ALTER TABLE command? I collected the object information from the DBA_SEGMENTS view.
There are a large number of tables and indexes in that tablespace.
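If it is just tables and indexes, one common approach is to generate the ALTER statements from DBA_SEGMENTS rather than typing them by hand. A minimal sketch, assuming the source and target tablespaces are called OLD_TBS and NEW_TBS (substitute your own names):

SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF

SELECT 'ALTER TABLE ' || owner || '.' || segment_name ||
       ' MOVE TABLESPACE new_tbs;'
  FROM dba_segments
 WHERE tablespace_name = 'OLD_TBS'
   AND segment_type = 'TABLE'
UNION ALL
SELECT 'ALTER INDEX ' || owner || '.' || segment_name ||
       ' REBUILD TABLESPACE new_tbs;'
  FROM dba_segments
 WHERE tablespace_name = 'OLD_TBS'
   AND segment_type = 'INDEX';

Note that ALTER TABLE ... MOVE changes rowids and marks the table's indexes UNUSABLE, so every index on a moved table needs a rebuild even if it lives in another tablespace, and LOB and partitioned segments need their own MOVE LOB / MOVE PARTITION variants.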
I am working on modifying various existing reports that other developers created in the past. I noticed that on some layouts I am able to move frames and other objects by very small increments.
Yet in other reports, if I try to move something just a tiny bit up or down, for example, it moves by a very large increment. Is there some way, or some setting I need to change, to be able to move objects by smaller increments?
"All data you create in this tablespace will be encrypted using an AES256 encryption key. You cannot encrypt an existing tablespace. To encrypt data, first create an encrypted tablespace, then use alter table move, CTAS or datapump import to move your data into the encrypted space. Remember to drop the old tablespace BUT not including datafiles. Use an OS schred program to remove the old datafile. If you are on ASM you may use the including datafiles option since you can’t schred files from the OS inside an ASM instance."
But I want to know why we should NOT drop the datafiles along with the tablespace (that is, why not 'DROP TABLESPACE my_tbs INCLUDING CONTENTS AND DATAFILES'). So which option should we use when dropping the tablespace?
Why should we use OS capabilities to remove the datafiles?
What happens if I remove the datafiles as part of dropping the tablespace?
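The reasoning is about data at rest. Deleting a datafile, whether via INCLUDING CONTENTS AND DATAFILES or rm, only unlinks it; the cleartext blocks are still on disk and can be recovered with low-level tools. Shredding overwrites the blocks before removing the file. A sketch of the sequence, with a hypothetical file path:

DROP TABLESPACE my_tbs INCLUDING CONTENTS;   -- leaves the datafiles on disk

-- then overwrite and remove each old datafile at the OS level (Linux):
shred -u /u01/oradata/ORCL/my_tbs01.dbf

On ASM there is no OS-level access to the files, which is why the quoted text says INCLUDING DATAFILES is acceptable there.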
I am using Oracle version 11.2.0.3.0. We are planning to move roughly 40 tables/indexes to new encrypted tablespaces as part of TDE (Transparent Data Encryption). Currently, three tables are around 30 GB in size, one is around 800 GB, and the rest are under 2 GB, and the tables/indexes are spread across different tablespaces.
Should I create as many encrypted tablespaces as there were unencrypted ones, or should I create one encrypted tablespace and move all the tables/indexes into it?
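Either layout works, since TDE tablespace encryption is set per tablespace; the choice is mostly about I/O separation and manageability. A minimal sketch of the move itself, with hypothetical names, assuming the wallet is already open:

CREATE TABLESPACE enc_data
  DATAFILE '/u01/oradata/ORCL/enc_data01.dbf' SIZE 10G
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

ALTER TABLE app.big_table MOVE TABLESPACE enc_data;
ALTER INDEX app.big_table_pk REBUILD TABLESPACE enc_data;

For the ~800 GB table, budget for roughly double the space during the move, and expect its indexes to be unusable until rebuilt.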
How do I import a dump into a specific tablespace instead of the default USERS tablespace?
I want to import my dump file into a newly created tablespace. I created a new user called cvm and, while creating it, set its default tablespace to the new tablespace. But when I try to import my dump file, the data still goes to the USERS tablespace.
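With Data Pump the cleanest fix is to remap the tablespace at import time; with the legacy imp utility, objects keep their source tablespace unless the user has no quota there. A sketch with hypothetical directory, file, and tablespace names:

impdp cvm/password DIRECTORY=dp_dir DUMPFILE=cvm.dmp REMAP_TABLESPACE=USERS:CVM_TBS

-- for legacy imp, remove the user's ability to write to USERS first,
-- so segments fall back to the default tablespace:
REVOKE UNLIMITED TABLESPACE FROM cvm;
ALTER USER cvm QUOTA 0 ON users QUOTA UNLIMITED ON cvm_tbs;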
I have a tablespace that contains 121 datafiles (the maximum limit has been reached). As a DBA, what should we do?
I am thinking of creating a new tablespace with a datafile and assigning the users to it. If that process is correct, and the tablespace that was filled up later gets freed, can I then give the users access to both the previous (freed-up) tablespace and the current one?
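Yes, a user can hold quotas on several tablespaces at once, so granting access to both the old and the new tablespace is just a quota change. A sketch with hypothetical names:

CREATE TABLESPACE users_new
  DATAFILE '/u01/oradata/ORCL/users_new01.dbf' SIZE 2G
  AUTOEXTEND ON NEXT 100M;

ALTER USER scott
  DEFAULT TABLESPACE users_new
  QUOTA UNLIMITED ON users_new
  QUOTA UNLIMITED ON users_old;   -- keep access to the freed-up tablespace too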
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
My OS version is:
Linux damdat01 2.6.18-128.7.1.el5 #1 SMP Wed Aug 19 04:00:49 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
My database is an OLTP system.
My question is: what are the advantages and disadvantages of having one single tablespace versus multiple tablespaces?
A single tablespace is easy to maintain, but it is hard to track down I/O issues when everything sits in one tablespace.
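One way to see why multiple tablespaces help with I/O tracking: file-level statistics roll up naturally by tablespace. A quick check along these lines:

SELECT d.tablespace_name,
       SUM(f.phyrds)  AS physical_reads,
       SUM(f.phywrts) AS physical_writes
  FROM v$filestat f
  JOIN dba_data_files d ON d.file_id = f.file#
 GROUP BY d.tablespace_name
 ORDER BY physical_reads DESC;

With everything in one tablespace this query collapses to a single row, and you lose the ability to attribute the I/O to an application area.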
I need to move everything from database 'X' to database 'Y' (assume both are Oracle 11.2). What would be the most appropriate way to achieve that? I thought of Transportable Tablespaces (URL.....), but I'm worried about its limitations, especially this one: "SYSTEM Tablespace Objects - You cannot transport the SYSTEM tablespace or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences." This means all PL/SQL code (packages/procedures/functions) would be affected.
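Given that limitation, a full Data Pump export/import may fit better, since it does carry users, privileges, sequences, synonyms, and PL/SQL. A sketch with hypothetical credentials and file names:

expdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full_x.dmp LOGFILE=exp_x.log

impdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full_x.dmp LOGFILE=imp_y.log

Transportable tablespaces are much faster for bulk data, though, so a common compromise is TTS for the application tablespaces plus a metadata-only export for the code and other SYSTEM-owned objects.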
I'm trying to run a report with a moving date, and I need to find data that's within 12 months of that date.
So, for example: customers come in every day, all year long. I want to find the number of unique customers in a year, but the year is moving. One year from 1/15/2011 is 1/14/2012, and one year from 1/16/2011 is 1/15/2012. So I had something like this, but it doesn't quite work:
SELECT ...
       NVL(COUNT(DISTINCT CASE
                   WHEN TX.DATE_OF_FIRST_VISIT
                        BETWEEN TO_DATE(TX.DATE_OF_FIRST_VISIT, 'MM-YYYY')
                            AND ADD_MONTHS(TO_DATE(TX.DATE_OF_FIRST_VISIT, 'MM-YYYY'), 12)
                   THEN TX.CLINIC_ID || TX.PATIENT_UNIQUE_ID
                 END), 0) AS "YEAR_1"
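The BETWEEN condition derives its window from the same column it filters, so it cannot express a moving year (and TO_DATE applied to a DATE column forces an implicit, NLS-dependent conversion). One way to phrase a rolling 12-month distinct count, assuming the same TX columns, is to drive the window from each calendar date rather than from the row being counted:

-- distinct customers in the 12 months ending on each visit date
SELECT d.visit_date,
       (SELECT COUNT(DISTINCT t.clinic_id || t.patient_unique_id)
          FROM tx t
         WHERE t.date_of_first_visit >  ADD_MONTHS(d.visit_date, -12)
           AND t.date_of_first_visit <= d.visit_date) AS year_1
  FROM (SELECT DISTINCT TRUNC(date_of_first_visit) AS visit_date FROM tx) d
 ORDER BY d.visit_date;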
I have a database that was recently upgraded from Oracle 8.1.5 to Oracle 10.2.0.4. The database has around 300 tablespaces, and its total size is 1.5 TB.
The database was created in Oracle 8i, so all the tablespaces were DMT (dictionary-managed tablespaces), and the upgrade does not change that: after the upgrade, all the tablespaces are still dictionary managed. My requirement now is to convert all the tablespaces to LMT (locally managed tablespaces) so that I can use all the features of LMT.
This is a mission-critical database, and very little downtime can be allowed.
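Dictionary-managed tablespaces can be migrated in place with DBMS_SPACE_ADMIN, which avoids the downtime of recreating them. A sketch, with a hypothetical tablespace name:

-- find what still needs converting
SELECT tablespace_name, extent_management
  FROM dba_tablespaces
 WHERE extent_management = 'DICTIONARY';

-- migrate one tablespace in place
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('APP_DATA');

Be aware that a migrated tablespace keeps its old extent layout and does not gain every benefit of a freshly created AUTOALLOCATE LMT; recreating the tablespace and moving the segments gives the full feature set, at the cost of more downtime. SYSTEM has extra prerequisites, so migrate it last.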
Does copying datafiles from 10gR2 to 11gR2 work? I want to move one huge tablespace (which contains one table) from 10gR2 to 11gR2. What is the best method to do that?
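For a single large tablespace, transportable tablespaces are usually the fastest route, and transporting from 10gR2 into 11gR2 on the same platform is supported. A sketch with hypothetical names and paths:

-- on the 10gR2 source
ALTER TABLESPACE big_ts READ ONLY;
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('BIG_TS', TRUE);   -- then check TRANSPORT_SET_VIOLATIONS
expdp system/password DIRECTORY=dp_dir DUMPFILE=big_ts.dmp TRANSPORT_TABLESPACES=big_ts

-- copy the datafile(s) and dump file to the 11gR2 host, then
impdp system/password DIRECTORY=dp_dir DUMPFILE=big_ts.dmp TRANSPORT_DATAFILES='/u01/oradata/NEWDB/big_ts01.dbf'
ALTER TABLESPACE big_ts READ WRITE;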
In my production DB, five datafiles of 25 GB each were created in the same tablespace, and data is spread across all of them, yet the data amounts to only about 5 GB in total. I want to move the data from the five datafiles into one or two datafiles.
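Datafiles can only be shrunk down to their highest allocated block, so the first step is to see where that high-water mark sits in each file. A sketch, assuming an 8K block size and hypothetical names:

SELECT f.file_name,
       CEIL(MAX(e.block_id + e.blocks - 1) * 8192 / 1024 / 1024) AS min_size_mb
  FROM dba_extents e
  JOIN dba_data_files f ON f.file_id = e.file_id
 WHERE f.tablespace_name = 'MY_TBS'
 GROUP BY f.file_name;

ALTER DATABASE DATAFILE '/u01/oradata/ORCL/my_tbs05.dbf' RESIZE 1100M;

If a few segments sit near the end of a file, ALTER TABLE ... MOVE (or rebuilding everything into a new, smaller tablespace and dropping the old one) releases the tail so the resize can go lower.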
I have imported data into the database with SQL*Loader into a flat staging table. Now I need to move the data from this table to another table. This is a production system and I must keep it online, so I decided to write a script that moves the data in small chunks and commits frequently, to avoid waits and table locks.
Regarding the script, I have a question. I can do the bulk load of rowids. Is it possible to optimize the insert and delete in a similar way, instead of doing the insert/delete in a loop for each rowid?
DECLARE
  TYPE t_rowids IS TABLE OF ROWID;
  rowids t_rowids;
BEGIN
  LOOP
    SELECT rowid BULK COLLECT INTO rowids
      FROM ims_old.values_f2
     WHERE ROWNUM < 1000;
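Yes, FORALL can drive both the insert and the delete from the collected rowids in one round trip each. A completed sketch, assuming a hypothetical target table ims_new.values_f2 with the same structure:

DECLARE
  TYPE t_rowids IS TABLE OF ROWID;
  rowids t_rowids;
BEGIN
  LOOP
    SELECT rowid BULK COLLECT INTO rowids
      FROM ims_old.values_f2
     WHERE ROWNUM < 1000;
    EXIT WHEN rowids.COUNT = 0;

    FORALL i IN 1 .. rowids.COUNT
      INSERT INTO ims_new.values_f2
      SELECT * FROM ims_old.values_f2 WHERE rowid = rowids(i);

    FORALL i IN 1 .. rowids.COUNT
      DELETE FROM ims_old.values_f2 WHERE rowid = rowids(i);

    COMMIT;   -- commit each chunk to keep undo usage and lock time small
  END LOOP;
END;
/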
One of our auditing recommendations is to move the AUD$ table out of SYSTEM into a separate tablespace. Why is this recommendation important, and how do we carry it out?
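The concern is that AUD$ grows without bound, and a full SYSTEM tablespace can halt the whole database; keeping the audit trail in its own tablespace isolates that growth. On releases that ship DBMS_AUDIT_MGMT (11g, and earlier releases with the relevant patches), the supported move looks like this, assuming a pre-created AUDIT_TBS tablespace:

BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_TBS');
END;
/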
I've moved a package to a new schema, and all the packages in the original schema that reference the moved package now fail to compile. A public synonym has been created for the moved package, and execute privileges were granted to the original schema via a role. What am I missing? I am using 11gR2, version 11.2.0.3.0.
I have looked at the code you pointed me to and have attempted to get it to work using a package, but I can't even get the package to compile.
CREATE OR REPLACE PACKAGE BODY trigger_api AS
  PROCEDURE tab1_row_change (p_numass IN VARCHAR2, p_datcre IN DATE) IS
  BEGIN
    INSERT INTO tempjob (numass, datecre)
    VALUES (p_numass, p_datcre);
  END tab1_row_change;
...
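A package body compiles only against a matching package specification, which is the most common reason a body alone fails. A minimal compiling sketch, assuming a tempjob(numass, datecre) table exists:

CREATE OR REPLACE PACKAGE trigger_api AS
  PROCEDURE tab1_row_change (p_numass IN VARCHAR2, p_datcre IN DATE);
END trigger_api;
/

CREATE OR REPLACE PACKAGE BODY trigger_api AS
  PROCEDURE tab1_row_change (p_numass IN VARCHAR2, p_datcre IN DATE) IS
  BEGIN
    INSERT INTO tempjob (numass, datecre)
    VALUES (p_numass, p_datcre);
  END tab1_row_change;
END trigger_api;
/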
Doing this process from code is not an option; it MUST happen automatically via triggers. The mutating trigger error can sometimes be avoided: URL....
I am trying to create a trigger which does the following: a flag in the initial table is set to 'Y'. When this happens, the record needs to be inserted into a history table and then DELETED from the originating table.
It must happen in triggers, but I keep getting the mutating-table error. I have tried to use a compound trigger, but with no luck, and I just don't really understand how to get this to work.
Doing this process from code is not an option; it MUST happen automatically via triggers.
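A compound trigger can avoid the mutating-table error by collecting the affected keys in the row-level section and doing the history insert and the delete in the statement-level section, where the table is no longer mutating. A sketch under assumed names (source_tab with an id primary key and a flag column, history_tab with the same structure):

CREATE OR REPLACE TRIGGER trg_flag_to_history
FOR UPDATE OF flag ON source_tab
COMPOUND TRIGGER
  TYPE t_ids IS TABLE OF source_tab.id%TYPE;
  g_ids t_ids := t_ids();

  AFTER EACH ROW IS
  BEGIN
    IF :NEW.flag = 'Y' THEN
      g_ids.EXTEND;
      g_ids(g_ids.COUNT) := :NEW.id;   -- just remember the key; no DML here
    END IF;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    FORALL i IN 1 .. g_ids.COUNT
      INSERT INTO history_tab
      SELECT * FROM source_tab WHERE id = g_ids(i);
    FORALL i IN 1 .. g_ids.COUNT
      DELETE FROM source_tab WHERE id = g_ids(i);
    g_ids.DELETE;
  END AFTER STATEMENT;
END trg_flag_to_history;
/

The DELETE fires only delete-event triggers, not this update trigger, so it does not recurse.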
I need to calculate the average salary for the three days prior to each row, excluding the current row. That should happen for every row, moving backwards. I have given all the details below.
create table Employee (
  ID         VARCHAR2(4 BYTE) NOT NULL,
  name       VARCHAR2(20),
  Start_Date DATE,
  Salary     NUMBER(8,2),
  mv_avg     NUMBER(8,2)
...
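Assuming one row per day, "the three days prior, excluding the current row" is a window of the three preceding rows, which an analytic AVG expresses directly:

SELECT id, name, start_date, salary,
       AVG(salary) OVER (ORDER BY start_date
                         ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING) AS mv_avg
  FROM employee
 ORDER BY start_date;

A MERGE using this query as its source could then populate the mv_avg column if the value needs to be stored rather than computed on the fly.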
I need to move database ORCL into our existing central database CNTR (both are on the same OS and Oracle version). I started exporting each schema from ORCL with exp and importing it into CNTR with imp.
But there is one schema, EXMP, in database ORCL that also exists in the CNTR database, with the same tables and indexes. The data under schema EXMP in ORCL should be added to schema EXMP in CNTR.
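For the overlapping schema, the import can be told to keep the existing objects and just add the rows. A sketch with hypothetical file names:

imp system/password FILE=exmp.dmp FROMUSER=exmp TOUSER=exmp IGNORE=Y

-- or, with Data Pump:
impdp system/password DIRECTORY=dp_dir DUMPFILE=exmp.dmp SCHEMAS=exmp TABLE_EXISTS_ACTION=APPEND

Either way, rows that duplicate existing primary keys will be rejected, so check the import log for constraint violations.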
What are the steps to move OMF files in ASM? I tried the following and was not successful.
RMAN> switch database to copy;
datafile 1 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/system.357.809972853"
datafile 2 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/sysaux.363.809972837"
datafile 3 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs1.365.809972737"
datafile 4 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/users.361.809972859"
datafile 5 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs2.360.809972761"
datafile 6 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs3.359.809972787"
datafile 7 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs4.358.809972811"
RMAN> alter database open resetlogs;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 03/13/2013 16:30:05
ORA-01139: RESETLOGS option only valid after an incomplete database recovery
RMAN> alter database open;
So the switch worked; RESETLOGS says it can't be used there, so I just try a plain OPEN, and it just hangs.
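The copies were taken at an earlier point in time, so after SWITCH DATABASE TO COPY the database needs media recovery to roll the files forward before a normal open; RESETLOGS is only valid after incomplete recovery, which is exactly what ORA-01139 is saying. A sketch of the usual sequence:

RMAN> STARTUP MOUNT;
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;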
We have a PL/SQL procedure that reads a *.txt file using the UTL_FILE package. The contents of the file are then inserted into a database table.
At the end of the procedure we close the open file using UTL_FILE.FCLOSE.
There is a (non-Oracle) program that attempts to move the file to a new location after it has been read into Oracle. The problem is that the application cannot move the file because the file is locked; a message displays that the file is open and cannot be moved to a new location.
Is there anything else that we might be missing besides the UTL_FILE.FCLOSE?
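One thing worth checking: if the procedure raises an exception before reaching FCLOSE, the OS file handle stays open for the life of the session, which would produce exactly this lock. A sketch that closes the handle on every path, with a hypothetical directory object and file name:

DECLARE
  f    UTL_FILE.FILE_TYPE;
  line VARCHAR2(32767);
BEGIN
  f := UTL_FILE.FOPEN('DATA_DIR', 'input.txt', 'R');
  LOOP
    UTL_FILE.GET_LINE(f, line);   -- raises NO_DATA_FOUND at end of file
    NULL;                         -- insert the line into the table here
  END LOOP;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    UTL_FILE.FCLOSE(f);           -- normal end of file
  WHEN OTHERS THEN
    IF UTL_FILE.IS_OPEN(f) THEN
      UTL_FILE.FCLOSE(f);         -- release the handle on any error
    END IF;
    RAISE;
END;
/

UTL_FILE.FCLOSE_ALL is a blunter fallback, and note that the session (or pooled connection) holding the handle must actually close it, or end, before the OS will release the file.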