Server Administration :: Deleting Unwanted SYS Schema Objects?
May 25, 2011
I was importing one schema from Oracle 10g to 11g using traditional import. I imported as the SYS user, so all the objects were created in the SYS schema. How can I remove these objects and retain only the default SYS objects?
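A minimal sketch of one way to find the accidental objects, on the assumption that they can be distinguished by a CREATED date later than the database's own creation date; review the list by hand before dropping anything, since genuine SYS objects must not be touched:

-- List SYS objects created after the database itself; objects that arrived
-- via the accidental import usually stand out in this list.
SELECT object_type, object_name, created
FROM   dba_objects
WHERE  owner = 'SYS'
AND    created > (SELECT created FROM v$database)
ORDER  BY object_type, object_name;

-- Generate candidate DROP statements from the same list (review first):
SELECT 'DROP ' || object_type || ' SYS."' || object_name || '";'
FROM   dba_objects
WHERE  owner = 'SYS'
AND    created > (SELECT created FROM v$database)
AND    object_type IN ('TABLE', 'VIEW', 'SEQUENCE', 'PROCEDURE',
                       'FUNCTION', 'PACKAGE', 'SYNONYM');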
I am looking at a performance issue at the moment and trying to replicate it on a test system. I am initially looking at the impact of up-to-date statistics on the main schema's objects.
For this I wanted to:
first run the batch with whatever stats were present in the database; flashback the DB to before the batch; gather stats; re-run the batch with updated stats and compare results.
However, I inadvertently ran the stats job before running the load the first time! I have the SCN from when the environment was set up like production (i.e. before the stats were run), so am I correct in saying that if I flash back to this point then the stats will be "old" and I can just run the batch then? I know I can verify this after I flash back the database by looking at LAST_ANALYZED on the tables etc., but it would be good to know beforehand as it's a 12-hour batch.
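For reference, a minimal sketch of the flashback itself, assuming flashback logging is enabled; 123456789 stands in for the captured SCN and MAIN_SCHEMA is a placeholder for the real schema owner:

-- Flash the database back to the captured SCN (as SYSDBA, from MOUNT).
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO SCN 123456789;
ALTER DATABASE OPEN RESETLOGS;

-- Confirm the statistics really are the "old" ones before the 12-hour batch.
SELECT table_name, last_analyzed
FROM   dba_tables
WHERE  owner = 'MAIN_SCHEMA'
ORDER  BY last_analyzed;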
We transferred our Oracle 11.1.0.7 database from Windows 2003 Enterprise Edition 32-bit to Windows 2008 Server Enterprise Edition 64-bit. The database is working fine, but we have 53 uncompiled objects related to OLAPSYS and PUBLIC, as follows:
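The object list is not reproduced here, but as a minimal sketch (assuming the invalid objects really are limited to OLAPSYS and PUBLIC), they can be listed and recompiled like this:

-- List the invalid objects left over after the platform move.
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID'
AND    owner IN ('OLAPSYS', 'PUBLIC');

-- Recompile everything invalid (as SYSDBA); @?/rdbms/admin/utlrp.sql does the same job.
EXEC UTL_RECOMP.recomp_serial();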
I have a table, a superclass and several derived classes.
The table definition looks like this:
/*-------------------------------------------------------------------------------------*/
/* table to store ObjectData for the current session/transaction                        */
/*-------------------------------------------------------------------------------------*/
create table SessionObjectRegistry(
    oid        number,
    transactid varchar2(200),
    sessionid  number,
    hash       number,
[Code]....
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkbnftn2], [], [], [], [], [], [], [], [], [], [], []

The DB is Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production, With the Partitioning, OLAP, Data Mining and Real Application Testing options. The error occurs on 11.2.0.3.0 - 64bit as well.
As a workaround I'm not deleting the rows, but setting the stored objects to NULL. At the end of the routine I can truncate the table with the objects, since truncating works. I'm not the administrator of the database, so I may not change any settings.
Unfortunately, I wanted to be able to share objects between sessions, which is still possible with this workaround (simply don't truncate at the end), but it leaves a lot of junk lying around.
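A minimal sketch of the workaround described above; the obj column name and the :sid bind are hypothetical placeholders for the real object column and session id:

-- Instead of DELETE (which raises the ORA-00600), blank out the stored object.
UPDATE SessionObjectRegistry
SET    obj = NULL
WHERE  sessionid = :sid;

-- At the end of the routine, truncation still works and removes the leftovers.
TRUNCATE TABLE SessionObjectRegistry;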
I have the following query. After performing a schema refresh using export and import, I need to compare the record counts of the database objects:
-- SELECT 'TRUNCATE TABLE '||OWNER||'.'||TABLE_NAME||' ;' FROM DBA_TABLES WHERE OWNER='PRICING' order by TABLE_NAME;
SQL> SELECT 'select count(*) from '||OWNER||'.'||TABLE_NAME||' ;'
     FROM DBA_TABLES WHERE OWNER='PRICING' order by TABLE_NAME;
The expected output was to display each table name in the schema along with its corresponding number of records, but it isn't showing correctly.
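One way around the two-step "generate a script, then run it" approach is to print each table with its row count directly; a minimal PL/SQL sketch:

-- Print each PRICING table with its row count, instead of generating a
-- script of SELECT COUNT(*) statements.
SET SERVEROUTPUT ON
DECLARE
  l_cnt NUMBER;
BEGIN
  FOR t IN (SELECT owner, table_name
            FROM   dba_tables
            WHERE  owner = 'PRICING'
            ORDER  BY table_name)
  LOOP
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM "' || t.owner || '"."' || t.table_name || '"'
      INTO l_cnt;
    DBMS_OUTPUT.put_line(RPAD(t.table_name, 32) || l_cnt);
  END LOOP;
END;
/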
I get an import error while trying to import objects into a schema.
Export file created by EXPORT:V11.02.00 via direct path
import done in US7ASCII character set and UTF8 NCHAR character set
import server uses UTF8 character set (possible charset conversion)
. importing DEMO's objects into TEST
. . importing table "TAB1"
IMP-00058: ORACLE error 1950 encountered
ORA-01950: no privileges on tablespace 'USERS'
I understand we need to grant the user quota on that tablespace, as below.
ALTER USER <user> QUOTA UNLIMITED ON <tablespace_name>;
My other question is: can we GRANT QUOTA UNLIMITED ON <tablespace_name> TO the user instead?
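For reference, a minimal sketch assuming the user is DEMO and the tablespace is USERS; note that there is no GRANT QUOTA statement, and the GRANT form below removes the limit on every tablespace rather than just one:

-- Per-tablespace quota (what the IMP-00058 / ORA-01950 above needs):
ALTER USER demo QUOTA UNLIMITED ON users;

-- The GRANT equivalent is broader: it bypasses quotas on all tablespaces.
GRANT UNLIMITED TABLESPACE TO demo;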
I imported the HR schema from an export dump. I can find all the objects of schema HR in the imported database, but I got an error for a PLAN_TABLE which is assigned to the USERS tablespace in the source database.
Here is the error I got during import:
[oracle@localhost mom]$ imp file=exp_schema_ref.dmp log=imp.ref.log fromuser=hr touser=hr commit=y

Import: Release 10.2.0.1.0 - Production on Mon Jul 26 00:03:57 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Username: sys/sys as sysdba
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via direct path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing HR's objects into HR
. . importing table "COUNTRIES" 25 rows imported
. . importing table "DEPARTMENTS" 27 rows imported
. . importing table "EMPLOYEES" 107 rows imported
[code]....
I have a tablespace with a 20 GB datafile. By mistake I deleted the datafile, and when I then try to drop the tablespace from EM I get an error saying the datafile is not present.
After deleting the file physically I checked the space with df -ah at the OS level, and it had not been reclaimed. When I then try to drop the tablespace from EM, it gives me the above error, which might be because the tablespace still exists. How can I reclaim the space?
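A minimal sketch of one way out, assuming the lost file was '/u01/oradata/ORCL/mydata01.dbf' and the tablespace is MYDATA (both names are hypothetical placeholders):

-- Tell Oracle to stop expecting the physically deleted file, then drop the
-- tablespace. OS space is only released once no process still holds the
-- deleted file open (typically after an instance restart).
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/mydata01.dbf' OFFLINE DROP;
DROP TABLESPACE mydata INCLUDING CONTENTS;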
We have a single master schema that many developers access, and all of them share the same password.
Now I would like to trace all the changes made by each user, so I created individual users for everyone and granted them permission to access that schema. Is it possible to audit the changes made by each user on that particular schema?
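Yes, standard auditing covers this; a minimal sketch, assuming AUDIT_TRAIL is set to DB and the master schema is called MASTER (hypothetical name):

-- Audit DML and table DDL issued by any user, recorded per access.
AUDIT INSERT TABLE, UPDATE TABLE, DELETE TABLE, ALTER TABLE BY ACCESS;

-- Review who changed what in the master schema.
SELECT username, obj_name, action_name, timestamp
FROM   dba_audit_trail
WHERE  owner = 'MASTER'
ORDER  BY timestamp;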
I am getting the below error while connecting to SQL*Plus.
SQL*Plus: Release 10.2.0.5.0 - Production on Tue Mar 22 12:47:48 2011
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

ERROR:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'DBMS_OUTPUT.DISABLE' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
[code]....
I executed the scripts below, but it didn't resolve the issue; instead, some of the SYS objects and CATPROC became invalid...
dbmsotpt.sql dbmsapin.sql
Even after re-executing catproc.sql and utlrp.sql, the SYS objects and the CATPROC component status are still INVALID.
I tried to manually compile the SYS objects, but it didn't work.
OWNER                SUBSTR(OBJECT_NAME,1,40)                 OBJECT_TYPE
-------------------- ---------------------------------------- --------------------
SYS                  DBMS_XPLAN                               PACKAGE BODY
SYS                  AQ$AQ_SRVNTFN_TABLE                      VIEW
SYS                  DBMS_LOGREP_DEF_PROC                     PACKAGE
SYS                  DBMS_LOGREP_DEF_PROC                     PACKAGE BODY
[code]....
How do I go about making the SYS objects and CATPROC valid and resolve the error I mentioned above?
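A minimal sketch of the usual sequence as SYSDBA; it repeats the catalog/catproc/utlrp run already tried and adds the DBA_REGISTRY check that shows which component is still invalid:

-- Re-run the catalog scripts and recompile, then check component status.
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/rdbms/admin/utlrp.sql

SELECT comp_id, comp_name, status FROM dba_registry;

-- Anything still invalid under SYS after the recompile:
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  owner = 'SYS'
AND    status = 'INVALID';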
I am getting the following errors when I try to drop a tablespace.
I already did the following.
a) Took the tablespace and its datafiles offline.
b) I have purged dba_recyclebin.
SQL> drop tablespace db_maintenance including contents and datafiles;
drop tablespace db_maintenance including contents and datafiles
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-38301: can not perform DDL/DML over objects in Recycle Bin
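A minimal sketch of a retry, on the assumption that something is still sitting in a recycle bin; disabling the recycle bin at session level is available from 10gR2 onwards:

-- Purge again as SYSDBA, switch the recycle bin off for this session,
-- then retry the drop.
PURGE DBA_RECYCLEBIN;
ALTER SESSION SET recyclebin = OFF;
DROP TABLESPACE db_maintenance INCLUDING CONTENTS AND DATAFILES;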
One of my friends is facing a peculiar problem where objects are becoming invalid during execution. I suspect it is happening because they change the system date during their testing (time travel), which can create conflicting LAST_DDL_TIME values on objects that have dependencies.
Consider a scenario
[1] The system date is 10-06-2012; there are a total of 10 objects, all with status 'VALID'.
[2] The system date is changed to 10-07-2012. Now, out of the 10, only 5 objects are compiled. During execution, ORA-04065, ORA-06508 and ORA-06512 are observed.
[3] The system date is brought back to 10-06-2012. Again, during execution, ORA-04065, ORA-06508 and ORA-06512 are observed.
Suppose that in step 2 the objects are compiled whereas their synonyms were compiled in step 1; the LAST_DDL_TIME for the objects will then be later than that of their synonyms...
Does the database validate LAST_DDL_TIME for dependent objects during execution and then auto-compile or invalidate them?
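A minimal diagnostic sketch for spotting the "time travel" inversions: it lists objects whose LAST_DDL_TIME is earlier than that of something they depend on (SCOTT is a hypothetical owner):

-- Objects that look "older" than the objects they reference.
SELECT d.name, d.type,
       o1.last_ddl_time AS obj_ddl_time,
       d.referenced_name, d.referenced_type,
       o2.last_ddl_time AS ref_ddl_time
FROM   dba_dependencies d
JOIN   dba_objects o1 ON o1.owner = d.owner
                     AND o1.object_name = d.name
                     AND o1.object_type = d.type
JOIN   dba_objects o2 ON o2.owner = d.referenced_owner
                     AND o2.object_name = d.referenced_name
                     AND o2.object_type = d.referenced_type
WHERE  d.owner = 'SCOTT'
AND    o1.last_ddl_time < o2.last_ddl_time;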
We are doing a database upgrade from 10gR2 to 11gR2. While doing pre-checks before the upgrade, we found a few duplicate objects in the SYS and SYSTEM schemas.
SQL> column object_name format a30
SQL> select object_name, object_type
     from dba_objects
     where object_name||object_type in
           (select object_name||object_type
            from dba_objects
            where owner = 'SYS')
[code]...
As per metalink note "How to Clean Up Duplicate Objects Owned by SYS and SYSTEM Schema [ID 1030426.6]"..
I am going to drop only the objects below:
DROP TABLE SYS.HELP;
DROP INDEX SYS.HELP_TOPIC_SEQ;
DROP TABLE SYSTEM.PLAN_TABLE;
and ignore the objects below:
DROP TABLE SYSTEM.AQ$_SCHEDULES;
DROP INDEX SYSTEM.AQ$_SCHEDULES_PRIMARY;
DROP PACKAGE SYSTEM.DBMS_REPCAT_AUTH;
DROP PACKAGE BODY SYSTEM.DBMS_REPCAT_AUTH;
I want to compare two users on different databases. Actually there are two users: user A on database A and user B on database B, and they have the same tables. Every time a table or other object is created in user A (database A), I take the table name, procedure, or other object from USER_OBJECTS based on the DDL creation date and then recreate the same on user B (database B). Is there a way to find out not only which tables were created, but also whether any column was added or any procedure or other object was changed? Is there a way to generate the scripts based on a list of objects selected from USER_OBJECTS? All I want is:
a) Find the list of objects added, along with their creation scripts.
b) Find the list of objects modified, along with their creation scripts.
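A minimal sketch of the usual building blocks: USER_OBJECTS (or DBA_OBJECTS) filtered on CREATED/LAST_DDL_TIME to find what changed since the last sync, and DBMS_METADATA to pull the creation script. The date and the EMPLOYEES table below are hypothetical:

-- Objects created or altered in user A since the last refresh date.
SELECT object_type, object_name, created, last_ddl_time
FROM   user_objects
WHERE  created       >= DATE '2012-06-01'
   OR  last_ddl_time >= DATE '2012-06-01';

-- Pull the DDL for one of them, ready to run on user B / database B.
SELECT DBMS_METADATA.get_ddl('TABLE', 'EMPLOYEES') FROM dual;

Column additions show up as an updated LAST_DDL_TIME on the table; a comparison of DBA_TAB_COLUMNS over a database link would pin down exactly which columns differ.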
I have a problem: some user modified the objects and now the objects have become invalid. How can I trace the IP address, operating system username, and service of whoever modified the objects?
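If auditing was not already enabled there is little to trace retroactively, but going forward a DDL trigger can record exactly this; a minimal sketch, where the log table and trigger names are hypothetical:

-- Log table plus a database-level DDL trigger that records who altered what,
-- including client IP, OS user and terminal.
CREATE TABLE ddl_audit_log (
  event_time   DATE,
  db_user      VARCHAR2(30),
  os_user      VARCHAR2(100),
  ip_address   VARCHAR2(45),
  terminal     VARCHAR2(100),
  ddl_event    VARCHAR2(30),
  object_name  VARCHAR2(128)
);

CREATE OR REPLACE TRIGGER trg_ddl_audit
AFTER DDL ON DATABASE
BEGIN
  INSERT INTO ddl_audit_log
  VALUES (SYSDATE,
          ora_login_user,
          SYS_CONTEXT('USERENV', 'OS_USER'),
          SYS_CONTEXT('USERENV', 'IP_ADDRESS'),
          SYS_CONTEXT('USERENV', 'TERMINAL'),
          ora_sysevent,
          ora_dict_obj_name);
END;
/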
Using SQLLDR: I am looking for a control-file solution to move past or bypass extra data fields which are not on the destination table. Basically, if you have 8 tab-delimited fields (terminated by ' ') on a data record but only need to load 5 of the values from the delimited record, is there a way to ignore/bypass the unneeded data? Obviously, one answer would be to massage the data at the OS level and remove the 3 unnecessary fields.
However, my hands are tied by volume, time, and compliance. I am familiar with using 'FILLER' for the reverse scenario, but not where you have more data available on the record than exists in the table.
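FILLER is actually the right tool here as well: declaring the extra input fields as FILLER makes SQL*Loader parse them out of the record without loading them anywhere. A minimal control-file sketch, with hypothetical table, column, and file names:

LOAD DATA
INFILE 'data.txt'
APPEND INTO TABLE target_table
FIELDS TERMINATED BY X'09'
TRAILING NULLCOLS
(
  col1,
  col2,
  skip1  FILLER,   -- unneeded field 1 in the data record
  col3,
  col4,
  skip2  FILLER,   -- unneeded field 2
  col5,
  skip3  FILLER    -- unneeded field 3
)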
I was wondering whether we ever think about the disadvantages, or whether we always just love the concept and use them. So recently I started to think about what the disadvantages of schema objects might be.
For example, packages have a lot of advantages, such as information hiding, the scope of variables declared in the spec, avoiding the dependency crisis, etc. Then I read somewhere that if a package is referenced, the entire package is loaded into memory.
So my question is: what if the package is very large, would it fit in memory without any leaks? I am assuming it works like a least-recently-used (LRU) mechanism. I have a question for you about the disadvantages of:
1. Packages
2. Cursors
3. Collections, including associative arrays, VARRAYs, and nested tables
4. Materialized views
5. Views
6. Indexes
7. Tables with more than 100 columns (do we need to split them, or keep everything in one table?)
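On the "entire package is loaded into memory" point above: the shared pool does age out unpinned objects with an LRU-style mechanism, and a minimal sketch for checking how much memory a loaded package actually takes (and whether it has been pinned) is:

-- Shared-pool footprint of loaded packages; KEPT = 'YES' means the object
-- has been pinned with DBMS_SHARED_POOL.KEEP and will not be aged out.
SELECT owner, name, type, sharable_mem, loads, kept
FROM   v$db_object_cache
WHERE  type IN ('PACKAGE', 'PACKAGE BODY')
ORDER  BY sharable_mem DESC;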