I got the following error while gathering stats at the schema level.
SQL> EXEC dbms_stats.gather_schema_stats(ownname=>'KDB', estimate_percent=>dbms_stats.auto_sample_size,cascade=>TRUE, force=>TRUE);
BEGIN dbms_stats.gather_schema_stats(ownname=>'MFDB', estimate_percent=>dbms_stats.auto_sample_size,cascade=>TRUE, force=>TRUE); END;
*
ERROR at line 1:
ORA-20003: Specified bug number (9196440) does not exist
ORA-06512: at "SYS.DBMS_STATS", line 15342
ORA-06512: at "SYS.DBMS_STATS", line 15688
ORA-06512: at "SYS.DBMS_STATS", line 15766
ORA-06512: at "SYS.DBMS_STATS", line 15725
ORA-06512: at line 1
BEGIN dbms_stats.gather_schema_stats(ownname=>'CDOMIG_DATA', estimate_percent=>dbms_stats.auto_sample_size,cascade=>TRUE,DEGREE=>10); END;
*
ERROR at line 1:
ORA-20003: Specified bug number (9196440) does not exist
ORA-06512: at "SYS.DBMS_STATS", line 15342
ORA-06512: at "SYS.DBMS_STATS", line 15688
ORA-06512: at "SYS.DBMS_STATS", line 15766
ORA-06512: at "SYS.DBMS_STATS", line 15725
ORA-06512: at line 1
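The ORA-20003 here appears to come from DBMS_STATS referencing a fix-control bug number that the kernel does not recognize. As a first diagnostic step (a sketch only, using the bug number from the error above), you can check whether that fix control exists in your database at all:

-- If this returns no rows, the kernel does not know fix 9196440,
-- which matches the ORA-20003 above.
SELECT bugno, value, description
FROM   v$system_fix_control
WHERE  bugno = 9196440;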
While a stats gather job was running for the table, I unknowingly deleted the old stats using EXEC DBMS_STATS.DELETE_TABLE_STATS. Will this affect the stats gather job currently running for the table, and will my stats be gathered successfully?
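For reference, a minimal sketch of the two calls involved, with hypothetical owner/table names; the delete removes the dictionary stats immediately, while a gather publishes its own fresh stats only when it completes:

-- Hypothetical names; replace with your own schema and table.
EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'KDB', tabname => 'MY_TABLE');
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'KDB', tabname => 'MY_TABLE', cascade => TRUE);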
I have doubts about gathering stats on a table. I analyzed one table and it took 2 hours the first time. When I analyzed the same table a week later, it completed within 45 minutes. I don't know the exact reason it completed so much sooner. Is there any specific reason?
I run the stats gather job using the script below, but the LAST_ANALYZED column is not updated for the table. Why was it not updated? My DB version is 11.1.0.7.0.
I load a table through SQL*Loader, which takes nearly 14 minutes for 8-9 million records. Once the load completes, I run ANALYZE TABLE ... COMPUTE STATISTICS to gather stats, and that takes nearly 15 minutes. Is there any way to reduce the stats timing? The stats collection command runs from another schema, not from the one where the table resides.
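ANALYZE ... COMPUTE STATISTICS reads the entire table; a commonly faster route is DBMS_STATS with automatic sampling and parallelism, which also works when run from another schema given the right privileges. A sketch with placeholder names and a degree you would tune to your hardware:

BEGIN
  -- Sampled, parallel gather; usually much faster than ANALYZE ... COMPUTE.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'LOAD_SCHEMA',                  -- hypothetical owner
    tabname          => 'BIG_TABLE',                    -- hypothetical table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree           => 4,
    cascade          => TRUE);
END;
/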
I have used the above to get a copy of schema stats and gather new stats for specific tables into a STATS TABLE in my personal schema. What I want to do now is use this stats table to generate plans for queries where I believe stats are off. Is it even possible? To be clear, I do not want to import stats because this replaces the stats currently there. I just want to point the CBO to my stats table for generating plans.
I hoped there was a session parameter I could set to tell Oracle to use my stats table when generating plans, or an EXPLAIN PLAN clause I could use, or a DBMS_XPLAN parameter I could provide that would tell these tools to use my stats table when generating a plan, or even some way to tell AUTOTRACE. But I have found none of these.
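As far as I know, the optimizer reads only dictionary stats, so there is no parameter that points it at a user stats table. A common workaround is a temporary swap: back up the current dictionary stats, import the candidate stats, explain the query, then restore. A sketch with hypothetical schema, table, and stats-table names:

-- 1. Back up the current dictionary stats for the table.
EXEC DBMS_STATS.CREATE_STAT_TABLE('ME', 'STATS_BACKUP');
EXEC DBMS_STATS.EXPORT_TABLE_STATS('APP', 'ORDERS', stattab => 'STATS_BACKUP', statown => 'ME');

-- 2. Import the candidate stats from my stats table into the dictionary.
EXEC DBMS_STATS.IMPORT_TABLE_STATS('APP', 'ORDERS', stattab => 'MY_STATS', statown => 'ME');

-- 3. Generate the plan under the candidate stats, then restore the originals.
EXPLAIN PLAN FOR SELECT ...;   -- the query under test
EXEC DBMS_STATS.IMPORT_TABLE_STATS('APP', 'ORDERS', stattab => 'STATS_BACKUP', statown => 'ME');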
I have a large OLTP database, and we copy table stats among subpartitions to save load on the system. While copying default subpartition stats, I see the following error:
SQL> exec DBMS_STATS.COPY_TABLE_STATS('cusms','STATUS_HIST','P_VDEF_10_2012_S100','P_VDEF13_10_2012_S100',force=>true);
BEGIN DBMS_STATS.COPY_TABLE_STATS('cusms','STATUS_HIST','P_VDEF_10_2012_S100','P_VDEF13_10_2012_S100',force=>true); END;
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
I am trying to refresh the validation database from an old production backup, per our requirement. I have included the RMAN script I executed, the output error message, and the RMAN configuration settings. In the next post I'll post the currently running, successful RMAN script for your reference.
rman TARGET sys/passwd@Production CATALOG rman_pmxp/rman@catlog AUXILIARY sys/passwd@validation

RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 90 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'H:\backups\DB_Sunday_Backup\Prod_%d_%F.rman.ctl';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO '%F';
...
I have to migrate the current production database to test on 10.2.0.4 on Windows. Is there any non-export way to upgrade 9i to 10g?
I have the following steps:
1) ALTER DATABASE BACKUP CONTROLFILE TO TRACE
2) Shut down the Oracle 9i database on server A
3) Copy the data files, control file, redo logs, and other files to new server B
4) Edit the control file backup with the new locations of bdump, udump, log files, and data files
5) Use ORADIM: ORADIM -NEW -SID sid [-INTPWD password] -MAXUSERS users -STARTMODE AUTO -PFILE ORACLE_HOME\DATABASE\INITsid.ORA
6) Start the database in upgrade mode
7) Run catpat.sql and util102.sql (see the sketch below)
8) Take a backup
9) Open the database for users
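For steps 6 and 7, a sketch of the commands, assuming the standard 10.2 upgrade scripts (utlu102i.sql for the pre-upgrade check and catupgrd.sql for the catalog upgrade, which I take "catpat.sql and util102.sql" to mean):

SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/utlu102i.sql   -- pre-upgrade information tool
SQL> @?/rdbms/admin/catupgrd.sql   -- catalog upgrade to 10.2
SQL> @?/rdbms/admin/utlrp.sql      -- recompile invalid objects afterwards
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP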
We are planning to consolidate our Oracle production DBs onto one server. We are basically a Windows shop. Is it feasible to run 8 production Oracle DBs on one Windows server? None of the DBs are really transaction-intensive: 2 of them are around 300GB, and the others average 40GB.
I can take care of the disk slicing so Oracle does not run into an I/O bottleneck. We are planning to go for external NAS or SAN storage.
My main concern is processor usage. The processor we are thinking about is 2 x Intel Xeon Quad Core. Will there be a processor bottleneck, or is there a way in Oracle to assign processor usage? (I believe there are not many tweaking options here.)
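If the instances are (or will be) on 11g Release 2, one option along these lines is instance caging: cap each instance's CPU_COUNT and activate a Resource Manager plan so the cap is enforced. A minimal sketch, assuming 11gR2 and illustrative values; it does not apply to older releases:

-- Run in each instance you want to cap (11gR2 instance caging).
ALTER SYSTEM SET cpu_count = 2 SCOPE = BOTH;              -- cap this instance at 2 CPUs
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';  -- an active plan enforces the cage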
I need to use Data Pump for the first time on my production database. Currently on the testing database, when I take a schema-level export there are no errors or warnings in the log file, but when I import it the following ORA warning appears in the import log. I searched on Google; the only fix I found is to recompile the invalid objects. How can I avoid these warnings in the log file?
"ORA-39082: Object type ALTER_PROCEDURE:"QUANTISV4"."P_CTM_ABN_INVST_EQUITY" created with compilation warnings"
We have quite a number of sessions in database MES (production) coming from another machine.
From v$session, the program is oracle@WID27 (TNS V1-V3). This WID27 (hostname) hosts quite a number of development databases. We have to trace which jobs are actually triggering this, as WID27 is not supposed to connect to production databases.
How can we tell whether the incoming sessions are from a dblink or from the machine itself?
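As a starting point, the identifying columns in v$session are worth pulling for those sessions; a PROGRAM of oracle@<host> usually indicates a server-side Oracle process on that host (a database link or a job) rather than an end-user tool. A sketch, assuming the hostname appears in MACHINE:

SELECT sid, serial#, username, osuser, machine, program, module, logon_time
FROM   v$session
WHERE  machine LIKE '%WID27%'
ORDER  BY logon_time;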
Unless I'm misunderstanding this error, it means that it can't find the ACTION part that is attached to the WHAT part of this Dynamic Action? The Dynamic Action does work when the application is run (in development); also, there are 3 others similar to this one. The export was created by the export utility in Application Builder.
If I export only the page and import that into production, the import succeeds and the page runs correctly. The error happens only when I try to import the entire application. There are many other changes, which is why I was trying to do an application export/import instead of individual pages.
I will have to carry out an Oracle 9 database refresh from the production server to the integration server. The 5 biggest schemas must be exported and imported; they constitute 97% of the space used in the database. This is a very big database, so I would like to be sure that everything goes smoothly. That is why I want to ask you some questions.
Have you got any clues for me before I start with exp/imp? For my part, I can tell you that I will have to exp/imp schema by schema, because there is little disk space for a dump on both production and integration. The first things I thought of are dependencies between schemas that are exported and those that are not, and also between the schemas that are exported/imported one by one.
This is the procedure I plan:
For every schema that is to be refreshed:
1. Export the schema with ROWS=N CONSTRAINTS=Y
2. Export the schema with ROWS=Y CONSTRAINTS=N
3. Import the schema from step one
4. Disable all the foreign key constraints using ALTER TABLE ... DISABLE CONSTRAINT
5. Import the schema with rows
Then re-enable the constraints with ALTER TABLE ... ENABLE CONSTRAINT.
With the above procedure I think I will avoid problems with dependencies between the schemas exported/imported one by one. But my concern is whether there are any dependencies between those schemas and the schemas that are not exported. Is there a way to check that before the refresh?
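One way to check, sketched below with placeholder schema names, is to query dba_dependencies for objects in the refreshed schemas that reference objects outside that set:

SELECT owner, name, type, referenced_owner, referenced_name, referenced_type
FROM   dba_dependencies
WHERE  owner IN ('SCHEMA1', 'SCHEMA2')   -- the schemas being refreshed
AND    referenced_owner NOT IN ('SCHEMA1', 'SCHEMA2', 'SYS', 'PUBLIC')
ORDER  BY owner, name;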
I have to compare my SVN source code (packages, views, etc.) with the production code in the database (views, etc.). Actually, we are not sure that what we have in SVN is the final version of the production code: the objects exist in the production database, but we don't have the latest scripts for them. We have to deploy the SVN code on the UNIX box.
So the comparison is between the OS files and the database objects.
I thought I would extract scripts for all the packages, views, etc. from the production database using DBMS_METADATA or some other utility, save the code in OS files, and then compare each SVN file with the corresponding OS file manually using a comparison tool (e.g., Toad provides one).
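A sketch of the extraction step with DBMS_METADATA, spooling the stored DDL to a file (the owner and object type are placeholders):

SET LONG 1000000 PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON
SPOOL packages.sql

-- Pull the stored DDL for every package in the schema.
SELECT DBMS_METADATA.GET_DDL('PACKAGE', object_name, owner)
FROM   dba_objects
WHERE  owner = 'APP_OWNER'          -- hypothetical schema
AND    object_type = 'PACKAGE';

SPOOL OFF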
Why do we gather table statistics manually? Is it because of database performance?
I know that in Oracle Database 10g, Automatic Optimizer Statistics Collection reduces the likelihood of poorly performing SQL statements due to stale or invalid statistics and enhances SQL execution performance by providing optimal input to the query optimizer.
The automatic job gathers statistics for a table once roughly 10% of its rows have changed (the table is then considered stale).
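The change counts that feed this staleness test are visible in dba_tab_modifications; a sketch (the owner is a placeholder, and the in-memory counters may need flushing first):

-- Flush in-memory DML monitoring counters to the dictionary.
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_owner, table_name, inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_owner = 'APP_OWNER';   -- hypothetical schema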
I have a problem/misunderstanding with Oracle's gather schema stats utility. Below I'm posting my attempt and its output. My main question is why the LAST_ANALYZED column of dba_tables is not updated after gathering fresh schema-level statistics.
SQL> select OWNER, TABLE_NAME, NUM_ROWS, AVG_ROW_LEN, BLOCKS, SAMPLE_SIZE, LAST_ANALYZED
     from dba_tables where owner = 'STO' and rownum <= 10 order by LAST_ANALYZED;
I am connected to a remote database as a user named 'TEST'; this user has DBA privileges. I am not able to gather statistics either for the TEST schema or for any table that exists in this schema.
SQL> EXEC dbms_stats.gather_schema_stats('TEST', cascade=>TRUE);
BEGIN dbms_stats.gather_schema_stats('TEST', cascade=>TRUE); END;
*
ERROR at line 1:
ORA-44004: invalid qualified SQL name
ORA-06512: at "SYS.DBMS_STATS", line 13210
ORA-06512: at "SYS.DBMS_STATS", line 13556
ORA-06512: at "SYS.DBMS_STATS", line 13634
ORA-06512: at "SYS.DBMS_STATS", line 13593
ORA-06512: at line 1
We are experiencing TX row lock wait times of several hours. There is no blocking session, and it seems that the application hangs. What is funny is that when we run gather_stats on the tables, those TX row lock waits are released.