When we upgrade from 10g to 11g, the stats table is upgraded with EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('OWNER','TABLE');. But when we import from 10g to 11g, do we need to upgrade the stats table?
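For reference, a hedged sketch of the same call run against a user stats table brought over by import; the owner and stats-table names below are placeholders:

EXEC DBMS_STATS.UPGRADE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MY_STATTAB');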
I have used the above to get a copy of the schema stats, and to gather new stats for specific tables, into a stats table in my personal schema. What I want to do now is use this stats table to generate plans for queries where I believe the stats are off. Is that even possible? To be clear, I do not want to import the stats, because that replaces the stats currently there. I just want to point the CBO at my stats table when generating plans.
I had hoped there was a session parameter I could set to tell Oracle to use my stats table when generating plans, or an EXPLAIN PLAN clause I could use, or a DBMS_XPLAN parameter I could provide that would tell these tools to use my stats table when generating a plan, or even some way to tell autotrace. But I have found none of these.
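For context, a minimal sketch of the setup described above, i.e. copying the application schema's current statistics into a private stats table; all schema and table names here are placeholders:

BEGIN
  -- one-time creation of the user stats table in my own schema
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'MYUSER', stattab => 'MY_STATS');
  -- copy the application schema's current statistics into it
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APPOWNER',
                                 stattab => 'MY_STATS',
                                 statown => 'MYUSER');
END;
/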
While a stats gather was running for the table, I unknowingly deleted the old stats using EXEC DBMS_STATS.DELETE_TABLE_STATS. Will this affect the stats gather job currently running for the table, and will my stats still be gathered successfully?
Along with the existing RMAN backups, we do exports of our DB using an OS user and Oracle Wallet. On the DBs we have upgraded, I checked the Data Pump directory:
Select * from dba_directories; (there are other commands to get this info as well).
I captured screenshots from the DBUA upgrades, but did not see an option to change this information. Is there a way to feed this information to the install going forward, i.e., ./DBUA -silent?
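In case it helps, a small sketch of carrying the directory definition forward manually after the upgrade; the path below is a placeholder:

SELECT directory_name, directory_path FROM dba_directories;
CREATE OR REPLACE DIRECTORY data_pump_dir AS '/u01/app/oracle/admin/ORCL/dpdump';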
Also, has anyone tracked the percentage of storage increase from 10.2/11.1 to 11.2?
I have doubts about gathering stats on a table. I analyzed one table and it took 2 hours the first time. When I analyzed the same table a week later, it completed within 45 minutes. I don't know why it completed so much faster. Is there a specific reason?
I am trying to generate some statistics on tables connected by a dblink. I know that with Oracle you have table_columns, which you can reference to pull some stats from.
I am trying to get the column count and record count for each table connected by a dblink. I have tried the queries below to see if I could see any DB properties (some just to try something different):
select * from "table_owner".table_column@dblink;
select * from "status"@dblink;
select /*+ DRIVING_SITE(a) */ count(*) from @dblink a;
What is the best method of finding this out without spending a lot of time? I have over 30 tables with large record sets, and would love to learn a faster approach than pulling a sample table and doing a manual count and query for each table to count the rows.
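A hedged sketch (the dblink name and owner are taken from the snippets above and are placeholders): the remote dictionary gives column counts directly, and if approximate row counts are enough, ALL_TABLES@dblink already carries NUM_ROWS from the remote statistics; otherwise a loop like the one below issues one COUNT(*) per remote table.

SET SERVEROUTPUT ON
DECLARE
  l_cols NUMBER;
  l_rows NUMBER;
BEGIN
  FOR t IN (SELECT owner, table_name
              FROM all_tables@dblink
             WHERE owner = 'TABLE_OWNER')   -- placeholder owner
  LOOP
    -- column count from the remote dictionary
    SELECT COUNT(*)
      INTO l_cols
      FROM all_tab_columns@dblink
     WHERE owner = t.owner
       AND table_name = t.table_name;

    -- exact row count issued against the remote table
    EXECUTE IMMEDIATE
      'select count(*) from "' || t.owner || '"."' || t.table_name || '"@dblink'
      INTO l_rows;

    DBMS_OUTPUT.PUT_LINE(t.table_name || ': ' || l_cols || ' columns, ' || l_rows || ' rows');
  END LOOP;
END;
/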
We are running 11g (11.2.0.3). We have a "working table" that is empty at the beginning of the day. Then we start adding rows (inserts) with a key column called STATE with a value of 100. At the same time, there are other apps that pick up data in state 100, process it, and change the state to 200 or 300. There is also another app that picks up data in state 200, processes it, and changes the state to 300 or 400.
So in summary, the table starts the day empty, then all the rows are in state 100, they slowly move to different states (200, 300, etc.), and by the end of the day they are all in state 400.
My question is what would be the best way to collect stats on this table?
I was thinking of creating an hourly job to collect stats on that table: exec dbms_stats.gather_table_stats ( ownname => 'SCOTT', tabname => 'WORK_T...
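A minimal sketch of the hourly job described above, using DBMS_SCHEDULER; the table name WORK_T is taken from the truncated call and is a placeholder:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GATHER_WORK_T_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_TABLE_STATS(ownname => ''SCOTT'', tabname => ''WORK_T''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;
/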
Is it possible for the DBMS_STATS "LIST STALE" option to show a stale partition but NOT show its table as stale?
I had a scenario where the table itself AND one partition showed as stale. I ran an fnd_stats gather table stats on just that one partition. Once it completed, the partition no longer showed as stale, and neither did the table. So I guess I do not need to run stats on the whole table as well?
If that is the case, when would I need to gather stats on the full partitioned table, if running it on the partitions themselves removes the staleness of the table?
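For what it's worth, a minimal sketch of pulling the stale list programmatically (the schema name is a placeholder); it reports both table- and partition-level entries, which is one way to see whether the table itself is still listed after a partition-level gather:

SET SERVEROUTPUT ON
DECLARE
  l_objs DBMS_STATS.ObjectTab;
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT',
                                 options => 'LIST STALE',
                                 objlist => l_objs);
  FOR i IN 1 .. l_objs.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(l_objs(i).objname || ' / ' ||
                         NVL(l_objs(i).partname, '(table level)'));
  END LOOP;
END;
/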
I am going to upgrade a database from 11.1.0.7 to 11.2.0.3.
1) If COMPATIBLE is set to 10.2.0 in 11.2.0.3, will it work?
2) If COMPATIBLE is set to the maximum level, will it affect our application?
3) Could any code-related problems occur after upgrading, e.g. in PL/SQL code?
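Not an answer to the application question, but a small sketch of how the parameter is typically checked and raised once the upgrade has been verified; the value shown is an assumption:

show parameter compatible
alter system set compatible = '11.2.0' scope=spfile;
-- instance restart required; COMPATIBLE can only be raised, never lowered back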
We are planning to upgrade our database from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit to Oracle Standard Edition 11g. We also have Oracle APEX 3.1 installed on that 10.2.0.1.0 Enterprise Edition database.
Now our plan is to upgrade the database and Oracle APEX to 4.2. Since Oracle Enterprise Edition licensing is very expensive, we thought of buying Standard Edition and upgrading to that version.
Can we upgrade the Oracle database from Enterprise Edition to the latest Standard Edition?
While doing stats collection, does the system take a backup of the current statistics? I think we can specify STATTAB, but does it take a stats backup before overwriting? I got this requirement as part of an upgrade; I have already gone through export_schema_stats and import_schema_stats. I'm just trying all other possible options.
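For what it's worth, a hedged sketch of the two documented mechanisms (all names are placeholders): passing STATTAB/STATOWN to a gather call saves the statistics being replaced into that user stats table first, and since 10g DBMS_STATS also keeps its own history that can be restored without a stattab:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT',
                                tabname => 'EMP',
                                stattab => 'STATS_BACKUP',   -- old stats saved here before overwrite
                                statown => 'SCOTT');
END;
/

EXEC DBMS_STATS.RESTORE_TABLE_STATS('SCOTT', 'EMP', SYSTIMESTAMP - INTERVAL '1' DAY);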
Our UNDO space remains at a high level, 85 to 95 percent. We keep adding database files and it doesn't seem to go down significantly. When we do a backup of the system where we shut the database down, it does go down some, but within a week or so it is back up again.
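A high percent-used figure for an undo tablespace is often just expired/unexpired extents that Oracle keeps allocated for reuse rather than space that needs new files; a quick sketch for checking how much undo is actually active:

SELECT status, ROUND(SUM(bytes)/1024/1024) AS mb
  FROM dba_undo_extents
 GROUP BY status;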
PROMPT CREATE TABLE tst_fetch_vendor_data
CREATE TABLE tst_fetch_vendor_data (
  vendor_data_seq_no     NUMBER NOT NULL,
  study_seq_no           NUMBER NOT NULL,
  vendor_record_seq_no   NUMBER NOT NULL,
  control_column_seq_no  NUMBER NOT NULL,
  resolved_value         VARCHAR2(4000) NULL,
  original_value         VARCHAR2(4000) NULL,
  transaction_user       VARCHAR2(30) NOT NULL,
  ...
It's just a temporary table in which data comes and goes. I am using it in the middle of a process, like below:
--EXECUTE IMMEDIATE 'TRUNCATE TABLE TST_FETCH_VENDOR_DATA DROP STORAGE';
insert /*+ append */ into tst_fetch_vendor_data
  (select * from vendor_data vd
    where vd.control_column_seq_no in
          (select control_column_seq_no from temp_control_column));

dbms_stats.gather_table_stats('EPDSYSREP', 'TST_FETCH_VENDOR_DATA',
                              estimate_percent => 100,
                              method_opt       => 'for all indexed columns size auto',
                              cascade          => TRUE);
Then there is code that uses that table. The table can contain anywhere from 0 to 108,000,000 records. Now my questions are:
1. What sampling size should I select (currently it's 100%)? Can I use dbms_stats.auto_sample_size, and what would the effect be? (See the sketch after this list.)
2. Is dbms_stats a good approach, or should I use dynamic sampling?
3. What about using CTAS instead of loading the data through an insert?
4. What about a PL/SQL table with an index, or a WITH clause query?
5. Do I need to rebuild the index after inserting data into the table?
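Regarding question 1, a minimal sketch of the same gather call using automatic sampling instead of a fixed 100 percent sample, with the owner/table names taken from the snippet above:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'EPDSYSREP',
    tabname          => 'TST_FETCH_VENDOR_DATA',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'for all indexed columns size auto',
    cascade          => TRUE);
END;
/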
I load a table through SQL*Loader, which takes nearly 14 minutes for 8-9 million records. Once the load completes, I run ANALYZE TABLE ... COMPUTE STATISTICS to gather stats, and that takes nearly 15 minutes. Is there any way to reduce the stats timing? The stats collection command runs from a different schema, not the one where the table resides.
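One hedged option, assuming the schema running the job has the needed privileges (the owner/table names below are placeholders): replace ANALYZE ... COMPUTE STATISTICS with DBMS_STATS using automatic sampling and a parallel degree, which is usually considerably faster on tables of this size:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DATA_OWNER',     -- placeholder owner
    tabname          => 'LOADED_TABLE',   -- placeholder table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree           => 4,                -- adjust to available CPU
    cascade          => TRUE);
END;
/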
I have a large OLTP database, and we copy table stats among subpartitions to save load on the system. While copying the default subpartition stats, I see the following error:
SQL> exec DBMS_STATS.COPY_TABLE_STATS('cusms','STATUS_HIST','P_VDEF_10_2012_S100','P_VDEF13_10_2012_S100',force=>true);
BEGIN DBMS_STATS.COPY_TABLE_STATS('cusms','STATUS_HIST','P_VDEF_10_2012_S100','P_VDEF13_10_2012_S100',force=>true); END;

*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
I am trying to come up with a plan for an upgrade that is needed for a server I maintain. It is a Windows 2003 32-bit server running Oracle 10.2.0.3 on old hardware. It also has two obscure 3rd-party applications running on it that directly access the database. These applications are supported by off-site consultants.
My initial plan was to create a Windows 2008 R2 virtual server and install the same version of Oracle, 10.2.0.3. Using RMAN, clone the database to the new server. Have the consultants come in and get the applications working. Once everything in the new environment seems to be working fine, run RMAN again and re-clone the database to pick up all the latest data. Then, at a later time, upgrade the database to 11g 32-bit. Virtually no downtime, and we could spend all the time we needed getting the applications working and testing the new environment.
The plan is dead right off the bat, though, because I realized 10.2.0.3 is not supported on Windows 2008 R2. I really did not want to add an Oracle DB upgrade into the mix at the same time, because there are so many changes from the old environment to the new that I want to break this down into manageable chunks. And I can maybe get by with one day of downtime.
So now I am looking at installing 11g on my virtual server, cloning the database, upgrading the database, and having the consultants come in and get the applications working, all while we are down. If we run into any problems, which you always do, it completely blows the schedule.
I'm planning to upgrade a small database (~150GB) from 10.2.0.3 on Windows 2003 32-bit to 11.2.0.3 RAC on Linux 5.8. The database contains Oracle Spatial too. I'm looking for a suitable method and a link to the document to follow.
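Since this is a cross-platform, cross-version move, one commonly used method (a sketch only, and not necessarily the right choice for Spatial-specific considerations) is a full Data Pump export from the 10.2.0.3 source and import into a freshly created 11.2.0.3 database; the file names and directory below are placeholders:

expdp system full=y directory=DATA_PUMP_DIR dumpfile=full.dmp logfile=full_exp.log
impdp system full=y directory=DATA_PUMP_DIR dumpfile=full.dmp logfile=full_imp.log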
I recently performed an upgrade on a new server from oracle 10gr2 to oracle 11gr2 (11.2.0.3).
I took the RMAN backup from the Oracle 10g server and restored it on the new server where I installed Oracle 11gR2.
On my previous Oracle 10gR2 server I had auditing enabled. After the successful upgrade, when I try to log in with any user except SYS, I receive the following error:
SQL> conn scott/tiger
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "OBJ$EDITION": invalid identifier
ORA-02002: error while writing to audit trail
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "OBJ$EDITION": invalid identifier
I found a workaround by setting the parameter audit_trail=FALSE (the previous value was DB_EXTENDED), but I want auditing to remain enabled as per my requirements.
I'm trying to find reference note IDs for a DB upgrade from 11.2.0.2 to 11.2.0.3.2, but I am only finding Exadata notes, which I don't want; I need them for an E-Business Suite database on Solaris 10 9/10 s10s_u9wos_14a SPARC.
I am trying to upgrade a database from 9.2.0.7 to 10.2.0 on my test server. Here are the steps I am planning to follow.
1. Export the user data from production and import it into the test DB (ABC) using destroy=y and the fromuser/touser parameters (this step is just to refresh the old test data).
2. Install the 10g database and use the 10g DBUA utility to start the upgrade, selecting the 9i database I want to upgrade.
3. Follow the DBUA instructions: create the datafile paths, init.ora file, tnsnames, listener, etc., the same as my existing 9i setup (so I can make a replica of 9i in 10g).
Question: I have another database (xyz) on the test server that is also 9i, and I don't want to upgrade it for now. So do I need to use one listener/tnsnames file for 10g and include the 9i settings in that 10g file?
How do I manage the init, tnsnames, and listener file settings across the two DB versions? Is there anything else I need to do to perform the 9i-to-10g upgrade, or is that all?
Alternatively, I could install just the 10g database and import a full export of the 9i DB. Should I do a full import into 10g, or should I just go with the fromuser/touser option? I believe for that I need to create the datafiles, users, and grants in the new 10g DB first. I am not sure which option is best for performing the 10g upgrade.
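As a sketch of the two classic exp/imp variants mentioned above (a 9i source predates Data Pump), with placeholder schema names, passwords, and file names:

Schema-level export/import with fromuser/touser:
exp system/<password> owner=(SCOTT) file=prod_scott.dmp log=prod_exp.log
imp system/<password> fromuser=SCOTT touser=SCOTT file=prod_scott.dmp log=test_imp.log

Or a full export/import:
exp system/<password> full=y file=full9i.dmp log=full9i_exp.log
imp system/<password> full=y file=full9i.dmp log=full10g_imp.log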
Recently we upgraded our database from Oracle 9.2.0.8 to Oracle 11.1.0.7. But now a new requirement has come in that it should be upgraded to Oracle 11.2.0.1.
Our OS environment - UNIX AIX 5.3
Since it's a development database, we have cold backups of both the Oracle 9i and Oracle 11gR1 datasets. I have a query regarding this: what would be the best way to upgrade, going from Oracle 9i to Oracle 11gR1, or from Oracle 11gR1 to Oracle 11gR2?
My Oracle DB 10.2.0.1 is working fine on Windows XP. Now I want to upgrade to Oracle 10.2.0.4. I have two Oracle CDs:
1) Oracle Database Vault 10g Release 2 (10.2.0.4.0) for Microsoft Windows DVD, and 2) Oracle Enterprise Manager 10g Release 4 Grid Control (10.2.0.4.0) patch for Microsoft Windows DVD.
But I don't have Metalink support. How can I upgrade the Oracle DB?
After upgrading from 10g to 11g, the SQL below is not working. I have an issue with CONNECT BY: if we use it with a subquery, it hangs.
select item_code
  from bom_list_pos
 where ln_id in (select ln_id
                   from bom_list_nodes
                  start with ln_id in (select ln_id
                                         from bom_used_work_pack
                                        where rownum = 1)
                connect by prior ln_id = parent_ln_id)
On 10g it ran in less than a minute, but on 11g it hangs. Below is the explain plan.
I am migrating an Oracle 10g database on 32-bit Windows to an 11gR2 database on an HP-UX machine. I have exported all the data from the Windows machine using Data Pump, and the plan is to import it into the 11g DB on UNIX. Before I start the import, I need a similar structure (tablespaces etc.) on the newly created 11g DB on UNIX. I am wondering what the best option would be to:
1. Generate the code to create all the tablespaces on the new database.
2. How can I tell the import to go into a different tablespace?
3. I would like to import the indexes into a separate tablespace from the data. (See the sketch below.)
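A hedged sketch covering points 1 and 2; the dump file and tablespace names are placeholders. The tablespace DDL can be pulled from the source database with DBMS_METADATA, and Data Pump import can remap each source tablespace to a target one:

SET LONG 200000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('TABLESPACE', tablespace_name)
  FROM dba_tablespaces;

impdp system directory=DATA_PUMP_DIR dumpfile=win10g.dmp remap_tablespace=USERS_DATA:APP_DATA remap_tablespace=USERS_IDX:APP_IDX

On point 3, Data Pump has no dedicated "indexes to a different tablespace" parameter; REMAP_TABLESPACE works per source tablespace, so indexes land wherever their source tablespace is mapped. A common workaround (not shown here) is to exclude indexes on import and re-create them from an edited SQLFILE.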