Is it possible to gather stats for a schema while it is in use? When I try to analyze the tables of the schema, it reports that the statistics for the table are locked. So instead of analyzing the tables one by one, can I gather the schema stats while the objects of that schema are still in use (i.e., while DML or SELECT statements are being issued against those schema objects)?
DB version : 10.2.0.4
OS version : RHEL 5.8
DB type : RAC
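The stats lock is independent of the tables being in use; DML does not block DBMS_STATS. A minimal sketch, assuming the schema is MYSCHEMA (a placeholder): find which tables have locked stats, unlock at schema level, then gather schema stats while the application keeps running.

-- which tables have locked statistics?
SELECT table_name, stattype_locked
  FROM dba_tab_statistics
 WHERE owner = 'MYSCHEMA'
   AND stattype_locked IS NOT NULL;

EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('MYSCHEMA');
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MYSCHEMA', cascade => TRUE);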
I have large partitioned tables (4 partitions are added every month). Is it possible to collect incremental statistics on these objects (9i)? If I collect stats with granularity => 'ALL' and estimate_percent => 100 the stats are accurate, but it takes a very long time.
One option may be to collect stats with granularity => 'PARTITION' for each new partition (this is quite fast), but what about the global table stats?
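True incremental global stats only arrived in 11g, so on 9i the usual compromise is a sketch like the following (schema, table, and partition names are placeholders): gather the new partition at full precision, then refresh the global stats periodically with a smaller sample.

BEGIN
  -- accurate stats for just the newly added partition
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MYSCHEMA',
    tabname          => 'BIG_PART_TAB',
    partname         => 'P_2012_01',
    granularity      => 'PARTITION',
    estimate_percent => 100);

  -- cheaper periodic refresh of the global (table-level) stats
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MYSCHEMA',
    tabname          => 'BIG_PART_TAB',
    granularity      => 'GLOBAL',
    estimate_percent => 10);
END;
/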
The database is 11.2.0.3 on a Linux machine. I issued the following command, but the session was a little slow. The table is about 50 GB and has 3 indexes. I specified degree=>8 for parallel processing.
When gathering statistics on the table itself, parallel slaves were invoked and that phase finished quickly enough. However, when it moved on to gathering statistics on the indexes, only one active session was invoked, so the degree=>8 option was effectively ignored.
My question is:
Do I need to use dbms_stats.gather_index_stats instead of the cascade option in order to gather statistics on the indexes in parallel?
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SDPSTGOUT',
    tabname          => 'OUT_SDP_CONTACT_HIS',
    estimate_percent => 10,
    degree           => 8,
    method_opt       => 'FOR ALL COLUMNS SIZE 1',
    granularity      => 'ALL',
    cascade          => TRUE);
END;
/
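If cascade turns out to serialize the index phase, one hedged alternative to test is gathering the table with cascade => FALSE and calling gather_index_stats per index with an explicit degree. A sketch (the index name below is hypothetical):

BEGIN
  DBMS_STATS.GATHER_INDEX_STATS(
    ownname          => 'SDPSTGOUT',
    indname          => 'OUT_SDP_CONTACT_HIS_IX1',  -- hypothetical index name
    estimate_percent => 10,
    degree           => 8);
END;
/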
I need to exclude a single schema from the automatic stats gathering feature in 11g. The tables in this schema are analyzed at the appropriate time by the application code. The auto stats job sometimes kicks in while the tables are being updated or loaded, which can skew execution plans during the updates/inserts.
I've searched through the Oracle documentation and cannot find a way to simply "exclude" the schema without locking its statistics. I see it is possible to disable auto stats at the database level, but not at the schema level.
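I could not find a supported per-schema exclusion either. The workaround usually suggested, even though it does involve the lock the poster wants to avoid, is to lock the schema's stats so the auto job skips them, and have the application override the lock with force => TRUE when it gathers at the right time. A sketch (MYSCHEMA/MYTABLE are placeholders):

EXEC DBMS_STATS.LOCK_SCHEMA_STATS('MYSCHEMA');

BEGIN
  -- application-driven gather; force => TRUE overrides the stats lock
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'MYSCHEMA',
    tabname => 'MYTABLE',
    force   => TRUE);
END;
/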
I have several databases that I've recently upgraded from 9i to 11g. On all of them, the automatic stats gathering process has worked just fine every night during the maintenance window.
However, I have this other database that I created, and it seems the only stats being gathered are on the SYS and SYSTEM schemas, not the application schema that holds all of our tables.
I did some searching, but I'm not sure I was using the right search terms, because I came up empty.
BANNER
-----------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Solaris: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
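A hedged diagnostic sketch, assuming the application schema is MYSCHEMA (a placeholder): check whether the auto task even considers those tables stale, and when they were last analyzed. If STALE_STATS is NO, the job has nothing to do for them.

SELECT table_name, last_analyzed, stale_stats
  FROM dba_tab_statistics
 WHERE owner = 'MYSCHEMA';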
I am connected to a remote database as a user named TEST; this user has DBA privileges. I am not able to gather statistics for the TEST schema, nor for any table that exists in that schema.
SQL> EXEC dbms_stats.gather_schema_stats('TEST', cascade=>TRUE);
BEGIN dbms_stats.gather_schema_stats('TEST', cascade=>TRUE); END;

*
ERROR at line 1:
ORA-44004: invalid qualified SQL name
ORA-06512: at "SYS.DBMS_STATS", line 13210
ORA-06512: at "SYS.DBMS_STATS", line 13556
ORA-06512: at "SYS.DBMS_STATS", line 13634
ORA-06512: at "SYS.DBMS_STATS", line 13593
ORA-06512: at line 1
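For what it's worth, ORA-44004 comes from the DBMS_ASSERT name validation inside DBMS_STATS, so it usually points at the name string itself failing validation (a stray quote, whitespace, or an invisible pasted character) rather than a privilege problem. A minimal sanity check is to retype the command by hand with the named parameter:

EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'TEST', cascade => TRUE);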
I am looking at a performance issue at the moment and trying to replicate it on a test system. Initially I am looking at the impact of up-to-date statistics on the main schema's objects.
For this I wanted to:
1. Run the batch with whatever stats were present in the database.
2. Flashback the DB to before the batch.
3. Gather stats.
4. Re-run the batch with updated stats and compare results.
However, I inadvertently ran the stats job before running the load the first time! I have the SCN from when the environment was set up like production (i.e., before the stats were gathered), so am I correct in saying that if I flash back to this point the stats will be "old" and I can just run the batch then? I know I can verify this after flashing back the database by looking at LAST_ANALYZED on the tables, but it would be good to know beforehand, as it's a 12-hour batch.
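Since optimizer statistics live in the data dictionary, FLASHBACK DATABASE rewinds them along with the application data. A sketch of the flashback-and-verify sequence, assuming flashback logging was enabled at setup (the SCN and schema name below are placeholders):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO SCN 1234567;
ALTER DATABASE OPEN RESETLOGS;

-- verify the stats really reverted before starting the 12-hour batch
SELECT table_name, last_analyzed
  FROM dba_tables
 WHERE owner = 'MYSCHEMA';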
I am quite confused by the optimizer stats collection job on 11g. When I run the following query I see that the collection is enabled.
SQL> select client_name, status from dba_autotask_client;
CLIENT_NAME                                                      STATUS
---------------------------------------------------------------- --------
auto optimizer stats collection                                  ENABLED
auto space advisor                                               ENABLED
sql tuning advisor                                               ENABLED
I can gather statistics manually using DBMS_STATS, but there is no automated gathering of optimizer stats. How can I have the system run and collect statistics on a daily basis?
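Since the client shows ENABLED, the next thing to check is whether the maintenance windows themselves are enabled. A minimal sketch:

SELECT window_name, enabled FROM dba_scheduler_windows;

-- re-enable the stats task across all maintenance windows if needed
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/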
I am on 11.2 on Linux. I am looking into a performance issue. The issue is around one particular SQL involving about 5 tables. I re-gathered statistics on the 2 main tables in the query (out of the 5).
When I say re-gathered, I first did DBMS_STATS.DELETE_TABLE_STATS and then did DBMS_STATS.GATHER_TABLE_STATS.
Earlier we had histograms on these tables, which I removed, gathering stats without histograms.

SQL> select table_name, num_rows, sample_size, last_analyzed from user_tables where
  2  table_name in ('DETAIL_TABLE','MASTER_TABLE');
2 rows selected. Then I ran the SQL again a couple of times (actually, that SQL is in a stored procedure, which I ran a couple of times). I found a wonderful query on the internet which tells me when a SQL ran and which plan (identified by its hash value) it used. Using it I checked whether my SQL had started using a different plan, but it used exactly the same plan as before I re-gathered the stats. See the LAST_ANALYZED time above and BEGIN_INTERVAL_TIME below: the same SQL ran before and after stats collection, with the same PLAN_HASH_VALUE.
My question is: when I re-gathered stats on 2 of the 5 tables in the SQL, are the dependent plans not flushed out of the SGA? I was expecting at least a new plan hash value to show up for my SQL after the stats collection.
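One likely explanation: by default DBMS_STATS uses no_invalidate => DBMS_STATS.AUTO_INVALIDATE, so existing cursors are invalidated gradually over a rolling window rather than immediately, and the old plan can keep being reused for hours. A sketch of forcing immediate invalidation (table name taken from the post):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => USER,
    tabname       => 'MASTER_TABLE',
    no_invalidate => FALSE);  -- invalidate dependent cursors right away
END;
/

Also worth noting: even after a hard parse, a new gather does not guarantee a new plan hash value; if the optimizer arrives at the same plan, the hash stays the same.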
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01": mvanmannekes/******** schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
Oracle 10g has the automatic stats gathering feature. In this case, is it still necessary to run DBMS_STATS on tables manually? Do manually gathered stats become stale when the auto stats job runs?
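A hedged way to see what the auto job will consider stale (in 10g it uses roughly a 10% change threshold). MYSCHEMA is a placeholder:

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- DML volume accumulated since the last gather
SELECT table_owner, table_name, inserts, updates, deletes
  FROM dba_tab_modifications
 WHERE table_owner = 'MYSCHEMA';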
I am gathering stats using the block below, for some 3 million records, and there are 6 indexes on the table. What is the relevance of the value 4 here (i.e., method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4')? If I increase 4 to 250, will gathering stats get any faster? My intention is to speed up the gathering of stats.
PROMPT CREATE TABLE tst_fetch_vendor_data

CREATE TABLE tst_fetch_vendor_data (
  vendor_data_seq_no     NUMBER         NOT NULL,
  study_seq_no           NUMBER         NOT NULL,
  vendor_record_seq_no   NUMBER         NOT NULL,
  control_column_seq_no  NUMBER         NOT NULL,
  resolved_value         VARCHAR2(4000) NULL,
  original_value         VARCHAR2(4000) NULL,
  transaction_user       VARCHAR2(30)   NOT NULL,
[code]....
It's just a temporary table in which data comes and goes. I am using it in the middle of a process, like below:
--EXECUTE IMMEDIATE 'TRUNCATE TABLE TST_FETCH_VENDOR_DATA DROP STORAGE';
INSERT /*+ APPEND */ INTO tst_fetch_vendor_data
  (SELECT *
     FROM vendor_data vd
    WHERE vd.control_column_seq_no IN
          (SELECT control_column_seq_no FROM temp_control_column));

DBMS_STATS.GATHER_TABLE_STATS('EPDSYSREP', 'TST_FETCH_VENDOR_DATA',
  estimate_percent => 100,
  method_opt       => 'for all indexed columns size auto',
  cascade          => TRUE);
...then code that uses the table. The table can contain anywhere from 0 to 108,000,000 records. Now my questions are:

1. How should I choose the sampling size (currently it's 100%)? Can I use dbms_stats.auto_sample_size, and what would the effect be? (See the sketch after this list.)
2. Is dbms_stats a good approach here, or should I use dynamic sampling?
3. What about using CTAS instead of loading the data with INSERT?
4. What about a PL/SQL table with an index, or a WITH clause query?
5. Do I need to rebuild the indexes after inserting data into the table?
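On question 1, a sketch. For context: SIZE n in method_opt is the maximum number of histogram buckets per column, so raising 4 to 250 mostly changes histogram detail, not gather speed; the sampling percentage is what dominates run time. Letting Oracle pick the sample is usually the pragmatic middle ground:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'EPDSYSREP',
    tabname          => 'TST_FETCH_VENDOR_DATA',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- instead of 100
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/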
Including order number, customer name and contact details (email, address, telephones) and the date the order was placed; order lines including product code, quantity ordered and cost charged; and data on each product in the catalog including product code, name, description, unit price and category.
The user should be able to input n order items (where 1 ≤ n ≤ 20) as parameters.
I am giving a presentation at my university, but the students are not very interested in Oracle or databases. So I need a title for the APEX topic that gets the other students to visit my presentation.
The presentation should show the basics of APEX and how to earn money with it. I thought of:
1) Oracle in the world of Web 2.0
2) How to get online with Oracle
3) What to show when someone tells you Access is a database
4) Oracle in a drag and drop environment even for developers
5) Best of Breed: Oracle SQL / PL/SQL, HTML, XML and Java in a nutshell
I developed a point-of-sale application using Developer 6i and Database 10g XE. Now I want to use this software online. Can I do it? If so, how?
I have implemented RAC One Node with two machines (dbtest01, dbtest02) on 11.2.0.3 / Red Hat 5.3, one online and the other offline. I have also created TAF.

dbtest01 is online, and I created this test block:

DECLARE
  i NUMBER(36) := 1;
BEGIN
  WHILE (i < 10000000) LOOP
    DBMS_OUTPUT.PUT_LINE(i);
    i := i + 1;
  END LOOP;
END;
/

I executed the block above as a test, then relocated the database to the dbtest02 machine using the srvctl utility.

The session relocated successfully with a new session ID, but the block stopped at around 25,000 iterations, and I want that work to continue. I also tried to find the Omotion utility in all the usual paths but couldn't find it; where would it be? I want to relocate all sessions without losing in-flight work.

From the application side I get disconnected after the relocation; the application server needs to be restarted before it works again.
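For what it's worth, from 11.2.0.2 onward Omotion's functionality is integrated into srvctl itself, which is why there is no separate utility to find. A minimal sketch (MYDB is a placeholder for the db_unique_name); note that TAF fails over idle sessions and running SELECTs, so an in-flight PL/SQL loop is not replayed automatically:

# relocate the RAC One Node database to dbtest02; -w gives existing
# transactions time (in minutes) to drain before the old instance stops
srvctl relocate database -d MYDB -n dbtest02 -w 30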
One of the Hitachi support guys suggested creating a separate disk group for the online redo logs. His rationale was that ORLs are write-only files and would be better placed in their own disk group.
I am using SQL Developer 2.1 to migrate tables from a Sybase 12 database to Oracle 11g. I used the online data move option to move the Sybase data into the Oracle tables, but even after the data move completed, not all rows had been moved from the Sybase tables to the corresponding Oracle tables. Some rows are missing, yet no error message is displayed. How can I find out what's going wrong?
Until now I was pretty sure that standby redo logs are applied to the database without any delay, but everything suggests the process works differently. On the primary server I created a table and added some records to it.

Next I switched to the standby and opened the redo logs (with a hex editor) to look for the CREATE TABLE and INSERT commands, and I found them. So this proves the standby logs are being shipped by the primary database.

Next I cancelled the recovery process and opened the database in read-only mode, and there were no records in the standby DB; moreover, the table didn't exist on the standby at all. I started to get confused, because I expected the table and its contents to be on the standby database.

Next I restarted the recovery process on the standby and did some log switches on the primary. After that I went back to the standby, cancelled recovery, and opened the database in read-only mode. The table and its contents were there.
So my question is:
Should standby redo logs be applied to the database online, or are they only applied after promoting the standby database to primary? It looks like only the contents of the archived logs are applied to the standby database. Is that correct?
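What you observed is the expected behavior when redo is applied only from archived logs: managed recovery waits for a log switch. With standby redo logs configured, real-time apply makes the managed recovery process apply redo as it arrives instead. A minimal sketch (run on the standby):

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;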
I've just successfully duplicated a standby database.
From the alert log:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORA102\CTA\REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
[code].....
When I tried to add the online and standby redo logs, it errored out.
SYS@CTA> select logdetail.member, loggroup.group#, loggroup.sequence#, loggroup.archived, loggroup.status lg_status, logdetail.status ld_detail, logdetail.type
  2  from v$log loggroup join v$logfile logdetail
  3  on loggroup.group# = logdetail.group#;

MEMBER
--------------------------------------------------------------------------------
    GROUP#  SEQUENCE# ARC LG_STATUS        LD_DETA TYPE
---------- ---------- --- ---------------- ------- -------
[code].....
Based on my understanding from [URL] ....
Quote:
As part of the duplicating operation, RMAN automates the following steps:
Creates a control file for the duplicate database
Restores the target datafiles to the duplicate database and performs incomplete recovery by using all available incremental backups and archived redo logs
Shuts down and starts the auxiliary instance (refer to "Task 4: Start the Auxiliary Instance" for issues relating to client-side versus server-side initialization parameter files)
Opens the duplicate database with the RESETLOGS option after incomplete recovery to create the online redo logs (except when running DUPLICATE ... FOR STANDBY, in which case RMAN does not open the database). In other words, when duplicating for a standby database, RMAN does not create the online redo logs.
How should I add the online and standby redo logs? Whenever I transfer the redo log files from the primary to the standby, I encounter the following error:
Dump file d:\ora102\cta\dump\cta_arc0_3624.trc
Tue Sep 13 19:21:53 2011
ORACLE V10.2.0.4.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the OLAP, Data Mining and Real Application Testing options
Windows XP Version V5.1 Service Pack 2
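For what it's worth, redo log files are never copied from the primary; they are created (or recreated) locally on the standby. A sketch, with group numbers, paths and sizes as placeholders:

-- recreate a missing online redo log in place on the standby
ALTER DATABASE CLEAR LOGFILE GROUP 1;

-- standby redo logs are likewise added locally
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('D:\ORA102\CTA\SRL04.LOG') SIZE 50M;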
Although I removed the current online redo log file at the OS level in Linux (Oracle Linux), when I type COMMIT it still says "commit complete".

Is this consistent with the principle: "Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed"?

I think it could lead to loss of data in some cases. I'm using Oracle 11g R2 on OEL (x64).

P.S.: I haven't multiplexed the current ORL group files...
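A likely explanation, offered as a hedged note: on Linux, removing a file only unlinks its directory entry; LGWR still holds the file open and keeps writing to the same inode, so the redo really does reach disk and the commit guarantee is not violated until the instance closes or the file handle is released. The deleted-but-open log is visible from the OS:

# list files that are open but unlinked (link count 0)
lsof +L1 | grep -i redo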
We have a single master schema that many developers access, all sharing the same password.
Now I would like to trace all the changes made by each user. So I created individual users for everyone and granted them permission to access that schema. Is it possible to audit the changes made by each user in that particular schema?
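Yes; with individual accounts, standard auditing can attribute DML to each user (it requires the audit_trail initialization parameter set to DB). A sketch, with MASTER/MYTABLE as placeholder names:

-- object-level auditing on a specific table in the master schema
AUDIT INSERT, UPDATE, DELETE ON master.mytable BY ACCESS;

-- then review who did what
SELECT username, obj_name, action_name, timestamp
  FROM dba_audit_trail
 WHERE owner = 'MASTER';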