How To Calculate Sizes Of Archive / Redo In Data Warehouse DB
May 24, 2011
Before I begin, I want to clarify that I am a newbie in data warehouse administration. I need to know how to calculate the sizes of the archive and redo on a data warehouse DB, in order to do an initial sizing of the DB at the disk level.
I need to calculate the redo log volume generated by certain tables. If I have 100 tables in the database, I need to know the daily redo log volume for only 25 of them. How can I calculate this? Is LogMiner useful for this?
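One hedged sketch of how LogMiner could be used for this (it counts redo records rather than exact redo bytes, so treat the result as an approximation; the archive path and schema name below are placeholders):

-- Add one archived log and start LogMiner with the online catalog
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/arch_1_1234.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Count redo records per table; restrict to the tables of interest
SELECT seg_owner, seg_name, COUNT(*) AS redo_records
FROM v$logmnr_contents
WHERE seg_owner = 'DWH'            -- placeholder schema
GROUP BY seg_owner, seg_name
ORDER BY redo_records DESC;

EXECUTE DBMS_LOGMNR.END_LOGMNR;

Repeating this over a full day's archived logs gives a per-table daily profile.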
My data warehouse application involves partitioned tables where indexes are initially unusable on the last partition and are only built once the next partition is created. We have a query tool that our users use to query these tables, and it has an option "include not indexed data", which essentially tells the tool whether to include that last partition in the query. If this is checked and the users are filtering on one of the indexed fields, there is the potential for an Oracle error stating it tried to use an unusable index, so our tool basically builds the query like this:
select ... from (
  select ... from table where partition_key < (last usable partition key)
  union
  select /*+ NO_INDEX */ ... from table where partition_key >= (last usable partition key)
) where index_field = :value
I have had a difficult time getting reasonable data to test this myself, so I'm asking the question here:
Is Oracle likely pushing that outer filter down into the individual branches of the UNION? If we were to move the index_field filter into each branch of the union ourselves, would it make a difference performance-wise?
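For comparison, a sketch of the manually pushed-down form (using the same placeholder names as the query above):

select ... from (
  select ... from table
  where partition_key < (last usable partition key)
    and index_field = :value
  union
  select /*+ NO_INDEX */ ... from table
  where partition_key >= (last usable partition key)
    and index_field = :value
)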
I created a data warehouse in Oracle 10g with three dimensions and one cube, after which it created 4 tables. How do I use an INSERT SQL statement to insert data into those tables, and how do I access them?
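A minimal sketch, assuming a hypothetical TIME_DIM dimension table (the actual table and column names generated for the cube can be checked in USER_TAB_COLUMNS):

-- Insert one row into a (hypothetical) time dimension, then read it back
INSERT INTO time_dim (time_id, year_num, month_num) VALUES (1, 2011, 5);
COMMIT;
SELECT * FROM time_dim;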
OLTP DB --> OLTP DB (physical standby, Active Data Guard) --> Data warehouse DB
We are only allowed to connect to the OLTP DB (physical standby, Active Data Guard) from the data warehouse DB. Is there any possibility of using some "native" Oracle method of data extraction (replication) from the OLTP DB (physical standby, Active Data Guard) to the data warehouse DB?
As far as I know we cannot create a materialized view log in the OLTP DB (physical standby, Active Data Guard) in order to do data replication, but maybe there are some other ways?
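One possibility, sketched under the assumption that the standby is open read-only and reachable over a database link: without MV logs, fast refresh is out, but a complete-refresh materialized view on the warehouse side can still pull from the standby. The link, user, and table names below are placeholders.

CREATE DATABASE LINK oltp_stby
  CONNECT TO dw_reader IDENTIFIED BY secret
  USING 'OLTP_STBY';

CREATE MATERIALIZED VIEW orders_mv
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 1   -- full re-pull once a day
AS SELECT * FROM orders@oltp_stby;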
I am trying to build a data warehouse for the Consumer Price Index, and so I have downloaded data from the Bureau of Statistics. It is in Excel format, and since I am working with Oracle Warehouse Builder I have converted it to a .csv file so that I can use it as a data source.
Question 1: Is it practical to use a single .csv file as the source of data for a data warehouse?
Question 2: I have 3 dimension tables and a fact table. The dimensions are, one, the region (as the data is organized into regions, states, etc.); two, consumer goods and services (as the data is organized into groups of goods and services, and goods/services types); and finally time (year and month).
Now how am I going to do the mapping here? Is it possible to do a one-to-one mapping, since all the data required by the dimensions is located in the .csv file?
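On question 1: a single .csv is workable at this scale. One hedged sketch of a loading approach is to expose the file as an external table and map the dimensions and fact from that (the directory path, file name, and columns below are placeholders):

CREATE DIRECTORY cpi_dir AS '/data/cpi';

CREATE TABLE cpi_ext (
  region       VARCHAR2(50),
  item_group   VARCHAR2(100),
  year_num     NUMBER(4),
  month_num    NUMBER(2),
  index_value  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY cpi_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('cpi.csv')
);

Each dimension load then becomes a SELECT DISTINCT over the relevant columns of cpi_ext, which is effectively the one-to-one mapping asked about.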
We are working on a data warehouse (around 50G) architecture with the following acquired environment:
Single server X3650 M4, dual CPU (16 cores in total), 48G RAM; Oracle Standard 10g x64 on Windows 2008 x64; 128G SSD x 8 behind an IBM ServeRAID M5110e SAS/SATA controller.
Due to budget concerns, we will be running the app server (BusinessObjects 4.0 with Tomcat) and the DB server on the same machine. We have a user base of around 30 people on the app server.
We intend to have external redundancy using the IBM RAID card in a RAID 10 configuration. I wonder what kind of disk config yields better performance if we only have write updates in the morning and 95% reads for the rest of the day:
RAID 1 for OS (128G SSD x 2, including the DB logfiles); RAID 10 for the DB server (128G SSD x 6).
I have heard that ASM provides better disk management, but I wonder whether it increases performance in any way.
The following code is a stored procedure I plan to use to populate a Data Warehouse dimension using data from two OLTP tables which already exist in my database. Notice that in my cursor select statement, I calculate an attribute using substr and instr, and I also assign a true or false value to a flag using a CASE statement.
CREATE OR REPLACE PROCEDURE populate_product_dimension AS
  v_Count NUMBER := 0;
  v_NumRecs NUMBER;
  /* Declare a cursor on the following query, which returns multiple rows of data from the product and price_hist tables */
[code]....
As far as I can tell, Product_Code is declared correctly in the cursor declaration's SELECT statement.
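Since the full code was truncated above, here is a hedged sketch of the kind of cursor described; the column names are assumptions (only the product and price_hist table names come from the post):

CURSOR product_cur IS
  SELECT p.product_id,
         -- attribute derived with SUBSTR/INSTR: the text before the first '-'
         SUBSTR(p.product_name, 1, INSTR(p.product_name, '-') - 1) AS product_code,
         -- flag assigned with a CASE expression
         CASE WHEN ph.price IS NOT NULL THEN 'T' ELSE 'F' END AS price_flag
  FROM product p
  LEFT JOIN price_hist ph ON ph.product_id = p.product_id;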
What have people set for SGA and PGA sizes on their larger-usage, larger-data databases? I've been watching one of our warehouses grow both in terms of tables (number and sizes) and user groups querying the database. We're at 96G SGA and 10G PGA, but I was thinking in terms of a 1/2 TB machine to pin some larger tables. I know we'll have SSD soon, but I am seeing enormous numbers of reports using windowing and analytic functions and creating inline sets. How big, in general, do you have your larger systems set?
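Before resizing at that scale, the advisory views give a data point worth checking; a sketch:

-- Estimated effect of other SGA sizes on DB time
SELECT sga_size, sga_size_factor, estd_db_time_factor
FROM v$sga_target_advice;

-- Estimated PGA cache hit percentage at other PGA targets
SELECT pga_target_for_estimate/1024/1024 AS pga_mb, estd_pga_cache_hit_percentage
FROM v$pga_target_advice;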
I did some Google searches about large numbers of extents and ASSM. I see bits and pieces on the web. This is something I need to look at while testing an application. Not looking to go into 'why' I would use smaller extents; I just want to make sure I have what I need to look for during testing. Issues with massive numbers of extents:
1. DBA_EXTENTS queries are really slow.
2. Issues truncating tables (due to having to read lots of extents).
3. Issues splitting MAXVALUE partitions and dropping partitions.
4. If I stay away from ASSM, would this reduce these issues? Are there any other performance issues or other issues I need to know about to check when I do tests?
Any issues with query or insert wait time? The tables that would get smaller extents would have thousands of partitions/sub-partitions, and most of these sub-partitions will be rather small. I just want to test for a variety of different cases; the 'why' will come out during testing.
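One simple check to run during the tests, sketched below (the schema name is a placeholder; note that, per issue 1 above, this query will itself be slow on a database with massive extent counts):

SELECT owner, segment_name, COUNT(*) AS extent_count
FROM dba_extents
WHERE owner = 'APP_OWNER'
GROUP BY owner, segment_name
ORDER BY extent_count DESC;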
For maximum protection and maximum availability, as well as to enable Real-Time Apply, we need to set up the standby redo log groups. My question is: what should the standby redo log group configuration be if we set up a single-instance standby database for a 2-node RAC primary? Do we require a separate standby redo thread for each primary online redo thread, or does RFS use multiple standby redo groups from the same thread to receive the redo changes from both nodes?
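The commonly cited guideline is (online redo groups per thread + 1) standby redo log groups per primary thread, sized to match the online logs and created with the thread specified explicitly. A sketch for a 2-thread primary with 3 groups per thread (paths, sizes, and group numbers are placeholders):

ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 ('/u01/oradata/stby/srl11.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 ('/u01/oradata/stby/srl12.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 ('/u01/oradata/stby/srl13.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 14 ('/u01/oradata/stby/srl14.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 15 ('/u01/oradata/stby/srl15.log') SIZE 50M;
-- ... and so on for the remaining thread 2 groups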
We are using Oracle 10g. With our code, Oracle partitions are currently all sized the same way: each partition uses 10MB for data and 12MB for indexes (with the 6 default indexes), even if very few records are written to the partition.
We create partitions in advance as part of a nightly job of about 10 minutes' duration. Can some intelligence be added whereby, based on statistics, we decide the size of a partition dynamically? A lot of space is getting wasted because of this.
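One hedged sketch of that idea: have the nightly job choose the INITIAL extent from recent row counts before adding the partition. The table, partition, and sizes below are placeholders, and this assumes a range-partitioned table:

-- For a partition expected to stay small, create it with small extents
ALTER TABLE sales ADD PARTITION p20110526
  VALUES LESS THAN (TO_DATE('2011-05-27', 'YYYY-MM-DD'))
  STORAGE (INITIAL 64K NEXT 64K);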
I have read the following statement from a link [URL]...
Oracle Database XE can be installed on any size host machine with any number of CPUs (one database per machine), but XE will store up to 4GB of user data, use up to 1GB of memory, and use one CPU on the host machine.
Regarding the calculation of this 4GB size: how can we calculate it?
By simply going to the DBF files and looking at their sizes? Or by exporting a dump and looking at the size of that dump?
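Neither of those quite matches what XE enforces; the limit is on user data inside the database files, so a closer approximation (sketched below, with the exclusion list of Oracle-maintained schemas abbreviated) is to sum segment sizes for non-system owners:

SELECT ROUND(SUM(bytes)/1024/1024/1024, 2) AS user_data_gb
FROM dba_segments
WHERE owner NOT IN ('SYS', 'SYSTEM', 'XDB', 'CTXSYS', 'MDSYS', 'OUTLN', 'DBSNMP');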
I want to relocate one redo log on the primary database. Currently this redo log is on G:\oracle\oradata; I want to move it to F:\oracle\oradata. How do I do that, given that the same redo log is also on the standby database?
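A hedged sketch of the usual primary-side sequence (group number and size are placeholders; the standby side then needs the equivalent change, taking STANDBY_FILE_MANAGEMENT into account):

-- Check v$log first and switch until the group being moved is INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE ADD LOGFILE GROUP 3 ('F:\oracle\oradata\redo03.log') SIZE 50M;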
I've only ever successfully duplicated a standby database. From the alert log:

ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORA102\CTA\REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
[code].....
When I tried to add the online and standby redo logs, it errored out:
SYS@CTA> select logdetail.member, loggroup.group#, loggroup.sequence#, loggroup.archived, loggroup.status lg_status, logdetail.status ld_detail, logdetail.type
  2  from v$log loggroup join v$logfile logdetail
  3  on loggroup.group# = logdetail.group#;

MEMBER
--------------------------------------------------------------------------------
    GROUP#  SEQUENCE# ARC LG_STATUS        LD_DETA TYPE
---------- ---------- --- ---------------- ------- -------
[code].....
Based on my understanding from [URL] ....
Quote:
As part of the duplicating operation, RMAN automates the following steps:
Creates a control file for the duplicate database
Restores the target datafiles to the duplicate database and performs incomplete recovery by using all available incremental backups and archived redo logs
Shuts down and starts the auxiliary instance (refer to "Task 4: Start the Auxiliary Instance" for issues relating to client-side versus server-side initialization parameter files)
Opens the duplicate database with the RESETLOGS option after incomplete recovery to create the online redo logs (except when running DUPLICATE ... FOR STANDBY, in which case RMAN does not open the database). So when duplicating for a standby database, RMAN does not create the online redo logs.
How should I add the online and standby redo logs? If I transfer the redo logs from the primary to the standby, I always encounter the following error:
Dump file d:\ora102\cta\dump\cta_arc0_3624.trc
Tue Sep 13 19:21:53 2011
ORACLE V10.2.0.4.0 - Production vsnsta=0 vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the OLAP, Data Mining and Real Application Testing options
Windows XP Version V5.1 Service Pack 2
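Rather than transferring the online logs from the primary, a hedged sketch of what is usually done on the duplicated standby (group numbers and paths are placeholders): clear each online log group so Oracle recreates the files locally, then add the standby redo logs directly.

ALTER DATABASE CLEAR LOGFILE GROUP 1;  -- recreates the missing online log members
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('D:\ORA102\CTA\SRL04.LOG') SIZE 50M;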
Is there a way to set the default sizes for the canvas, Object Navigator, and Properties windows in Forms designer so that they don't maximise when opening them? I tried to set them in the caupref and cagpref files to no avail.
1) If I make changes to a table on the primary database and then open the standby database in read-only mode, I can see those changes immediately only if Real-Time Apply is enabled. Am I correct? The database version is 10.2.0.4.
2) From 11g, it is possible to apply redo while the standby is open in read-only mode; prior to 11g, it was not possible. Right?
3) Should I first cancel Managed Recovery prior to issuing “ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY”?
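On question 3, a sketch of the typical 10g sequence (hedged; check SWITCHOVER_STATUS in V$DATABASE at each step rather than relying on this outline):

-- On the primary
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
-- On the standby, once V$DATABASE.SWITCHOVER_STATUS permits it
SELECT switchover_status FROM v$database;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE OPEN;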
1. I exported an MDL file from OWB 10.2.
2. I created an OWB repository on the server where I have installed my 11gR2 database.
3. I installed OWB 11gR2 on another machine.
4. I imported the MDL file into the new repository.
I have a partitioned table that is streamed to another database. I need to archive data on that table; that is, I need to add a partition and remove a partition.
If I make those changes to the source table, will it stream over to the destination table?
If not, can I ...
pause streaming, make the changes to the source table, make the same changes to the destination table, then re-enable streaming. I know making data changes to the destination table can screw up Streams, but I'm not sure if that holds for DDL.
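A sketch of the pause/re-enable part using the Streams admin packages (the capture and apply process names are placeholders; whether the partition DDL replicates on its own depends on how the capture and apply rules were created, e.g. with include_ddl, so pausing and changing both sides as described is the cautious route):

BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'CAPTURE_DW');
  DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'APPLY_DW');
END;
/
-- ... add/drop the partition on both source and destination ...
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_DW');
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'CAPTURE_DW');
END;
/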
I have a RAC system with a DR setup. This is a test environment and it doesn't have any backup; I'm not sure why DR is required here, but it exists. Since this is a test system, a lot of archives get generated, and deleting the archives has become a daily manual job for this server.
I want a script to delete archive logs that are in non-ASM storage (i.e. on a filesystem) after ensuring that each archive log has been applied on the standby database, ideally done entirely with RMAN.
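A sketch of an RMAN-only approach (hedged: the APPLIED ON STANDBY deletion policy behaves differently across releases, so verify it on your version; the APPLIED column of V$ARCHIVED_LOG is the underlying check):

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
DELETE NOPROMPT ARCHIVELOG ALL;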
We are planning to set up a Data Guard (maximum performance) configuration between two Oracle 9i databases on two different servers.
The archive logs on the primary server are deleted via an RMAN job based on a policy; I'm just wondering how I should delete the archive logs that are shipped to the standby.
Is putting a cron job on the standby to delete archive logs that are, say, 2 days old the proper approach, or is there a built-in Data Guard option that would somehow allow archive logs that are no longer needed, or are two days old, to be deleted automatically?
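A cron-driven RMAN script is a reasonable approach on 9i, since the standby-aware deletion policies arrived in later releases. A sketch of the script body to run against the standby (the retention window is a placeholder):

DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 2';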