Data Warehouse Optimization

Jun 3, 2013

My data warehouse application involves partitioned tables where indexes on the newest partition are initially left unusable and are only built once the next partition is created. Our users query this table through a tool that has an option "include not indexed data", which essentially tells the tool whether to include that last partition in the query. If this is checked, and they are filtering on one of the indexed fields, there is the potential for an Oracle error stating it tried to use an unusable index, so our tool basically builds the query like this:

select ... from (
    select ... from table where partition_key < (last usable partition key)
    union
    select /*+ NO_INDEX */ ... from table where partition_key >= (last usable partition key)
)
where index_field = :value

I have had a difficult time getting reasonable data to test this myself, so I'm asking the question here:

Is Oracle pushing that outer filter into the individual branches of the UNION? If we moved the index_field filter into each of the inner queries in the union ourselves, would it make a difference performance-wise?
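One way to answer this without representative data volumes is to look at where the predicate lands in the plan. A minimal sketch, with placeholder names (fact_tab, index_field, partition_key are assumptions, not the real schema): the "Predicate Information" section of the output shows whether index_field = :value was applied inside each UNION branch or only on the outer view. Note also that since the two branches cover disjoint partition_key ranges, UNION ALL would avoid the duplicate-eliminating sort that UNION forces.

EXPLAIN PLAN FOR
SELECT *
FROM (
       SELECT index_field, partition_key
       FROM   fact_tab
       WHERE  partition_key < :last_usable
       UNION
       SELECT /*+ NO_INDEX(fact_tab) */ index_field, partition_key
       FROM   fact_tab
       WHERE  partition_key >= :last_usable
     )
WHERE index_field = :value;

-- the Predicate Information section shows the filter placement
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, NULL, 'TYPICAL'));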


Data Warehouse - How To Insert / Access Data In The Tables

May 30, 2011

I created a data warehouse in Oracle 10g with three dimensions and one cube; after that it created 4 tables. How do I use INSERT statements to put data into those tables, and how do I access the data afterwards?
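As a hedged sketch only (these table and column names are invented; the OWB-generated tables in your schema will have their own, typically with surrogate keys), the pattern is: load the dimension tables first, then the cube/fact table that references them, and read the data back by joining the fact to its dimensions.

INSERT INTO time_dim    (time_id, cal_year, cal_month) VALUES (1, 2011, 5);
INSERT INTO region_dim  (region_id, region_name)       VALUES (10, 'North');
INSERT INTO product_dim (product_id, product_name)     VALUES (100, 'Widget');
INSERT INTO sales_cube  (time_id, region_id, product_id, amount) VALUES (1, 10, 100, 2500);
COMMIT;

SELECT r.region_name, t.cal_year, SUM(s.amount) AS total_amount
FROM   sales_cube s
JOIN   region_dim r ON r.region_id = s.region_id
JOIN   time_dim   t ON t.time_id   = s.time_id
GROUP  BY r.region_name, t.cal_year;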


Move Data From Physical StandBy To Data Warehouse?

Oct 1, 2012

We have this architecture:

OLTP DB --> OLTP DB (Physical Standby, active dataguard) --> Data warehouse DB

We are only allowed to connect to the OLTP DB (physical standby, active dataguard) from the Data warehouse DB. Is there a possibility to use some Oracle "native" method of data extraction (replication) from the OLTP DB (physical standby, active dataguard) to the Data warehouse DB?

As far as I know, we cannot create a materialized view log on the OLTP DB (physical standby, active dataguard) in order to do data replication, but maybe there are other ways?
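Since the standby is open read-only, one workable pattern is to pull incrementally from the warehouse side over a database link, driving off a last-modified timestamp column. This is only a sketch under assumptions: stby_link, orders, dw_orders and the last_upd column are placeholders, and it presumes the source rows carry a reliable modification timestamp.

MERGE INTO dw_orders d
USING (SELECT order_id, status, amount, last_upd
       FROM   orders@stby_link
       WHERE  last_upd > (SELECT NVL(MAX(last_upd), DATE '1900-01-01')
                          FROM dw_orders)) s
ON (d.order_id = s.order_id)
WHEN MATCHED THEN UPDATE SET d.status   = s.status,
                             d.amount   = s.amount,
                             d.last_upd = s.last_upd
WHEN NOT MATCHED THEN INSERT (order_id, status, amount, last_upd)
                      VALUES (s.order_id, s.status, s.amount, s.last_upd);

Scheduled with DBMS_SCHEDULER this gives a simple periodic refresh. Log-based options exist (for example, GoldenGate can mine a standby's archived logs), but those are licensed separately.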


Building Data Warehouse From Single Flat File

May 2, 2012

I am trying to build a data warehouse for the Consumer Price Index, so I have downloaded data from the Bureau of Statistics. It is in Excel format, and since I am working with Oracle Warehouse Builder I have converted it to a .csv file so that I can use it as a data source.

Question1: Is it practical to use single .csv file as a source of data for a data warehouse?

Question 2: I have 3 dimension tables and a fact table. The dimensions are: one for the region (as the data is organized by region, state, etc.), two for consumer goods and services (as the data is organized in groups of goods and services, and goods/services types), and finally time (year and month).

Now how am I going to do the mapping here? Is it possible to do a one-to-one mapping, since all the data required by the dimensions is located in the .csv file?
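A single flat file is a workable source; all three dimensions and the fact are then simply mapped from the same table. One common setup, sketched with illustrative names (the directory object, file name, columns and the SKIP 1 header assumption are all placeholders): expose the CSV as an external table so OWB mappings can read it like any relational source.

CREATE TABLE cpi_ext (
  region_name VARCHAR2(50),
  item_group  VARCHAR2(100),
  cal_year    NUMBER(4),
  cal_month   NUMBER(2),
  index_value NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  )
  LOCATION ('cpi.csv')
);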


How To Calculate Sizes Of Archive / Redo In Data Warehouse DB

May 24, 2011

Before I begin, I want to clarify that I am a newbie in data warehouse administration. I need to know how to calculate the sizes of the archive logs and redo logs on a data warehouse DB, in order to make an initial sizing of the DB at the disk level.

Is there a formula to calculate the size?
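There is no single formula; archive volume is driven by how much change the ETL generates, and a common rule of thumb is to size the online redo logs so that log switches happen roughly every 15 to 20 minutes at peak load. On an existing system you can measure the daily archive volume directly from the standard v$archived_log view, which is the number the disk sizing has to cover:

SELECT TRUNC(completion_time) AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS archived_mb
FROM   v$archived_log
GROUP  BY TRUNC(completion_time)
ORDER  BY 1;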


Performance Tuning :: Best Disk Config For SME Scale Data Warehouse

May 15, 2013

We are working on a data warehouse (around 50 GB) architecture with the following acquired environment:

Single server X3650 M4 Dual CPU ( 16 core in total ) with 48G ram
Oracle standard 10g x64
Windows 2008 x64
128 SSD x 8
IBM ServeRAID M5110e SAS/SATA Controller

Due to budget concerns, we will be running the app server (Business Objects 4.0 with Tomcat) and the DB server on the same machine. We have a user base of around 30 people on the app server.

We intend to have external redundancy using the IBM RAID card in a RAID 10 configuration. I wonder what kind of disk config yields better performance if we only have write updates in the morning and 95% reads for the rest of the day?

Raid 1 for OS (128 SSD x 2, including the DB logfile)
Raid 10 for DB server (128 SSD x 6)

I have heard that ASM provides better disk management, but I wonder whether it increases performance in any way.


SQL & PL/SQL :: Debugging Stored Procedure / Populate Data Warehouse Dimension

Nov 20, 2011

The following code is a stored procedure I plan to use to populate a Data Warehouse dimension using data from two OLTP tables which already exist in my database. Notice that in my cursor select statement, I calculate an attribute using substr and instr, and I also assign a true or false value to a flag using a CASE statement.

CREATE OR REPLACE PROCEDURE populate_product_dimension
AS
v_Count NUMBER := 0;
v_NumRecs NUMBER;
/* Declare a cursor on the following query, which returns multiple rows of data from the product and price_hist tables */
[code]....

In my mind, Product_Code is declared correctly in the Cursor declaration Select statement.
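The posted body is cut off at [code]...., so the following is only a hedged reconstruction of the pattern being described; the product/price_hist column names and the product_dim target are illustrative guesses, not the poster's actual code.

CREATE OR REPLACE PROCEDURE populate_product_dimension
AS
  v_Count NUMBER := 0;
  CURSOR c_src IS
    SELECT p.product_id,
           SUBSTR(p.product_desc, 1, INSTR(p.product_desc, '-') - 1) AS product_code,
           CASE WHEN ph.end_date IS NULL THEN 'T' ELSE 'F' END       AS current_flag
    FROM   product p
    JOIN   price_hist ph ON ph.product_id = p.product_id;
BEGIN
  FOR r IN c_src LOOP
    INSERT INTO product_dim (product_id, product_code, current_flag)
    VALUES (r.product_id, r.product_code, r.current_flag);
    v_Count := v_Count + 1;
  END LOOP;
  COMMIT;
END populate_product_dimension;
/

If Product_Code comes back NULL, the usual suspect is the INSTR delimiter not being present in every row: INSTR returns 0, so SUBSTR is called with length -1 and returns NULL.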


SQL & PL/SQL :: Optimization - Reuse Of A Set?

Mar 18, 2010

I have the following situation: there are two UPDATE statements which look like this

UPDATE TABLE_B B
SET (
target

[Code]....

DB: Oracle 10g

MyFunction is relatively expensive, and the second statement works on a subset of the data considered in the first one (because of a.col2 > b.col3). For this reason I am looking for a way to do the job in only one UPDATE statement and compute MyFunction(a.x) only once per row.
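Since the posted statements are truncated, the following is only a sketch of the idea with placeholder columns (id, target, target2, x): compute MyFunction once per driving row inside the USING subquery, then let a single MERGE apply it to both target columns, restricting the second assignment with a CASE.

MERGE INTO table_b b
USING (SELECT a.id, a.col2, MyFunction(a.x) AS fx
       FROM   table_a a) s
ON (b.id = s.id)
WHEN MATCHED THEN UPDATE
  SET b.target  = s.fx,
      b.target2 = CASE WHEN s.col2 > b.col3 THEN s.fx ELSE b.target2 END;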


Performance Analysis And Optimization Tools

Jan 27, 2011

I am looking for tools for performance analysis and optimization for Oracle. So far I have looked at Spotlight, Ignite and Embarcadero DB Optimizer.


Long Select Operate On 5 Tables - Optimization?

Sep 4, 2013

I have a long SELECT which operates on 5 tables and has a lot of conditions in the WHERE clause (many combinations of values of just a few columns). Could reducing those conditions improve performance, or would it have only a small impact?

I think that if I have a lot of conditions on the same column, it doesn't take much time to check them, because the values are already in memory.
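That intuition is broadly right for rows already fetched; the bigger question is whether the conditions change the access path the optimizer picks. As a small illustration (orders and status are placeholder names), folding repeated equality tests on one column into an IN list says the same thing more cheaply and more readably, and what is then worth checking in the plan is whether any remaining condition defeats an index you expected to be used.

-- before: WHERE status = 'A' OR status = 'B' OR status = 'C'
SELECT *
FROM   orders
WHERE  status IN ('A', 'B', 'C');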


Backup & Recovery :: RMAN Optimization Parameter

Jul 29, 2011

I have a tablespace DP_TS_LOBS which stores only SecureFile BLOBs. DP_TS_LOBS has only one datafile, "LOB1.DBF". I have the RMAN optimization parameter on, and I also have a backup of the DP_TS_LOBS tablespace.

RMAN> backup tablespace dp_ts_lobs;

After storing a few BLOBs, the datafile "LOB1.DBF" got full and I added a new datafile, "LOB2.DBF". A few more BLOBs were stored in the DP_TS_LOBS tablespace. Then I tried to back up the DP_TS_LOBS tablespace again; as expected, both datafiles were backed up, as there were changes to both since the last backup.

I was expecting that with the optimization parameter on, if RMAN already has in a backup a datafile with the same DBID, checkpoint SCN, creation SCN, RESETLOGS SCN and time, and the datafile is offline, RMAN won't back up that datafile again.

After a few minutes, without performing any activity in the database, I put "LOB1.DBF" offline and executed the DP_TS_LOBS tablespace backup again, but I still see RMAN backing up both datafiles.

I was closely monitoring the CHECKPOINT_CHANGE# and CHECKPOINT_TIME columns of v$datafile; the values of those columns change for "LOB1.DBF" and "LOB2.DBF" when I execute the backup tablespace command on DP_TS_LOBS, even though there was no activity in the database to add, update or delete the BLOBs stored in the DP_TS_LOBS tablespace.

Initially I thought it could be due to some unwritten committed blocks of the DP_TS_LOBS tablespace in memory getting written to the datafiles while executing the backup tablespace command, which would explain the change in checkpoint SCN and checkpoint time, but it keeps happening every time I try, and I have tried multiple times.
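Worth confirming first that the setting is actually in force, since the skip test is strict (every one of those SCNs must match an existing backup on the same device type):

RMAN> SHOW BACKUP OPTIMIZATION;
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;

Also note that if the file went offline via ALTER TABLESPACE ... OFFLINE NORMAL, that operation itself checkpoints the tablespace's datafiles, so taking the file offline can be exactly what changes the checkpoint SCN and defeats the match.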


Oracle Warehouse Builder 11.2 - Deploy To New Repository?

Feb 24, 2012

I have a big problem with OWB 11.2.

1. I exported an MDL file from OWB 10.2.
2. I created an OWB repository on the server where my 11gR2 database is installed.
3. I installed OWB 11gR2 on another machine.
4. I imported the MDL file into the new repository.

How do I deploy to my new repository?


Query Optimization On Join With A View On Huge Table?

Jun 22, 2011

I have this table

create table ACTIONARI_ARH
(
actionar_id NUMBER(10) not null,
id VARCHAR2(20) not null,
id_2 VARCHAR2(20),
tip VARCHAR2(1),
nume VARCHAR2(100),
prenume VARCHAR2(100),
adresa VARCHAR2(200),

[code]....

and this view

CREATE OR REPLACE VIEW ACTIONARI AS
SELECT "ACTIONAR_ID","ID","ID_2","TIP","NUME","PRENUME","ADRESA","LOCALITATE","JUDET","TARA","CERT_DECES","DATA_REGISTRU" Data_operare,"USER_MODIF","DATA_MODIF","REZIDENT"
FROM (
select

[code]....

The table has about 30 million records and holds person names, addresses, a personal id (id), an internal id (actionar_id), and the date when a new address was added.

The view is about getting only the most recent info for one person (actionar_id).

If I run:

a) select * from actionari a where a.actionar_id = 'nnnnnnn', the result is returned immediately; Oracle uses the index and does not do a full table scan.

b) select * from actionari a where a.actionar_id in ('nnnnnnn','mmmmmm','ooooooo'), the result is returned immediately; Oracle uses the index and does not do a full table scan.

My problem is when I use this view in a join. Let's assume I have another table with no more than 500 records, something like

create table SMALL_TABLE
(
actionar_id NUMBER(10) not null,
......
);

and if i run

select *
from SMALL_TABLE s
join actionari a
on a.actionar_id = s.actionar_id;

it takes forever to process, where forever means 1 to 3 minutes. Looking at the execution plan, Oracle does a full table scan, builds the view result for all 7 million unique persons, and only then joins that result with the actionar_ids in the small table to return the desired 500-row result. I am using Oracle 10g.
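A common fix is to stop the "latest row per person" logic from running for everyone. Assuming the view's most-recent rule orders by DATA_REGISTRU (a guess based on the posted column list), pushing the small table's keys into the inner query keeps the analytic work down to the 500 ids of interest:

SELECT *
FROM (
  SELECT a.*,
         ROW_NUMBER() OVER (PARTITION BY a.actionar_id
                            ORDER BY a.data_registru DESC) AS rn
  FROM   actionari_arh a
  WHERE  a.actionar_id IN (SELECT s.actionar_id FROM small_table s)
)
WHERE rn = 1;

A similar effect can sometimes be had without rewriting, via the PUSH_PRED hint on the join to the view, but the explicit form above is easier to verify in the plan.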


Difference Between Backup Optimization And Change Tracking Block?

Sep 5, 2012

I am studying these two technologies, and the only difference I found was that backup optimization doesn't back up duplicate archived logs.

If both ignore unchanged blocks, which one is more effective?

Does backup optimization also have some kind of tracking file?
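They actually solve different problems: backup optimization skips whole files (datafiles, archived logs, backup sets) whose identical copy already exists in a backup, while block change tracking records changed blocks in a dedicated tracking file so that incremental backups can skip reading unchanged blocks. Only the latter uses a tracking file. Both are enabled with one command each (the tracking file path below is illustrative):

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
     USING FILE '/u01/app/oracle/bct.f';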


RMAN :: Optimization - Ensure Skip Those Files Already Taken Backup

Feb 28, 2013

I would like to know: if I enable backup optimization, will an incremental full backup skip any files that were backed up earlier? As we know, backup optimization ensures that RMAN skips files it has already backed up.

Database : oracle 10g 10.2.0.3 .


SQL & PL/SQL :: Handling Circular Data In Oracle / Get Disjoint Sets Of Data Connected By 2 Values

Sep 29, 2011

I have this table :

column1 column2
--------- ---------
value1 value2
value1 value3
value2 value4
value3 value7
value7 value1
value8 value9

What I was trying to retrieve is something like this:

output_column
---------------
value1, value2, value3, value4, value7
value8, value9

I don't care about the order of the values in the row. In other words, I want to get disjoint sets of data connected by either of the two values. Every pair in the input table is unique.

I have seen on the web that it is possible using CONNECT BY and hierarchical retrieval, but I have tried a lot of combinations and I cannot reproduce the output.
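A sketch of one CONNECT BY approach, assuming the pairs live in a table called PAIRS(column1, column2) and that LISTAGG (11gR2+) is available for the final string; on older releases substitute a custom string aggregate. The idea: make the edges undirected, walk from every row with NOCYCLE, and label each node by the smallest value that can reach it, which is the same for every member of a connected set.

WITH edges AS (
  SELECT column1 AS a, column2 AS b FROM pairs
  UNION
  SELECT column2, column1 FROM pairs
),
reach AS (
  SELECT CONNECT_BY_ROOT a AS root, b AS node
  FROM   edges
  CONNECT BY NOCYCLE PRIOR b = a
  UNION
  SELECT a, a FROM edges
),
labeled AS (
  SELECT node, MIN(root) AS component
  FROM   reach
  GROUP  BY node
)
SELECT LISTAGG(node, ', ') WITHIN GROUP (ORDER BY node) AS output_column
FROM   labeled
GROUP  BY component;

This enumerates many redundant paths on dense graphs, so it suits modest data sets like the example rather than millions of edges.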


Any Data Compression Wizards That Automatically Suggest Level At Which Data Should Be Compressed

Jun 29, 2011

Does Oracle have any capability to automatically choose which lossless compression algorithm to apply by analyzing a data stream at load time? Does Oracle have any compression advisors/wizards that would make recommendations as to the type and level of compression?
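Oracle does not pick an algorithm per data stream at load time, but from 11gR2 there is a Compression Advisor, DBMS_COMPRESSION.GET_COMPRESSION_RATIO, which samples an existing table and estimates the ratio for a chosen compression type. A sketch; the schema, table and scratch tablespace names are placeholders:

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',
    ownname        => 'SCOTT',
    tabname        => 'SALES_FACT',
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated ratio: ' || l_cmp_ratio || ' (' || l_comptype_str || ')');
END;
/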


Store Data In CLOB Data Type - How To Create A Unique Index

May 20, 2013

We have been recommended to store the data in a CLOB data type.

Sample data: 1:2:2000000:20000:4455:000099:444:099999:.... etc.; it will grow to a large number.

We want to create a unique index, for functional reasons. Is it advisable to create a unique index on a CLOB datatype?
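A B-tree unique index cannot be created directly on a CLOB column. One common workaround, sketched here with illustrative names (doc_store, payload): keep a digest of the CLOB in a normal RAW column maintained by a trigger, and put the unique index on that. It requires EXECUTE on DBMS_CRYPTO, and it assumes the CLOB is written by ordinary full-column INSERT/UPDATE rather than piecewise DBMS_LOB calls, which would bypass the trigger.

CREATE TABLE doc_store (
  doc_id       NUMBER PRIMARY KEY,
  payload      CLOB,
  payload_sha1 RAW(20)
);

CREATE UNIQUE INDEX ux_doc_payload ON doc_store (payload_sha1);

CREATE OR REPLACE TRIGGER trg_doc_hash
  BEFORE INSERT OR UPDATE OF payload ON doc_store
  FOR EACH ROW
BEGIN
  :NEW.payload_sha1 := DBMS_CRYPTO.HASH(:NEW.payload, DBMS_CRYPTO.HASH_SH1);
END;
/

Strictly speaking two different CLOBs could collide on the hash, but with SHA-1 that is not a practical concern; and if the values are guaranteed to stay short, a plain VARCHAR2 column with a normal unique index may be simpler than a CLOB in the first place.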


Data Guard :: Unable To Get Data Of Primary In Standby Database (dataguard)

Jan 16, 2013

I have configured a physical standby on my local system. To check log shipping I created a table on the primary DB; when I tried to check it on the standby, it says the table does not exist. Below are the primary and standby alert log entries.

Primary alert log

Fatal NI connect error 12514, connecting to:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.98)(PORT=1522))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=STAND)(SERVER=dedicated)(CID=(PROGRAM=d:\oracle11g\app\administrator\product\11.1.0\db_1\bin\ORACLE.EXE)(HOST=A960M)(USER=SYSTEM))(SERVER=dedicated)))
VERSION INFORMATION:
TNS for 64-bit Windows: Version 11.1.0.6.0 - Production

[code]....
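The 12514 on the primary means the listener at 172.16.0.98:1522 does not know a service called STAND, so redo never ships; fixing the standby's listener registration (or the service string in log_archive_dest_2) comes first. The standard views to check from SQL*Plus, as a sketch:

-- on the standby: is managed recovery running, and what has arrived/applied?
SELECT process, status, sequence# FROM v$managed_standby;
SELECT MAX(sequence#) FROM v$archived_log WHERE applied = 'YES';

-- on the primary: any shipping errors on the standby destination?
SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;

Also remember a physical standby only shows the new table once the redo that created it has been applied, and only when the database is open read-only.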


SQL & PL/SQL :: Remove Data Recently Loaded But Not Disturbing Existing Original Data?

Jun 24, 2010

I have a few tables in Oracle 9i/10g, and they already have data in them. I am trying to migrate data coming from various source systems into these Oracle tables. There is a chance that after loading I might get some unwanted data in these tables.

How do I remove just the data which I have loaded recently, without disturbing the original data the tables already had?

I could back up those tables and reload the data if there is any problem, but I am looking for a different approach. I just don't want to change the existing system, as a lot of users use it.
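One low-impact pattern, sketched with illustrative names (target_tab, load_batch_id, staging_tab): tag every load with a batch id so a bad load can be removed exactly, leaving older rows untouched. Adding a column is a small schema change, but existing queries are unaffected.

ALTER TABLE target_tab ADD (load_batch_id NUMBER);

-- during each load, stamp the rows with that load's id:
INSERT INTO target_tab (col1, col2, load_batch_id)
SELECT col1, col2, 42 FROM staging_tab;

-- if load 42 turns out to be bad:
DELETE FROM target_tab WHERE load_batch_id = 42;

On 10g, FLASHBACK TABLE ... TO TIMESTAMP is an alternative for recent mistakes, but it rolls back all changes since that time, not just one load.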


Server Utilities :: Transferring Changed Data From Database A To B By Data Pump?

Apr 1, 2011

I have database A (working in a live environment) and database B, a copy of database A (not live). Last week I restored a whole RMAN backup of database A onto database B. Now I don't want to change anything in any schema; I only want to import the updated and new records into the tables in database B.

There are around 20 schemas. I already have all the required database objects in database B: procedures, functions, packages, indexes on all tables, and data in the tables. I just want to add the new data and the updated data.

If I do the following in the source database:

expdp directory=dpump_dir dumpfile=table_data.dmp content=data_only schemas=ACCMAIN,HRMAIN,..... include=TABLE

and import into destination database B, will it add the new data and update the existing data in the tables, without touching the table structure and indexes?
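A sketch of the import side (credentials elided): with CONTENT=DATA_ONLY the table structures and indexes are untouched, and TABLE_EXISTS_ACTION=APPEND inserts the dump's rows into the existing tables. Note that Data Pump appends rather than merges: rows that already exist in B are not updated, and re-imported rows will duplicate unless constraints reject them, so this alone does not give "update existing" semantics.

impdp system@dbB directory=dpump_dir dumpfile=table_data.dmp content=data_only table_exists_action=append schemas=ACCMAIN,HRMAIN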


Data Guard :: Changing Dbname / Sid / Data / Control File Locations?

Nov 5, 2010

I want to change the db name, SID, and data/control file locations in an operational Data Guard setup. I plan to proceed as below:

1) Shut down primary and standby (stop managed recovery)

2) Change the db name in the init.ora of primary and standby; change the database name and control file locations

3) Create a control file for the primary from trace (script), making the changes for db name and file locations

4) Mount and open the primary database

5) Create a standby control file

6) Transfer the standby control file to the standby

7) Mount the standby database and start managed recovery

If these steps are error-free, do I need to do anything in addition, is there a better way to do this, or is it not possible at all?
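Worth knowing: the DBNEWID utility (nid) changes the database name without hand-editing a control file script; a sketch, run against a mounted (not open) database:

nid TARGET=sys DBNAME=newname SETNAME=YES

SETNAME=YES keeps the DBID and changes only the name. After nid you still update DB_NAME in the spfile/init.ora, and because the standby's control file no longer matches, you would typically recreate the standby control file from the renamed primary and re-establish recovery. Moving the data and control file locations remains a separate step (rename the files while mounted).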


SQL & PL/SQL :: Delete Previously Stored Data And Enter Only Current Data In A Table?

Jul 24, 2012

How to insert data into a table, deleting the previously entered data and inserting only the current data, like:

CREATE TABLE test
(
name VARCHAR2(20),
id NUMBER
)
INSERT INTO test VALUES ('aaa',5500);

[code]....

I got two rows. Now when I do an insert statement I want to delete the previously stored data and insert only the current data, like:

INSERT INTO test VALUES ('aaa',8);
INSERT INTO test VALUES ('aaa',9);

It must show aaa,8 and aaa,9 but not the previous values.

NOTE: we cannot do something like UPDATE ... SET ... WHERE id = ... because the values are dynamic.
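A sketch of one way to get those semantics (the procedure name is illustrative; SYS.ODCINUMBERLIST is a built-in collection type): wrap the refresh in a procedure so the old rows for a name are deleted and the new batch inserted in a single transaction.

CREATE OR REPLACE PROCEDURE refresh_test(p_name IN test.name%TYPE,
                                         p_ids  IN SYS.ODCINUMBERLIST) IS
BEGIN
  DELETE FROM test WHERE name = p_name;

  INSERT INTO test (name, id)
  SELECT p_name, column_value FROM TABLE(p_ids);

  COMMIT;
END refresh_test;
/

-- usage:
EXEC refresh_test('aaa', SYS.ODCINUMBERLIST(8, 9));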


ODP.NET :: Cannot Find Oracle Data Source In Entity Data Model Wizard

Dec 19, 2012

I'm trying to add an edmx file to my project for the first time. I want to choose the Oracle provider, ODP.NET, but cannot find Oracle in the data source list. I have Oracle 11g installed, with ODP and ODT installed, and I can access the database from the solution as well. I saw Oracle listed under data sources when I connected the solution to the database through Server Explorer. The solution is connected to the Oracle database through ODP.


Data Guard :: Buffer Busy Waits On UNDO Data In Active DG

Feb 24, 2013

Oracle Version: 11.1.0.7.0
Active Dataguard

Statspack has been configured for Active Data Guard on the primary database. We got a spike of buffer busy waits for about 5 minutes on the Active Data Guard standby, which was causing worse application SQL response times during that 5-minute window. Below is what I got from the statspack report for one hour:

Snapshot       Snap Id     Snap Time      Sessions Curs/Sess Comment
~~~~~~~~    ---------- ------------------ -------- --------- -------------------
Begin Snap:      18611 21-Feb-13 22:00:02      236       2.2
  End Snap:      18613 21-Feb-13 23:00:02      237       2.1
   Elapsed:               60.00 (mins)
[code]...

Why could there be a sudden spike of demand on UNDO data in Active Data Guard?
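Read-only queries on the standby still need undo to build consistent-read block versions, and managed recovery is continuously rewriting those same undo blocks as it applies redo, so a burst of DML on the primary during those 5 minutes is the first thing to correlate. Two standard places to look on the standby, as a sketch (note v$active_session_history requires the Diagnostics Pack license):

-- which class of block the busy waits were on (look for undo header / undo block):
SELECT class, count, time FROM v$waitstat ORDER BY count DESC;

-- ASH during the window: which file/block the sessions were stuck on
SELECT p1 AS file#, p2 AS block#, COUNT(*)
FROM   v$active_session_history
WHERE  event = 'buffer busy waits'
GROUP  BY p1, p2
ORDER  BY COUNT(*) DESC;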


Forms :: First Form Clear Previous Data And Then Populate Data From Table

May 7, 2012

When the WHEN-BUTTON-PRESSED trigger fires, I want the form to first delete all the previous data and then populate the data from the table. That's why I used CLEAR_BLOCK first, but CLEAR_BLOCK is not working here. My code is given below:

go_block('show');
clear_block(NO_VALIDATE);

declare
  cursor c1 is
    select * from qtr_demand order by 1;
begin
[code]..........


Server Utilities :: Data Pump For Exporting And Importing Extremely Large Data Files

Sep 24, 2010

I am considering the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether writing to tape is possible, and if so, whether the data would be accessible if needed later.


Performance Tuning :: Split Data Separated By Comma / Then Create Collection With Data Mapped From Other Columns

Sep 25, 2013

DB Used : Oracle 10g.

A table X, with columns NUM and INST:

NUM ----- INST

1234 ----- 23,22,21,78
2235 ----- 20,7,2,1
1298 ----- 23,22,21,65,98
9087 ----- 20,7,2,1

Based upon the requirement:

1) Split the values from the INST column: suppose 23.
2) Find all values from the NUM column for the split value, i.e. 23.

E.g.:

For INST 23, its corresponding NUM values are 1234 and 1298.

3) Save these values into a table Y, with columns INST and NUM:

INST NUM
23 1234,1298

1) I have a thousand records in table X, and for all of those records I need to split and save the data into table Y. Hence, I need to do this task with the best possible performance.

2) After this, whenever new data comes into table X, the above "split & save" operation should automatically be called and the corresponding data appended wherever possible.
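A sketch of the set-based version. The splitting part is 10g-compatible, but the final string aggregation uses LISTAGG, which needs 11gR2; on 10g substitute a custom string aggregate. The row generator caps lists at 100 elements, an assumption to adjust:

INSERT INTO y (inst, num)
SELECT inst,
       LISTAGG(num, ',') WITHIN GROUP (ORDER BY num) AS num
FROM (
  SELECT x.num,
         TRIM(REGEXP_SUBSTR(x.inst, '[^,]+', 1, n.l)) AS inst
  FROM   x,
         (SELECT LEVEL l FROM dual CONNECT BY LEVEL <= 100) n
  WHERE  n.l <= LENGTH(x.inst) - LENGTH(REPLACE(x.inst, ',')) + 1
)
GROUP BY inst;

For requirement 2, a scheduled DBMS_SCHEDULER job that reprocesses only new X rows (tracked by a timestamp or a processed flag) is usually safer than a row-level trigger, since the aggregation spans many rows.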


Precompilers, OCI & OCCI :: Data Lost After Adding Null Terminator For VARCHAR Data Received From Database

Dec 13, 2010

We are migrating a Pro*C application as described below.

Old Env: UNIX
Old DB: Oracle 8i

New Env: Linux
New DB: Oracle 11g

The new modules compile successfully in the Linux environment, but we are facing issues writing the output of a VARCHAR datatype to a file.

Find below an extract of the code.
EXEC SQL BEGIN DECLARE SECTION;
varchar mcolmnvarchar[4];
EXEC SQL END DECLARE SECTION;

EXEC SQL DECLARE crs CURSOR FOR
SELECT NVL(colmn, ' ') FROM table1;

memset(mcolmnvarchar.arr, '\0', 4); // added only for the Linux migration; not present in the Unix version

EXEC SQL FETCH crs INTO :mcolmnvarchar;

cout << "Data at Stage one" << mcolmnvarchar.arr << endl;
mcolmnvarchar.arr[mcolmnvarchar.len] = '\0'; // note: if len == 4 this writes one byte past the end of arr
cout << "Data at Stage two" << mcolmnvarchar.arr << endl;
fprintf(fptr, "%-4s", mcolmnvarchar.arr);

The above code works absolutely fine in the Unix environment with Oracle 8i, but in the Linux environment with Oracle 11g it does not: there are no compilation or runtime errors, and the "Data at Stage one" line prints the database value properly, but after the null-terminator assignment the "Data at Stage two" statement prints no value. The value is lost once the null terminator is assigned.


Transfer Data To SQL Server 2008 / Arabic Data Showing

Jan 15, 2013

I am trying to transfer data from an Oracle database table into another table that resides in SQL Server 2008. A database link is already set up, and data can be selected/queried using

select * from a_table@db_link

The thing is, when trying to insert Arabic data from Oracle into the SQL Server table:

insert into a_table@db_link values (
  N'some data in Arabic',
  '11-oct-00',
  'some data in English'
);
commit;

It is inserted all right, but when we try to display the data, the Arabic shows as garbage instead of Arabic text; the English data is fine, but the date also shows as garbage.

When inserting Arabic data directly through SQL Server it is fine, though. I suggested that we transfer the data through ODBC to flat files like Access and then to SQL Server, but the team rejected that, since they are going to do it daily and the data is huge (we are talking more than 28,000 records).
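Garbage on the SQL Server side usually points at character-set conversion between the Oracle client, the gateway, and SQL Server rather than at the data itself. Two gateway parameters worth checking in the gateway init file; the file name and values below are illustrative for an Arabic setup, not prescriptive:

# e.g. in initdg4msql.ora (or the ODBC gateway's init file)
HS_LANGUAGE=AMERICAN_AMERICA.AR8MSWIN1256
HS_NLS_NCHAR=UCS2

For the date, sending a proper DATE instead of the string '11-oct-00' (e.g. TO_DATE('11-oct-00','DD-MON-RR')) removes the dependence on NLS_DATE_FORMAT on either side.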
