My data warehouse application involves partitioned tables where indexes on the most recent partition are initially unusable and are only built once the next partition is created. Our users query this table through a tool that has an option "include not indexed data", which essentially tells the tool whether to include that last partition in the query. If this is checked and they are filtering on one of the indexed fields, there is the potential for an Oracle error stating it tried to use an unusable index, so our tool basically builds the query like this:
select ... from (
select ... from table where partition_key < (last usable partition key)
union
select /*+ NO_INDEX */ ... from table where partition_key >= (last usable partition key)
)
where
index_field = :value
I have had a difficult time getting reasonable data to test this myself, so I'm asking the question here:
Is Oracle probably pushing that outer filter into the individual queries inside the UNION? If we were to move the index_field filter into each of the individual queries in the union ourselves, would it make a difference performance-wise?
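In many cases the optimizer can push a simple bind/constant predicate on a view column into each branch of a UNION view; whether it actually did can only be confirmed by looking at the execution plan (the filter shows up inside both branches). For illustration, a hedged sketch of the manually pushed form, using placeholder names (SALES_FACT, SALE_DATE and CUSTOMER_ID are not from the real schema):

select *
  from (select *
          from sales_fact
         where sale_date < date '2024-01-01'   -- last usable partition key
           and customer_id = :value
        union
        select /*+ no_index(sales_fact) */ *
          from sales_fact
         where sale_date >= date '2024-01-01'
           and customer_id = :value);

Comparing the plan and buffer gets of this form against the original query is the most reliable way to answer the question for your data.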
I created a data warehouse in Oracle 10g with three dimensions and one cube; after that it created 4 tables. How do I use an INSERT SQL statement to insert data into those tables, and how do I access them afterwards?
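For illustration only, a minimal sketch with hypothetical table and column names (the actual names depend on how the dimensions and cube were generated): the dimension rows are inserted first, then the fact/cube row references them by key, and the data is accessed by joining the fact table back to its dimensions.

insert into time_dim (time_id, year_val, month_val)
values (1, 2024, 1);

insert into product_dim (product_id, product_name)
values (100, 'Example product');

insert into sales_cube (time_id, product_id, sales_amount)
values (1, 100, 2500);

commit;

-- Access the data by joining the cube/fact table to its dimensions:
select t.year_val, p.product_name, sum(c.sales_amount)
  from sales_cube  c
  join time_dim    t on t.time_id    = c.time_id
  join product_dim p on p.product_id = c.product_id
 group by t.year_val, p.product_name;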
OLTP DB --> OLTP DB (physical standby, Active Data Guard) --> data warehouse DB
We are only allowed to connect to the OLTP DB (physical standby, Active Data Guard) from the data warehouse DB. Is there a possibility of using some "native" Oracle method of data extraction (replication) from the OLTP standby into the data warehouse DB?
As far as I know we cannot create a materialized view log on the OLTP DB (physical standby, Active Data Guard) in order to do data replication, but maybe there are some other ways?
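Since materialized view logs cannot be created on a read-only standby, one hedged fallback is a periodic pull over a database link driven from the warehouse side, assuming the source table carries a modification timestamp. All names below (STANDBY_LINK, SRC_ORDERS and the columns) are placeholders, not taken from the real system:

merge into dw_orders d
using (select order_id, status, last_modified
         from src_orders@standby_link
        where last_modified > :last_extract_time) s
   on (d.order_id = s.order_id)
 when matched then update
      set d.status = s.status, d.last_modified = s.last_modified
 when not matched then insert (order_id, status, last_modified)
      values (s.order_id, s.status, s.last_modified);

This works only if changed rows can be identified on the source (timestamp, sequence, etc.); otherwise a full refresh per cycle is the fallback.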
I am trying to build a data warehouse for the Consumer Price Index, and so I have downloaded data from the Bureau of Statistics. It is in Excel format, and since I am working with Oracle Warehouse Builder I have converted it to a .csv file so that I can use it as a data source.
Question 1: Is it practical to use a single .csv file as the source of data for a data warehouse?
Question 2: I have 3 dimension tables and a fact table. The dimensions are: one for the region (as the data is organized by region, state, etc.), one for consumer goods and services (as the data is organized into groups of goods and services, and goods/services types), and finally time (year and month).
Now how am I going to do the mapping here? Is it possible to do a one-to-one mapping, as all the data required by the dimensions is located in the .csv file?
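As a point of comparison, the same CSV can also be exposed to the database directly as an external table and mapped from there; OWB's flat-file module achieves much the same thing through the UI. This is only a sketch with placeholder directory, file and column names:

create or replace directory cpi_dir as '/data/cpi';

create table cpi_ext (
  region_name  varchar2(100),
  goods_group  varchar2(100),
  year_val     number(4),
  month_val    number(2),
  index_value  number
)
organization external (
  type oracle_loader
  default directory cpi_dir
  access parameters (
    records delimited by newline
    skip 1
    fields terminated by ','
  )
  location ('cpi.csv')
);

Each dimension mapping can then select and de-duplicate its own columns from this one source, so a single file feeding several dimensions and a fact table is perfectly workable.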
Before I begin, I want to clarify that I am a newbie in data warehouse administration. I need to know how to calculate the sizes of the archive and redo logs on a data warehouse DB, in order to make an initial sizing of the DB at the disk level.
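There is no single formula, but one hedged starting point is to measure how much archived redo a comparable (test or existing) instance produces per day, then size the archive destination from that figure plus the retention window needed:

select trunc(completion_time)                        as day,
       round(sum(blocks * block_size) / 1024 / 1024) as archived_mb
  from v$archived_log
 group by trunc(completion_time)
 order by day;

The daily archived MB, multiplied by the number of days you must keep on disk (plus headroom for batch-load peaks, which in a warehouse can dwarf the average), gives a first-cut archive destination size; online redo logs are then sized so that switches happen roughly every 15-20 minutes at peak load.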
We are working on a data warehouse (around 50 GB) architecture with the following acquired environment:
Single server X3650 M4, dual CPU (16 cores in total), 48 GB RAM
Oracle 10g Standard Edition x64 on Windows 2008 x64
128 GB SSD x 8
IBM ServeRAID M5110e SAS/SATA controller
Due to budget concerns, we will be running the app server (BusinessObjects 4.0 with Tomcat) and the DB server on the same machine. We have a user base of around 30 people on the app server.
We intend to have external redundancy using the IBM RAID card in a RAID 10 configuration. I wonder what kind of disk configuration yields better performance if we only have write updates in the morning and 95% reads for the rest of the day?
RAID 1 for OS (128 GB SSD x 2, including DB logfile)
RAID 10 for DB server (128 GB SSD x 6)
I have heard that ASM provides better disk management, but I wonder whether it increases performance in any way.
The following code is a stored procedure I plan to use to populate a Data Warehouse dimension using data from two OLTP tables which already exist in my database. Notice that in my cursor select statement, I calculate an attribute using substr and instr, and I also assign a true or false value to a flag using a CASE statement.
CREATE OR REPLACE PROCEDURE populate_product_dimension AS
  v_Count   NUMBER := 0;
  v_NumRecs NUMBER;
  /* Declare a cursor on the following query, which returns multiple rows
     of data from the product and price_hist tables */
  [code]....
In my mind, Product_Code is declared correctly in the Cursor declaration Select statement.
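Since the procedure body is cut off above, here is only a hedged sketch of the pattern being described; the PRODUCT/PRICE_HIST column names and the dimension columns are assumptions, not the real definitions:

CREATE OR REPLACE PROCEDURE populate_product_dimension AS
  CURSOR c_prod IS
    SELECT p.product_id,
           -- derive a code from the part of the description before the '-'
           SUBSTR(p.description, 1, INSTR(p.description, '-') - 1) AS product_code,
           CASE WHEN h.price > 0 THEN 'T' ELSE 'F' END             AS active_flag
      FROM product p
      JOIN price_hist h ON h.product_id = p.product_id;
BEGIN
  FOR r IN c_prod LOOP
    INSERT INTO product_dim (product_id, product_code, active_flag)
    VALUES (r.product_id, r.product_code, r.active_flag);
  END LOOP;
  COMMIT;
END populate_product_dimension;
/

The key point is that the SUBSTR/INSTR expression and the CASE flag are simply aliased columns of the cursor, so they can be referenced by name (r.product_code, r.active_flag) inside the loop.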
I have the following situation. There are two selects in the update, which look like this:
UPDATE TABLE_B B SET ( target
[Code]....
DB: Oracle 10g
MyFunction is relatively expensive, and the second select works on a subset of the data considered in the first one (because of a.col2 > b.col3). For this reason I am looking for a way to do the job in only one update statement and compute MyFunction(a.x) only once per row.
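Only a hedged sketch, since the full statements are cut off above: the idea is to compute MyFunction(a.x) once in the USING subquery of a single MERGE, and let a CASE in the SET clause decide where it applies. The ID, TARGET1, TARGET2 columns and the join condition below are placeholders:

merge into table_b b
using (select a.id, a.col2, myfunction(a.x) as fx
         from table_a a) a
   on (b.id = a.id)
 when matched then update
      set b.target1 = a.fx,
          b.target2 = case when a.col2 > b.col3 then a.fx else b.target2 end;

Because the function is evaluated in the USING query, it runs once per source row regardless of how many target columns the result feeds.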
I am looking for some tools for performance analysis and optimization for Oracle. So far I have looked at Spotlight, Ignite and Embarcadero DB Optimizer.
I have a long select which operates on 5 tables and has a lot of conditions in the where clause (many combinations of values of just a few columns). Would reducing those conditions improve performance, or would it have only a small impact?
I think that if I have a lot of conditions on the same column, it doesn't take a lot of time to check them because the values are in memory.
I have a tablespace DP_TS_LOBS and it stores only SecureFile BLOBs. DP_TS_LOBS has only one datafile, "LOB1.dbf". I have the RMAN backup optimization parameter on. I also have a backup of the DP_TS_LOBS tablespace.
RMAN> backup tablespace dp_ts_lobs;
After storing a few BLOBs the datafile "LOB1.DBF" got full, so I added a new datafile, "LOB2.DBF". A few more BLOBs were stored in the DP_TS_LOBS tablespace. Then I tried to back up the DP_TS_LOBS tablespace again; as expected, both datafiles were backed up, as there were changes to both datafiles since the last backup.
I was expecting that with the optimization parameter on, if RMAN already has a backup of a datafile with the same DBID, checkpoint SCN, creation SCN, RESETLOGS SCN and time, and the datafile is offline, RMAN won't back up that datafile again.
After a few minutes, without performing any activity in the database, I took "LOB1.DBF" offline and executed the DP_TS_LOBS tablespace backup again, but I still see RMAN backing up both datafiles.
I was closely monitoring the CHECKPOINT_CHANGE# and CHECKPOINT_TIME columns of v$datafile. The values of those columns change for datafiles "LOB1.DBF" and "LOB2.DBF" whenever I execute the backup tablespace command on DP_TS_LOBS, even though there was no activity in the database to add, update or delete the BLOBs stored in the DP_TS_LOBS tablespace.
Initially I thought it could be due to some unwritten committed blocks of the DP_TS_LOBS tablespace in memory being written to the datafiles while executing the backup tablespace command, and that that was why I saw the change in checkpoint SCN and checkpoint time, but it keeps happening every time I try, and I have tried multiple times.
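One way to cross-check from the controlfile, rather than relying only on the RMAN output, is to look at what checkpoint SCN each backed-up copy of the file actually carries after each run (this is just a diagnostic suggestion, not an explanation of the behaviour):

select file#, checkpoint_change#, checkpoint_time, completion_time
  from v$backup_datafile
 order by completion_time desc;

If the checkpoint SCN recorded for LOB1.DBF differs between the two backups, RMAN is treating the file as changed, which would explain why backup optimization does not skip it.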
1. I have exported an MDL file from OWB 10.2.
2. I created an OWB repository on the server where my 11gR2 database is installed.
3. I have installed OWB 11gR2 on another machine.
4. I imported the MDL file into the new repository.
create table ACTIONARI_ARH
(
  actionar_id NUMBER(10) not null,
  id          VARCHAR2(20) not null,
  id_2        VARCHAR2(20),
  tip         VARCHAR2(1),
  nume        VARCHAR2(100),
  prenume     VARCHAR2(100),
  adresa      VARCHAR2(200),
[code]....
and this view
CREATE OR REPLACE VIEW ACTIONARI AS
SELECT "ACTIONAR_ID", "ID", "ID_2", "TIP", "NUME", "PRENUME", "ADRESA",
       "LOCALITATE", "JUDET", "TARA", "CERT_DECES",
       "DATA_REGISTRU" Data_operare,
       "USER_MODIF", "DATA_MODIF", "REZIDENT"
FROM (
  select
[code]....
The table has about 30 million records and holds persons' names, addresses, personal id (id), internal id (actionar_id), and the date when a new address was added.
The view is about getting only the most recent info for one person (actionar_id).
If I run:
a) select * from actionari a where a.actionar_id = 'nnnnnnn', the result is returned immediately; Oracle uses the index and does not do a full table scan.
b) select * from actionari a where a.actionar_id in ('nnnnnnn','mmmmmm','ooooooo'), the result is returned immediately; Oracle uses the index and does not do a full table scan.
My problem is when I use this view in a join. Let's assume I have another table with no more than 500 records, something like
create table SMALL_TABLE ( actionar_id NUMBER(10) not null, ...... );
and if i run
select * from SMALL_TABLE s join actionari a on a.actionar_id = s.actionar_id;
it takes forever to process, where forever means 1~3 minutes. Looking at the execution plan, Oracle does a full table scan, builds the view result for all 7 million unique persons, and only then joins that result with the actionar_id's in the small table to return the desired 500-record result. I am using Oracle 10g.
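The view body is cut off above, so this is only a hedged rewrite of the idea: drive from the small table and resolve the "most recent row per person" with a correlated lookup against the indexed base table, instead of letting Oracle materialize the view for every person first. It assumes DATA_REGISTRU is the column that defines "most recent", which may not match the real view logic:

select a.*
  from small_table s
  join actionari_arh a
    on a.actionar_id = s.actionar_id
 where a.data_registru = (select max(b.data_registru)
                            from actionari_arh b
                           where b.actionar_id = a.actionar_id);

Because both the join and the correlated subquery filter on actionar_id, the optimizer can use the index per small-table row rather than scanning all 30 million rows.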
I would like to know: if I enable backup optimization, will an incremental or full backup skip any files that were backed up earlier? As far as we know, backup optimization ensures that files which have already been backed up (and are unchanged) are skipped.
I don't care about the order of the values in the row. In other words, I want to get disjoint sets of data connected by either of the two values. Every pair in the input table is unique.
I have seen on the web that it is possible to do this using CONNECT BY and hierarchical retrieval, but I've been trying a lot of combinations and I cannot reproduce the output.
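For illustration, a hedged sketch assuming the input table is called PAIRS(VAL1, VAL2) (placeholder names): each pair is expanded in both directions plus a self-edge per value, CONNECT BY NOCYCLE then walks every chain, and the smallest value reachable from each node is taken as its group id, so every disjoint connected set ends up with one id. Note this approach can get expensive on large tables.

with edges as (
  select val1 as a, val2 as b from pairs
  union all
  select val2, val1 from pairs
  union all
  select val1, val1 from pairs
  union all
  select val2, val2 from pairs
)
select b                      as val,
       min(connect_by_root a) as group_id
  from edges
connect by nocycle prior b = a
 group by b
 order by group_id, val;

Joining this result back to PAIRS on either value then labels each original row with the group it belongs to.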
I would like to know whether Oracle has any capability of automatically choosing which lossless compression algorithm to apply by analyzing a data stream on data load. Does Oracle have any compression advisors/wizards that would make recommendations as to the type and level of compression?
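One related facility worth noting is the compression advisor in DBMS_COMPRESSION (11gR2+): it does not pick an algorithm automatically, but it samples a table and estimates the ratio for a compression type you specify. A hedged sketch, with placeholder schema, table and scratch tablespace names, and parameter names per the 11gR2 signature:

set serveroutput on
declare
  l_blkcnt_cmp    pls_integer;
  l_blkcnt_uncmp  pls_integer;
  l_row_cmp       pls_integer;
  l_row_uncmp     pls_integer;
  l_cmp_ratio     number;
  l_comptype_str  varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'USERS',
    ownname        => 'DW',
    tabname        => 'SALES_FACT',
    partname       => null,
    comptype       => dbms_compression.comp_for_oltp,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line('Estimated ratio: ' || l_cmp_ratio || ' (' || l_comptype_str || ')');
end;
/

Running it once per candidate compression type and comparing the estimated ratios is the closest built-in equivalent to a "which compression should I use" recommendation that I am aware of.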
I have configured a physical standby on my local system. To check log shipping I created a table on the primary DB; when I tried to check it on the standby, it says the table does not exist. Below are the primary and standby alert log entries.
Primary alert log
Fatal NI connect error 12514, connecting to: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.98)(PORT=1522))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=STAND)(SERVER=dedicated)(CID=(PROGRAM=d:\oracle11g\app\administrator\product\11.1.0\db_1\bin\ORACLE.EXE)(HOST=A960M)(USER=SYSTEM))(SERVER=dedicated)))
VERSION INFORMATION: TNS for 64-bit Windows: Version 11.1.0.6.0 - Production
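The ORA-12514 above means the listener on 172.16.0.98:1522 does not currently know a service called STAND, so the primary is probably failing to ship redo at all. A few standard checks (the views are standard; which destination is which is an assumption for your setup):

-- On the standby: is redo arriving and being applied?
select sequence#, applied from v$archived_log order by sequence#;
select process, status, sequence# from v$managed_standby;

-- On the primary: is the archive destination for the standby in error?
select dest_id, status, error from v$archive_dest where status <> 'INACTIVE';

The table will only appear on the standby once the redo that contains its creation has been shipped and applied, so fixing the service registration/listener issue should come first.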
I have a few tables in Oracle 9i/10g, and they already have data in them. I am trying to migrate data coming from various source systems into these Oracle tables. There is a chance that after loading I might get some unwanted data into these tables.
How do I remove just the data which I have loaded recently, without disturbing the original data the tables already had?
I could back up those tables and reload the data if there is any problem, but I am looking at a different approach. I just don't want to change the existing system, as a lot of users use it.
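Two hedged options that avoid a full reload (table and column names below are placeholders): stamp each load with a batch id so only that batch can be deleted, or, if undo retention reaches back far enough, flash the table back to just before the load.

-- Option 1: tag the load (requires adding one column, if that is acceptable).
alter table target_table add (load_batch_id number);

insert into target_table (col1, col2, load_batch_id)
select col1, col2, :batch_id from staging_table;

-- Back out only that load if it turns out to be bad:
delete from target_table where load_batch_id = :batch_id;

-- Option 2 (10g+, needs ENABLE ROW MOVEMENT and sufficient undo):
-- flashback table target_table to timestamp systimestamp - interval '1' hour;

Option 1 is the safer long-term pattern for repeated loads; option 2 only works shortly after the load and rolls back every change to the table in that window, not just yours.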
I have database A (working in a live environment) and database B, a copy of database A (not live). Last week I restored a whole-database RMAN backup of A onto database B. Now I don't want to change anything in any schema; I only want to import the updated and new records into the tables in database B.
There are around 20 schemas. Database B already has all the required database objects - procedures, functions, packages, indexes on all tables, and data in the tables; I just want to bring in the new and updated data.
I'm trying to add an .edmx file to my project for the first time. I want to choose the Oracle provider ODP.NET but cannot find Oracle in the data source list. I have Oracle 11g installed, ODP and ODT installed, and I can access it from the solution as well. I saw Oracle listed under data sources when I tried to connect the solution to the database through Server Explorer. The solution is connected to the Oracle database through ODP.
Statspack has been configured for Active Data Guard on the primary database. We got a spike of buffer busy waits for about 5 minutes in Active Data Guard, and this was causing worse application SQL response times during that 5-minute window. Below is what I got from the statspack report for one hour.
Snapshot       Snap Id     Snap Time        Sessions Curs/Sess Comment
~~~~~~~~      ---------- ------------------ -------- --------- -------------------
Begin Snap:        18611 21-Feb-13 22:00:02      236       2.2
  End Snap:        18613 21-Feb-13 23:00:02      237       2.1
   Elapsed:               60.00 (mins)
[code]...
Why could there be a sudden spike in demand for UNDO data in Active Data Guard?
In my WHEN-BUTTON-PRESSED trigger, I want the form to first delete all the previous data and then populate the data from the table; that's why I used clear_block first, but this clear_block call is not working here. My code is given below.
go_block('show');
clear_block(NO_VALIDATE);
declare
  cursor c1 is
    select * from qtr_demand order by 1;
begin
[code]..........
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether writing the export to tape is possible. If so, would the data be accessible if needed later?
1) Split values from the "INST" column: suppose 23.
2) Find all values from the "NUM" column for the split value above, i.e. 23.
Eg:
For INST 23, its corresponding "NUM" values are: 1234, 1298.
3) Save these values into a table Y, with INST and NUM as the column names:

INST  NUM
23    1234,1298
1) I have a thousand records in table X, and for all of those records I need to split and save data into table Y. Hence, I need to do this task with the best possible performance.
2) After this, whenever new data comes into table X, the above 'split & save' operation should automatically be called and append the corresponding data wherever possible.
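A hedged sketch of both pieces follows; it assumes LISTAGG is available (11gR2+) and that Y.NUM is a VARCHAR2 holding the comma-separated list - both are assumptions, not taken from the real tables:

-- Initial population: one row per INST with its NUM values aggregated.
insert into y (inst, num)
select inst,
       listagg(num, ',') within group (order by num)
  from x
 group by inst;

-- Keep Y current as new rows arrive in X: append the new NUM to the row
-- for that INST, or create the row if it is the first one.
create or replace trigger trg_x_to_y
after insert on x
for each row
begin
  merge into y t
  using (select :new.inst as inst, :new.num as num from dual) s
     on (t.inst = s.inst)
   when matched then update set t.num = t.num || ',' || s.num
   when not matched then insert (inst, num) values (s.inst, s.num);
end;
/

For bulk loads into X, a single set-based refresh of Y will outperform the row-level trigger, so it may be worth disabling the trigger and re-running the aggregate insert for large batches.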
We are migrating a Pro*C application as described below.
Old Env: UNIX Old DB: Oracle 8i
New Env: Linux New DB: Oracle 11g
The new modules compile successfully in the Linux environment, but we are facing issues writing the output of the VARCHAR datatype to a file.
Find below an extract of the code.

EXEC SQL BEGIN DECLARE SECTION;
  varchar mcolmnvarchar[4];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE crs CURSOR FOR SELECT NVL(colmn,' ') FROM table1
memset(mcolmnvarchar.arr, '\0', 4); // Was added only for the Linux migration; not present in the UNIX env.
EXEC SQL FETCH c1 INTO :mcolmnvarchar;
cout << "Data at Stage one" << mcolmnvarchar.arr << endl;
mcolmnvarchar.arr[mcolmnvarchar.len] = '\0';
cout << "Data at Stage two" << mcolmnvarchar.arr << endl;
fprintf(fptr, "%-4s", mcolmnvarchar.arr);
The above code works absolutely fine in the UNIX environment with Oracle 8i, but in the Linux environment with Oracle 11g it does not work. There are no compilation or runtime errors. The "Data at Stage one" line prints the database output properly, but after the null-terminator line, the "Data at Stage two" statement prints without any value. The value is lost after the null-terminator assignment.
I am trying to transfer data from an Oracle database table into another table that resides in SQL Server 2008. A database link is already set up, and data can be selected/queried using
select * from a_table@db_link
The thing is, when trying to insert Arabic data from Oracle into the SQL Server table:
insert into a_table@db_link values( 'N''some data in Arabic'' , '11-oct-00', 'some data in English' ); commit;
It is inserted all right, but when we try to display the data it shows garbage instead of the Arabic; the English data is all right, but the date also shows as garbage?!
When trying to insert Arabic data directly through SQL Server it's fine, though. I suggested that we transfer the data through ODBC to flat files or Access and then to SQL Server, but the team rejected it since they're going to do it daily and the data is huge (we are talking more than 28,000 records).
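Before changing the transfer mechanism, it may be worth checking (a hedged suggestion) that the Oracle database and national character sets can actually hold Arabic, and that the SQL Server target column is NVARCHAR rather than VARCHAR, since a character-set mismatch anywhere in the chain produces exactly this kind of garbage:

select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

If the Oracle side is fine, the conversion done by the gateway/ODBC driver and the NLS_LANG setting of the session doing the insert are the next things to verify.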