I have a doubt about the NO_INDEX concept in Oracle. I am using an Oracle Exadata server, and it is basically a data warehouse project. I need to join some tables and get a result set for reporting purposes.
Among the tables, two are huge: the first table has more than 1,000,000,000 rows and the second more than 200,000,000 rows. When I join these two tables with a small set of other tables, it takes a long time (around 20 minutes) to retrieve the result set, even though the final result set is only around 100 rows.
But when I force the NO_INDEX hint in the same query, it gives the same result much faster (around 5 minutes), because the work is done by cell smart scan. So, can I force the NO_INDEX hint on all the tables? I forced the NO_INDEX hint only on the table that contains 200,000,000 rows, not on the others.
Query plan: the normal query uses a range scan based on the key; the NO_INDEX query does a full table scan.
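For illustration, the shape of the hinted query is roughly as follows (the table and column names here are placeholders, not my real objects):

-- force a full scan, and hence a cell smart scan, on the 200M-row table only
select /*+ no_index(f) */ d.region_name, count(*)
from   fact_200m f
join   dim_region d on d.region_id = f.region_id
group  by d.region_name;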
One of my databases, which runs on an Exadata X2-2, has been restored to a non-Exadata machine in order to test a few things. I had a subpartitioned table on the Exadata, compressed for QUERY HIGH. On the test machine (not Exadata), after uncompressing this subpartitioned table I am getting the following error message:
ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

I have executed the following commands:

alter table crm.cm_ncd modify partition P01_CM_NCD nocompress;
alter table crm.cm_ncd modify partition P02_CM_NCD nocompress;
alter table crm.cm_ncd modify partition P03_CM_NCD nocompress;
[code]...
ERROR at line 1:
ORA-12801: error signaled in parallel query server P005
ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

If all the partitions are uncompressed, why am I getting this error message?
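My suspicion (an assumption on my part) is that MODIFY PARTITION ... NOCOMPRESS only changes the attribute for future inserts, and that the existing HCC-compressed blocks have to be rewritten by moving each segment, along these lines:

-- rewrite the segments so the old HCC blocks are decompressed
alter table crm.cm_ncd move partition P01_CM_NCD nocompress;
alter table crm.cm_ncd move partition P02_CM_NCD nocompress;
alter table crm.cm_ncd move partition P03_CM_NCD nocompress;
-- for a composite-partitioned table, each subpartition would have to be
-- moved instead (MOVE SUBPARTITION), and indexes rebuilt afterwards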
I have an Oracle 11g R2 RAC setup. My database size is 22TB on ordinary servers. I have tested my database on an Exadata X2-2 and found that the HCC results were good; by my estimate, the 22TB can come down to 10TB.
My challenge is that I need to transfer the 22TB of data to an Exadata quarter rack, which has a space constraint. Is there any way other than export and import, since a plain export and import will use the entire 22TB? Is there any way to transfer the data to the Exadata in compressed form from the source server?
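One option I am considering (assuming we are licensed for the Advanced Compression Option) is Data Pump with compressed dump files, so that at least the files in transit are smaller; the schema, directory, and file names below are placeholders:

expdp system schemas=APP directory=DP_DIR dumpfile=app_%U.dmp parallel=8 compression=all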
1) Post-installation checklist, i.e. how can I verify that everything is installed correctly?
2) Upgrade the Exadata OS version.
3) Upgrade the database.
4) Test migration from an Oracle database to Exadata.
5) Right now the RAC servers are installed without a domain; I need to add a domain name to the existing RAC.
My Exadata quarter rack machine has two ASM diskgroups: DATA1 with 5TB and RECO with 3TB. I'd like to resize RECO to 1TB and DATA1 to 7TB.
I know the ALTER DISKGROUP ... RESIZE command, but my question is about resizing the RECO volume from 3TB down to 1TB: is that supported by Oracle? What are the risks/issues with this resize?
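For reference, the ASM-side command I have in mind is of this form (note the size is per ASM disk, not the diskgroup total, so the value below is purely illustrative):

alter diskgroup RECO resize all size 32G rebalance power 8;

On Exadata I assume the underlying grid disks on each storage cell would also have to be shrunk through CellCLI before DATA1 can grow into the freed space.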
How do I compress a subpartition on Exadata using the 'for query high' and PCTFREE 10 options? I used the statement below, but I only get ORA-14160: this physical attribute may not be specified for a table subpartition.
alter table table_name move subpartition subpartition_name PCTFREE 10 compress for query high;
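My assumption is that ORA-14160 comes from the PCTFREE clause, since PCTFREE is a physical attribute that cannot be set at subpartition level, so the two settings might have to be split like this:

-- the compression clause alone is accepted at subpartition level
alter table table_name move subpartition subpartition_name compress for query high;

-- PCTFREE would have to be set higher up, e.g. as a default for the partition
alter table table_name modify default attributes for partition partition_name pctfree 10;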
My management wants to know the serial numbers of all the components of our two Exadata machines: one quarter-rack V2 and one half-rack V1.
I can use dmidecode to get the appropriate information for the compute nodes and storage cells, but not for the Cisco/Voltaire switch, nor for the IB switches. I read MOS note 1299791.1, and the thought of asking a data-centre operative to pull out the label, 'which could damage the switch', worries me quite a bit.
Is it still true that we cannot obtain the serial numbers for the IB switches and the Cisco/Voltaire switch from the CLI of the switches themselves? Sad face.
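For context, on the compute nodes and storage cells I am pulling the serials with something like:

# run as root on each node
dmidecode -s system-serial-number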
I am trying to enable OLTP compression on a set of tables, and at the tablespace level for those tables.
The steps I am following are:
1. Move the indexes to their own tablespace.
2. Enable OLTP compression at the table level: alter table table_name move compress for OLTP
3. Rebuild the indexes.
4. The issue I have is what to do with tables that have LOB columns: ALTER TABLE lob_table MOVE LOB (LOB_COL) STORE AS (TABLESPACE index_tbsp); -- is this correct?
5. alter tablespace data_tablespace default compress for OLTP;
My question is whether this sequence of steps is correct. Also, for tables with LOB columns, do we need to move the LOB index to the index tablespace, given that the LOB segment and LOB index are created in the data tablespace?
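To make the sequence concrete for a single table, what I currently have in mind is the following (all object names are placeholders, and I am assuming the LOB index simply follows its LOB segment, since Oracle keeps the two together):

alter table my_table move tablespace data_tbsp compress for oltp;
alter index my_table_pk rebuild tablespace index_tbsp;

-- for a LOB column: moving the LOB segment moves its LOB index along with it
alter table lob_table move lob (lob_col) store as (tablespace data_tbsp);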
I have configured an Exadata with two switches between the application server and the Exadata, and we have problems connecting to the switches and with load balancing between the Ethernet switches and the Exadata.
We might end up asking, 'how about removing the Ethernet switches and connecting directly to the Exadata?' I don't know where the idea of a gigabit Ethernet switch between the application server and the Exadata came from; the Exadata side might well be gigabit.
But the link from the application server to the Ethernet switch is still a 100Mb network, so only that one segment is fast; the rest is still 100Mb, which might make the switch useless. And about load balancing: the Exadata is a RAC cluster and already does its own load balancing, so why would we want another load-balancing feature from an Ethernet switch?
Is it a good idea to have dual switches between the application server and the Exadata? What might be the pros and cons of this setup? The Exadata is handled by the Oracle support team, but the dual switches are looked after only by the application operations team, so if something goes wrong with a switch, nobody except the operations team can manage it.
Despite it being one of the major selling points of Exadata (especially from X3 onwards), I'm struggling to find much information on our usage of the Exadata Smart Flash Cache (I'm running RDBMS 11.2.0.2 BP7 on a V2 quarter rack).
I can verify which objects have been 'pinned' to the Flash Cache via DBA_SEGMENTS, and I can check for Flash Cache usage by querying gv$sysstat (and even v$mystat), but are there other views that I could use? It seems a bit strange for Oracle not to give the DBA much insight into the usage of this feature...
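For reference, the checks I am doing today look roughly like this:

-- segments whose CELL_FLASH_CACHE attribute pins them to the flash cache
select owner, segment_name, cell_flash_cache
from   dba_segments
where  cell_flash_cache = 'KEEP';

-- cumulative flash cache hits for the instance
select name, value
from   v$sysstat
where  name = 'cell flash cache read hits';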
Is an RMAN backupset (BACKUP TO ...) on DBFS supported? I can find that ACFS is supported, but DBFS is not mentioned. My current customer is thinking of backing up to DBFS and then copying to tape as an interim solution before getting ZFS next year.
Are there any particular DBFS settings to increase the performance of external table loading? Currently I have just mounted it with direct_io; I am looking for any other ways to improve reading from a flat file that sits on DBFS, on an Exadata X2-2 half rack.
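For reference, the mount currently looks roughly like this (the database user, service name, and mount point are placeholders):

# dbfs_client mounted with direct I/O, as we have it today
dbfs_client dbfs_user@exadb -o direct_io -o allow_other /mnt/dbfs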
We can't use the Exadata plug-in for Cloud Control, but we need some monitoring of the cell servers. Is OS Watcher the right tool, or do we need ADRCI for incidents and so on?
What do we have to install, and what information do we get?
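As a minimal starting point (an assumption on my part that this covers the basics), the cells can at least be polled from CellCLI, for example:

# open and historic cell alerts
cellcli -e list alerthistory

# current values of the cell metrics
cellcli -e list metriccurrent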
We have an Exadata V2 quarter rack with High Performance disks. We have applied EHCC's various compression methods to some of the tables' partitions.
Now we are setting up an Exadata Expansion Rack with High Capacity disks. Post-implementation, we will be moving older partitions to the new expansion rack, including the compressed partitions.
In this case, would there be any impact on the compression ratio, given that the expansion rack has high capacity disks?
And would the partition-move method be the same as for a non-Exadata database, i.e. "alter table <table_name> move partition ..."?
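What I have in mind is of this form (the table, partition, and tablespace names are placeholders, and my understanding is that the partition keeps its EHCC setting unless a new compression clause is specified):

-- move an old partition into a tablespace on the expansion rack's diskgroup
alter table sales_hist move partition P_2011_01 tablespace ts_expansion;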
I'm very happy with the MATERIALIZE hint and I use it a lot. But is it possible to set some hint indicating that I want to materialize some inline view results (the temp table transformation) as an IOT (index-organized table)?
An IOT must have a primary key. OK, could it be all the columns, in listed order? That would be convenient. I know I can use a global temporary table with an index, but that forces me to split one statement into parts.
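For clarity, the split-statement workaround I mean is roughly the following (all names and columns are illustrative):

-- an index-organized global temporary table as the staging area
create global temporary table tmp_stage (
  id1 number,
  id2 number,
  val number,
  constraint tmp_stage_pk primary key (id1, id2)
)
on commit preserve rows
organization index;

-- step 1: populate it; step 2: run the real query against it
insert into tmp_stage (id1, id2, val)
select dept_id, emp_id, salary from emp_source;

What I would prefer is a single statement with /*+ materialize */ on the inline view plus some (apparently non-existent) option to make the materialized temp table index-organized.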
I've got SQL code generated by an HP tool to perform an update of itself. This code is not modifiable, because it is executed 'in the background' by the tool's updater.
The macro operations performed by the updater (through SQL code) are:
-- create a copy of the actual table and rename it with an '_old' suffix
-- create the new table, with indexes and constraints
-- insert data into the new table from the old tables
Now, in one table only, when the data is inserted, an "ORA-00001: unique constraint (ASSET_SVIL.CFG_CFGSECTIONCFGE) violated" error appears.
The data is inserted with an INSERT/SELECT statement carrying the /*+ APPEND */ hint. We verified all the data existing in the old table (that is, the data to 'migrate'), but there are no duplicate records!
But if we remove the /*+ APPEND */ hint, the INSERT/SELECT statement works without generating any errors!
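For clarity, the statement has this general shape (the object names are my placeholders, not the real ones from the tool):

-- direct-path insert, as generated by the updater
insert /*+ append */ into cfg_cfgsection
select * from cfg_cfgsection_old;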
I have been told that I should use a multiple of 4 as the degree in the PARALLEL hint to get maximum performance, so I am wondering: is that true, and should I always use multiples of 4, or can I use any number inside the parallel hint?
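For example, I mean the degree in a hint like this (the table and alias are placeholders):

-- degree of 8 (a multiple of 4) versus, say, 6
select /*+ parallel(s, 8) */ count(*)
from   sales s;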
I have a SQL query in which I take the UNION of two SELECT statements. The tables I join in each SELECT statement have indexes defined on them.
The UNION of the two SELECT statements is in turn enclosed in an inline view, from which I fetch my final field values.
The SELECT statements inside the inline view return a huge number of rows (around 50 million).
The whole query fails with a timeout.
How can I optimize this query further?
Is there a way to pass Oracle hints so that Oracle uses the indexes?
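For illustration, what I am hoping for is index hints inside each branch of the UNION, along these lines (all table, column, and index names here are invented):

select v.acct_id, v.amount
from (
       select /*+ index(a ledger_a_ix1) */ acct_id, amount
       from   ledger_a a
       where  post_date >= date '2013-01-01'
       union
       select /*+ index(b ledger_b_ix1) */ acct_id, amount
       from   ledger_b b
       where  post_date >= date '2013-01-01'
     ) v;

Although with around 50 million rows per branch, I suspect that full scans, and UNION ALL instead of UNION where duplicates are impossible, might matter more than index access.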
I have been asked to rewrite the following UPDATE statement without using the BYPASS_UJVC hint.
l_new_CFT_ID CASHFLOW_TYPE.CFT_ID%TYPE;
if (l_Record > 0) then
  -- since at least 1 loan was found with the old type, process the actual update
  update /*+ BYPASS_UJVC */ (
    select cfa.CFA_CFT_ID
[code]......
I think I am supposed to use the MERGE statement, but I am not sure how to go about it.
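A sketch of the direction I am considering, assuming the update targets a join between a cashflow table and the loan data (every name below other than CFA_CFT_ID and l_new_CFT_ID is an invented placeholder):

merge into cashflow_amount cfa
using (
        select cfl.cfl_id
        from   cashflow_loan cfl
        where  cfl.cfl_cft_id = l_old_CFT_ID
      ) src
on (cfa.cfa_cfl_id = src.cfl_id)
when matched then
  update set cfa.CFA_CFT_ID = l_new_CFT_ID;

A MERGE avoids BYPASS_UJVC because the join moves into the USING clause, so Oracle no longer has to prove key-preservation on an updatable join view.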
I have a query with a FULL hint that behaves in a strange manner. The query fetches around 700,000 rows. Sometimes it fetches the data with the hint, and sometimes it does not fetch any data with the hint, and then I have to remove the hint to fetch the data. Below is the query: