Exadata :: Import Data To Exadata
May 6, 2013
I have an Oracle 11g R2 RAC setup. My database is 22 TB on conventional servers. I have tested my database on Exadata X2-2 and found that HCC compression works well; by my estimate the 22 TB could come down to about 10 TB.
My challenge is that I need to transfer the 22 TB of data to an Exadata quarter rack that has a space constraint. Is there any way other than export and import, since export/import would require the entire 22 TB? Is there any way to transfer the data from the source server to Exadata in compressed form?
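For illustration, a minimal sketch of one commonly discussed approach (an assumption on my part, not something confirmed in this thread; schema, table, link and directory names are hypothetical): pre-create the target tables on Exadata with HCC enabled, then let Data Pump load into the existing compressed segments, ideally over a network link so the 22 TB dump file never has to be staged:

-- on the Exadata target: pre-create the table with HCC so direct-path loads into it are stored compressed
create table app.sales_hist (
  sale_id    number,
  sale_date  date,
  amount     number
) compress for query high;

-- then import into the existing tables, pulling straight from the source over a database link:
-- impdp system schemas=APP network_link=SOURCE_DB table_exists_action=APPEND directory=DATA_PUMP_DIR logfile=imp_app.log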
View 6 Replies
Dec 24, 2012
One of my databases, which runs on Exadata X2-2, has been restored to a non-Exadata machine in order to test a few things. On Exadata I had a subpartitioned table compressed for QUERY HIGH. On the test machine (not Exadata), after uncompressing this subpartitioned table, I am getting the following error message:
ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type
I have executed the following commands:
alter table crm.cm_ncd modify partition P01_CM_NCD nocompress;
alter table crm.cm_ncd modify partition P02_CM_NCD nocompress;
alter table crm.cm_ncd modify partition P03_CM_NCD nocompress;
[code]...
ERROR at line 1:
ORA-12801: error signaled in parallel query server P005
ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type
If all the partitions are uncompressed, why am I getting this error message?
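One hedged observation (an assumption, not something stated in the thread): MODIFY PARTITION ... NOCOMPRESS only changes the compression attribute for future loads; blocks that are already HCC-compressed stay compressed until the segments are rebuilt, and for a composite-partitioned table that means moving the individual subpartitions. A minimal sketch, with a hypothetical subpartition name:

-- generate a MOVE ... NOCOMPRESS statement for every subpartition (this rewrites the data blocks)
select 'alter table crm.cm_ncd move subpartition ' || subpartition_name || ' nocompress;'
  from dba_tab_subpartitions
 where table_owner = 'CRM'
   and table_name  = 'CM_NCD';

-- example of one generated statement (subpartition name is hypothetical)
alter table crm.cm_ncd move subpartition P01_CM_NCD_SP1 nocompress;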
View 1 Replies
Oct 17, 2013
Below is the method I have used to figure out the Exadata model.
ex: grep -i MACHINETYPES /opt/oracle.SupportTools/onecommand/databasemachine.xml
<MACHINETYPES>X3-2 Half Rack HP</MACHINETYPES>
But for one environment, even though it is a full rack, I am still getting the result as half rack.
View 3 Replies
Jul 19, 2013
I have a doubt about the NO_INDEX concept in Oracle. I am using an Oracle Exadata server; it is basically a data warehouse project. I need to join some tables and get a result set for reporting purposes.
Among the tables, two have huge row counts: the first table has more than 1,000,000,000 rows and the second more than 200,000,000 rows. When I join these two tables with a small set of other tables, it takes a long time (around 20 minutes) to retrieve the result set. The final result set is only around 100 rows.
But when I force a NO_INDEX hint in the same query, it returns the same result much faster (around 5 minutes), because the query is then served by cell smart scan. So can I force the NO_INDEX hint on all tables? I forced the NO_INDEX hint only on the table that contains 200,000,000 rows, not on the others.
Query plan: the normal query uses an index range scan on the key; the NO_INDEX query does a full table scan.
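For reference, a minimal sketch of the hint placement being described (all table, alias and column names here are hypothetical):

select /*+ no_index(f) */ d.region_name, count(*), sum(f.amount)
  from sales_fact f
  join region_dim d on d.region_id = f.region_id
 where f.sale_date >= date '2013-07-01'
 group by d.region_name;

On Exadata, the full scan that results from suppressing the index is what allows the cells to use smart scan, which is consistent with the faster timing reported above.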
View 5 Replies
Jun 14, 2013
I would like to know whether I can create one OS user per database ORACLE_HOME.
View 3 Replies
May 12, 2013
The tasks that need to be performed are:
1) Post-installation checklist, i.e. how can I verify that everything has been installed correctly?
2) Upgrade the Exadata OS version.
3) Upgrade the database.
4) Test migration from an Oracle database to Exadata.
5) Right now the RAC servers are installed without a domain; I need to add a domain name to the existing RAC.
View 7 Replies
Oct 21, 2013
Is there any command to identify whether the disks are HC (High Capacity) or HP (High Performance)? Indirectly I could find out with the following exercise:
$ grep -i MACHINETYPES /opt/oracle.SupportTools/onecommand/databasemachine.xml
<MACHINETYPES>X2-2 Half rack</MACHINETYPES>

ASMCMD> lsdg
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB   Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N      512     4096   4194304  18235392  10597264  2605056          3996104         0              N             DATA/
MOUNTED  NORMAL  N      512     4096   4194304  2087680   2085808   298240           893784          0              Y             DBFS_DG/
MOUNTED  NORMAL  N      512     4096   4194304  27240192  18871860  3891456          7490202         0              N             RECO/
= 45 TB >>
[URL] ....... >> Page 5 >> HP SAS disks
View 2 Replies
Oct 9, 2012
My Exadata quarter rack has two ASM diskgroups: DATA1 with 5 TB and RECO with 3 TB. I'd like to resize RECO to 1 TB and DATA1 to 7 TB.
I know the ALTER DISKGROUP ... RESIZE command, but my question is about resizing the RECO volume from 3 TB down to 1 TB: is that supported by Oracle? What are the risks and issues with this resize?
View 2 Replies
Mar 8, 2013
How can I tell the model of the Exadata machine I'm connected to? I mean whether it is X2, X3-2, X3-8, and so on.
Example: machinemodel=X2-2 Full rack. A file named config.dat is mentioned in this doc [URL]
but I could not find that file on the server I'm connected to (the DB host).
Is there any command or file I could use to get the machine model?
View 9 Replies
Oct 10, 2012
Can anyone point me to a MOS note for configuring Data Guard on Exadata, with the DR server also being an Exadata Database Machine?
View 4 Replies
Nov 12, 2012
How can I compress a subpartition on Exadata using the 'for query high' and PCTFREE 10 options? I used the statement below, but I only get ORA-14160: this physical attribute may not be specified for a table subpartition.
alter table table_name move subpartition subpartition_name PCTFREE 10 compress for query high;
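A hedged workaround sketch (an assumption, not confirmed in the thread): ORA-14160 is raised because TABLESPACE is the only physical attribute accepted at subpartition level, so the PCTFREE clause can be left off the MOVE and set through the partition's default attributes instead (the partition name below is hypothetical):

alter table table_name move subpartition subpartition_name compress for query high;
alter table table_name modify default attributes for partition p01 pctfree 10;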
View 9 Replies
Mar 6, 2013
If I have a partitioned table set to COMPRESS FOR QUERY/ARCHIVE HIGH, what would be the impact when a row in partition X has to move to partition Y?
View 3 Replies
Apr 22, 2013
Exadata storage indexes store information for up to eight columns of a table.
I am a bit confused here: what will happen if I have a table with 24 columns? In that case, how do storage indexes handle those 24 columns?
Will three indexes be created for this table?
View 8 Replies
Mar 20, 2013
My management wants to know the serial numbers of all the components of our two Exadata machines: one quarter-rack V2 and one half-rack V1.
I can use dmidecode to get the appropriate information for the compute nodes and storage cells, but not for the Cisco/Voltaire switch or the IB switches. I read MOS note 1299791.1, and the thought of asking a DOC operative to pull out the label, 'which could damage the switch', worries me quite a bit.
Is it still true that we are unable to obtain the serial number for the IB switches and the Cisco/Voltaire switches from the CLI of the switches themselves? Sad face.
View 8 Replies
Sep 2, 2012
I am trying to enable OLTP compression on tables, and at the tablespace level for those tables.
The steps I am following are:
1. Move indexes to their own tablespace
2. Enable OLTP compression at the table level:
alter table table_name move compress for OLTP;
3. Rebuild the indexes
4. The issue I have is what to do with tables that have LOB columns:
ALTER TABLE lob_table MOVE LOB (LOB_COL) STORE AS (TABLESPACE index_tbsp); -- Is this correct?
5. alter tablespace data_tablespace default compress for OLTP;
My question is whether this sequence of steps is correct. For tables with LOB columns, do we need to move the LOB index to the index tablespace, given that the LOB segment and LOB index are created in the data tablespace?
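For what it's worth, a minimal sketch (hypothetical table, column, index and tablespace names; an assumption rather than a confirmed procedure) that combines the move, the OLTP compression and the LOB relocation, then rebuilds the index:

-- move the table with OLTP compression and relocate the LOB segment in one operation
alter table app.lob_table move compress for oltp
  lob (lob_col) store as (tablespace index_tbsp);

-- the LOB index is always stored alongside the LOB segment, so it follows the LOB automatically;
-- indexes on the moved table become UNUSABLE and must be rebuilt
alter index app.lob_table_pk rebuild tablespace index_tbsp;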
View 2 Replies
Sep 7, 2012
I am trying to gather statistics in parallel on a schema in which the tables are OLTP compressed.
The command I use is:
BEGIN
  DBMS_APPLICATION_INFO.set_module('Gather Stats', user);
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'schemaname',
    degree           => 64,
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE,
    granularity      => 'AUTO');
END;
/
I see the blocker with wait event = PX Deq: Execution Msg
and the waiter with wait event = PX Deq: Execute Reply.
When I run an ASH report, all I see is: IPC send completion sync.
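A minimal sketch (my addition, not from the post) of checking what the query coordinator and its PX slaves are currently waiting on while the stats job runs:

-- list PX slaves grouped under their query coordinator, with current wait events
select s.inst_id, px.qcsid, s.sid, s.serial#, s.event, s.seconds_in_wait
  from gv$px_session px
  join gv$session    s
    on s.inst_id = px.inst_id
   and s.sid     = px.sid
   and s.serial# = px.serial#
 order by px.qcsid, s.inst_id, s.sid;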
View 2 Replies
Jul 31, 2012
We have a quarter rack with 3 cell (storage) servers. Actually, I don't know exactly which disks we have inside.
How can I check whether we have unallocated disk space on the storage servers? I would like to do this from the command line; can you describe the steps?
View 6 Replies
May 1, 2011
I am configuring Exadata with two Ethernet switches between the application servers and Exadata, and we have problems connecting to the switches and with load balancing between the Ethernet switches and Exadata.
We may end up asking, "Why not remove the Ethernet switches and connect the application servers directly to Exadata?" I don't know where the idea of putting a gigabit Ethernet switch between the application servers and Exadata came from; Exadata itself is gigabit.
However, the link from the application servers to the Ethernet switch is still a 100 Mb network, so only that one hop is fast while the rest of the path stays at 100 Mb, which seems pointless. As for load balancing: Exadata is a RAC cluster and already load balances connections, so why would we want the additional load-balancing feature of the Ethernet switch?
Is it a good idea to have dual switches between the application servers and Exadata, and what are the pros and cons of this setup? Exadata is handled by the Oracle support team, but the dual switches are looked after only by the application operations team, so if something goes wrong with a switch, nobody except the operations team can manage it.
View 1 Replies
Mar 19, 2013
Despite it being one of the major selling points of Exadata (especially from X3 onwards), I'm struggling to find much information on our usage of the Exadata Smart FlashCache (I'm running RDBMS 11.2.0.2 BP7 on a V2 quarter-rack).
I can verify use of the Flash Cache by checking whether an object has been 'pinned' to it via DBA_SEGMENTS, and I can check Flash Cache usage by querying gv$sysstat (and even v$mystat), but are there other views I could use? It seems a bit strange for Oracle not to give the DBA much insight into the usage of this feature...
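A minimal sketch of the gv$sysstat approach mentioned above (the statistic names are the 11.2 ones I would expect; treat them as an assumption):

-- flash cache related counters per instance
select inst_id, name, value
  from gv$sysstat
 where name in ('cell flash cache read hits',
                'physical read requests optimized',
                'physical read total IO requests')
 order by inst_id, name;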
View 2 Replies
Apr 23, 2013
Is an RMAN backupset (backup to) on DBFS supported? I can find that ACFS is supported, but DBFS is not mentioned. My current customer is thinking of backing up to DBFS and then copying to tape as an interim solution before getting ZFS next year.
View 5 Replies
Aug 10, 2012
Are there any particular DBFS settings that increase the performance of external table loading? Currently I have just mounted it with direct_io; I am looking for any other ways to improve reading from the flat file that sits on DBFS on an Exadata X2-2 half rack.
View 1 Replies
Mar 27, 2013
Is there a way to estimate the storage savings from Hybrid Columnar Compression (HCC) on an Oracle Exadata X3-2 machine?
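One option is the compression advisor in DBMS_COMPRESSION, which samples the table and reports an estimated ratio for a chosen HCC level. A minimal sketch, assuming 11.2 parameter names and hypothetical owner/table/tablespace names:

set serveroutput on
declare
  l_blkcnt_cmp    pls_integer;
  l_blkcnt_uncmp  pls_integer;
  l_row_cmp       pls_integer;
  l_row_uncmp     pls_integer;
  l_cmp_ratio     number;
  l_comptype_str  varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'USERS',                               -- scratch tablespace for the sample
    ownname        => 'APP',
    tabname        => 'SALES_FACT',
    partname       => null,
    comptype       => dbms_compression.comp_for_query_high,  -- HCC level to test
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line('Estimated ' || l_comptype_str || ' ratio: ' || round(l_cmp_ratio, 2));
end;
/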
View 5 Replies
Aug 7, 2012
We can't use the Exadata plug-in for Cloud Control, but we need some monitoring of the cell servers. Is OS Watcher the right tool, or do we need ADRCI for incidents and so on?
What do we have to install, and what information do we get?
View 5 Replies
Jun 27, 2012
We have an Exadata V2 quarter rack with High Performance disks. We have applied various EHCC compression methods to some of the tables' partitions.
Now we are setting up an Exadata Expansion Rack with High Capacity disks. Post-implementation, we will be moving older partitions, including the compressed ones, to the new expansion rack.
In this case, would there be any impact on the compression ratio, given that the expansion rack has High Capacity disks?
Also, the partition move method would be the same as for a non-Exadata database, i.e. "alter table <table_name> move partition <partition_name> tablespace <tablespace_name>".
View 2 Replies
Jul 2, 2012
How can I import data from an Excel (.xls) file into a database table?
I have data in an Excel sheet (.xls) and I need to upload it to a database table using a procedure.
The Excel sheet is not a CSV file, so SQL*Loader cannot be used.
Is there any alternative solution for this?
View 3 Replies
May 24, 2013
I want to import data from a CSV file with SQL*Loader,
but I don't want to import the invalid rows where the column 'name' is null.
How can I modify the SQL*Loader control file to do this?
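A minimal sketch of a control file using a WHEN clause (table, file and column names are hypothetical; records failing the condition are written to the discard file instead of being loaded):

LOAD DATA
INFILE 'customers.csv'
APPEND
INTO TABLE customers
WHEN (name != BLANKS)
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(id, name, city)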
View 1 Replies
Sep 17, 2012
I am trying to transfer data from one database to another through Data Pump via SQL Developer (the data volume is quite large), exporting several tables. The table export works fine, but I encounter the following error when I import the file (I have tried data only as well as data + DDL):
"Exception: ORA-39001: argument value invalid dbms_datapump.get_status(64...=
ORA-39001: argument value invalid
ORA-39000: ....
ORA-31619: ...
The file is in the right place, the Data Pump folder of the new database. The user is the same on both databases, and the database versions are similar.
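A small sketch worth running on the target (an assumption on the directory name; the import may use a different directory object): confirm that the directory object really points at the folder holding the dump file, and that the importing user has privileges on it:

-- server-side path behind the directory object used by impdp
select directory_name, directory_path
  from dba_directories
 where directory_name = 'DATA_PUMP_DIR';

-- read/write grants on that directory
select grantee, privilege
  from dba_tab_privs
 where table_name = 'DATA_PUMP_DIR';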
View 4 Replies
Feb 15, 2013
When I import each succeeding dump, I drop the existing schema ("SQL> drop user username cascade;") and import the dump with "impdp system ....". I would now like to import a dump into an existing instance, but as a data-only import that leaves the current packages and other metadata on that instance untouched and unchanged.
1. Do I need to drop the user before the import, given the requirements above?
2. If I need to drop the user, what should the script be?
3. For the import itself, what parameters should I use? (See the sketch after this list.)
4. What do I need to consider before doing the import?
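A hedged sketch of a data-only import (schema, dump file and directory names are hypothetical; this is an assumption, not advice taken from the thread):

impdp system directory=DATA_PUMP_DIR dumpfile=app_schema.dmp schemas=APP content=DATA_ONLY table_exists_action=TRUNCATE logfile=app_imp.log

With CONTENT=DATA_ONLY only table rows are loaded, so packages and other metadata already in the instance are left untouched; TABLE_EXISTS_ACTION then controls what happens to rows already present (APPEND keeps them, TRUNCATE replaces them), which would suggest not dropping the user beforehand.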
View 12 Replies
Feb 11, 2013
I received a dump (.dmp) file, and I want to import only the data from that file.
How can we achieve that in Oracle 11.2.0.3?
View 5 Replies
Sep 11, 2012
Export and import of data in Oracle Forms... I have created two buttons, one for export, whose trigger is like this:
declare
  alrt        number;
  v_directory varchar2(200) := 'c:\backup'; -- note: the C: drive here is not the drive Windows is installed on
  path        varchar2(100) := 'back_up'
                               || to_char(sysdate, 'dd_mm_yyyy-hh24_mi_ss');
  v_exp       varchar2(200) := 'exp hamada/hamada2013@orcl file='
                               || v_directory
                               || '\'
                               || path
                               || '.dmp';
[code]....
This code works, but it exports not only the data but also the creation of the table. For example, I do the export and everything is fine so far; I find the .dmp file in the backup folder. But when I delete all the data from my app and try to import this .dmp, it shows me an error saying that the table PHONE is already created. How can I export just the data of PHONE without the table creation, or how can I import just the data from this .dmp?
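A hedged sketch of the import side (the dump file name is hypothetical; IGNORE=Y makes the classic imp utility skip the "table already exists" error and load the rows into the existing table):

imp hamada/hamada2013@orcl file=c:\backup\back_up11_09_2012-10_15_00.dmp ignore=y full=y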
View 3 Replies