Performance Tuning :: Moving Tables To Encrypted Tablespace?
Aug 25, 2013
I am using Oracle version 11.2.0.3.0. We are planning to move ~40 tables/indexes to a new encrypted tablespace as part of TDE (Transparent Data Encryption). Currently, three of the tables are around 30GB in size, one is around 800GB, and the others are under 2GB. The tables and indexes are spread across different tablespaces.
Should I create as many encrypted tablespaces as there were unencrypted ones before, or should I create one encrypted tablespace and move all the tables/indexes into it?
View 2 Replies
Dec 19, 2012
"All data you create in this tablespace will be encrypted using an AES256 encryption key. You cannot encrypt an existing tablespace. To encrypt data, first create an encrypted tablespace, then use alter table move, CTAS or datapump import to move your data into the encrypted space. Remember to drop the old tablespace BUT not including datafiles. Use an OS schred program to remove the old datafile. If you are on ASM you may use the including datafiles option since you can’t schred files from the OS inside an ASM instance."
But I want to know why we should NOT drop the datafiles when dropping the tablespace (i.e. why not 'drop tablespace my_tbs including contents and datafiles'). So which option should we use when dropping the tablespace?
Why should we use OS capabilities to remove the datafiles?
What happens if I remove the datafiles when I drop the tablespace?
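For reference, a minimal sketch of the sequence the quote describes (tablespace, file and object names are hypothetical; the wallet must already be open):

-- Create the encrypted tablespace.
CREATE TABLESPACE enc_data
  DATAFILE '/u01/oradata/ORCL/enc_data01.dbf' SIZE 1G
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

-- Move a table and rebuild its index inside the encrypted tablespace.
ALTER TABLE app.my_table MOVE TABLESPACE enc_data;
ALTER INDEX app.my_table_pk REBUILD TABLESPACE enc_data;

-- Drop the old tablespace but leave its datafiles on disk for OS-level shredding.
DROP TABLESPACE old_data INCLUDING CONTENTS;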
View 13 Replies
View Related
Mar 29, 2013
I created an encrypted tablespace for testing. I later dropped it but don't remember if I specified "including contents and datafiles". The tablespace was empty and there are no datafiles for it. However, the information for this dropped tablespace still shows up in v$encrypted_tablespaces. How do I get that lingering information removed?
View 3 Replies
View Related
Dec 22, 2010
The temp tablespace always shows 100% full; even after a database bounce, temp is not cleared and it again shows 100% full. Is there a way to tune this issue?
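For reference, a full-looking temp tablespace is often just allocated-but-reusable space; a sketch like the following (standard v$ views) separates allocation from actual use:

-- Allocated vs. free temp space per tablespace.
SELECT tablespace_name,
       SUM(bytes_used)/1024/1024 AS used_mb,
       SUM(bytes_free)/1024/1024 AS free_mb
FROM   v$temp_space_header
GROUP  BY tablespace_name;

-- Sessions currently holding temp segments.
SELECT s.sid, s.username, u.tablespace, u.blocks
FROM   v$sort_usage u JOIN v$session s ON s.saddr = u.session_addr;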
View 1 Replies
View Related
Apr 26, 2012
If my tablespace goes beyond 80% full, I should get an email from the Unix crontab.
1) Warning for tablespace 80% full.
2) Critical for tablespace 90% full.
I need the Oracle script that checks tablespace usage (e.g. a tablespace at 87%) and the shell script around it.
Do I need 2 scripts? Is there anything ready-made available (Oracle 10g)?
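For reference, one SQL report can carry both thresholds; a cron shell wrapper that runs it through sqlplus and pipes any WARNING/CRITICAL lines to mailx would complete the setup (view names are standard, thresholds are the ones above):

SELECT d.tablespace_name,
       ROUND(100 * (d.total_mb - NVL(f.free_mb, 0)) / d.total_mb, 1) AS pct_used,
       CASE
         WHEN 100 * (d.total_mb - NVL(f.free_mb, 0)) / d.total_mb >= 90 THEN 'CRITICAL'
         WHEN 100 * (d.total_mb - NVL(f.free_mb, 0)) / d.total_mb >= 80 THEN 'WARNING'
       END AS alert_level
FROM  (SELECT tablespace_name, SUM(bytes)/1048576 AS total_mb
       FROM dba_data_files GROUP BY tablespace_name) d,
      (SELECT tablespace_name, SUM(bytes)/1048576 AS free_mb
       FROM dba_free_space GROUP BY tablespace_name) f
WHERE  d.tablespace_name = f.tablespace_name(+);

One script is enough: the shell wrapper decides, from the alert_level column, whether to send the warning or the critical mail.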
View 4 Replies
View Related
Jun 29, 2011
Using GoldenGate to replicate a database (encrypted tablespace, Oracle 11.2.0.1, Windows 2008) to a different database server (no encrypted tablespace, Oracle 11.1, Linux).
GoldenGate reports the following error:
ERROR OGG-01771 DBOPTIONS DECRYPTPASSWORD must be used to decrypt TSE data. Use TRANLOGOPTION IGNORETSERECORDS if you do not need to capture any tables that are in an encrypted tablespace.
How to use it:
GGSCI> ENCRYPT PASSWORD "shared key"
Add an entry to the Extract parameter file to decrypt the new shared password:
DBOPTIONS DECRYPTPASSWORD "SHARED KEY"
View 2 Replies
View Related
Nov 17, 2011
I am getting temp tablespace error "ORA-01652: unable to extend temp segment by 128 in tablespace TEMP" for the following code.
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */ vd1.vendor_record_seq_no,
tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1, vendor_data vd2, vendor_data vd3,
(SELECT rownumber, MAX (DECODE (control_column_seq_no, 91150, original_value, NULL)) AS value1,
[code]...
Right now the tables used have the following numbers of records:
SELECT COUNT(*) FROM vendor_data --292890442
SELECT COUNT(*) FROM temp_vendor_data --0
SELECT COUNT(*) FROM temp_vendor_record --0
This query is part of an application, but it is consuming too much temporary tablespace (68 GB allocated). I found this out by using the query below:
select * from v$session a, v$sql b
where a.sql_id=b.sql_id
and status = 'ACTIVE'
I am not sure why this problem is occurring.
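For reference, a sketch along these lines (v$sort_usage is a standard view; I am assuming its sql_id column is present in this release) ties the temp consumption back to specific sessions and statements:

SELECT s.sid, s.username, u.sql_id, u.tablespace, u.blocks
FROM   v$sort_usage u
       JOIN v$session s ON s.saddr = u.session_addr
ORDER  BY u.blocks DESC;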
View 5 Replies
View Related
Nov 7, 2011
I am receiving this error in production databases: "There are 2 probable extent failures for tablespace".
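For reference, my reading (an assumption, not a confirmed diagnosis) is that this alert means a segment's next extent may not fit in the remaining contiguous free space; a quick check compares the largest free chunk per tablespace against the extent sizes in use:

SELECT tablespace_name, MAX(bytes) AS largest_free_extent_bytes
FROM   dba_free_space
GROUP  BY tablespace_name;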
View 14 Replies
View Related
Mar 7, 2012
We have a situation where both undo tablespaces were almost filled, i.e. UNDOTBS1 99% and UNDOTBS2 100% full, so I added datafiles to them. Then I found a lot of blocking sessions and was killing them through EM. I then stopped my front-end listener and brought the service down. Now I don't have any blocking sessions, but EM shows a big WAIT. The alert log shows nothing serious; it was showing a deadlock, but that is over as well.
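For reference, a minimal query (v$session.blocking_session exists in 10g and later) to list the sessions that are currently blocked and who blocks them:

SELECT sid, serial#, username, event, blocking_session
FROM   v$session
WHERE  blocking_session IS NOT NULL;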
View 8 Replies
View Related
Nov 25, 2011
All the analysis done so far proves that our system is clearly I/O bound, and the 'db file sequential read' wait is the biggest culprit.
We have even identified the index being hit by those sequential reads. I am thinking of creating a new tablespace with a 32K block size (currently all tablespaces are 8K) and migrating this index to the new tablespace. That way, Oracle will have to do fewer reads to get the required data.
But is there anything wrong with having just one tablespace with a different block size? Or is there anything I have to be watchful about while doing it?
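For reference, a non-default block size needs its own buffer cache before the tablespace can be created; a minimal sketch (names and sizes are hypothetical):

ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE=BOTH;

CREATE TABLESPACE idx_32k
  DATAFILE '/u01/oradata/ORCL/idx_32k01.dbf' SIZE 2G
  BLOCKSIZE 32K;

ALTER INDEX app.hot_index REBUILD TABLESPACE idx_32k;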
View 14 Replies
View Related
Jan 16, 2012
I have to optimize a query with the following characteristics:
- Takes 3 hours to process
- Performs the inner join with 30 tables
- Produces an output of 280 million records with 450 fields
First of all, it is not feasible to run 30 updates (one for each table) against 280 million records.
The best solution I have found so far is to create 3 intermediate tables, each of which joins with 1/3 of the 30 tables, and at the end join the main table with these three intermediate tables (as sketched below).
I know that you will ask (or maybe not) for the query and samples, but it is impossible to create 30 examples.
How do I optimize this type of query, which joins many tables and produces a large output with (too) many columns?
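For reference, a minimal sketch of the staged-join idea under hypothetical names (only two lookup tables per stage shown; stage2 and stage3 would cover the remaining lookups the same way):

CREATE TABLE stage1 NOLOGGING AS
SELECT m.pk_id, t1.attr1, t2.attr2
FROM   main_tab m
       JOIN tab1 t1 ON t1.pk_id = m.pk_id
       JOIN tab2 t2 ON t2.pk_id = m.pk_id;

-- Final pass: one join of the main table against the three staged tables.
CREATE TABLE final_out NOLOGGING AS
SELECT m.pk_id, s1.attr1, s1.attr2, s2.attr3, s3.attr4
FROM   main_tab m
       JOIN stage1 s1 ON s1.pk_id = m.pk_id
       JOIN stage2 s2 ON s2.pk_id = m.pk_id
       JOIN stage3 s3 ON s3.pk_id = m.pk_id;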
View 15 Replies
View Related
Jan 13, 2012
We have a procedure which truncates some of the tables. Most of the time it finishes in a short span of time, but for the last few days it has been taking much longer.
Where should I start the investigation?
View 4 Replies
View Related
Jan 12, 2011
So our situation is pretty simple. We have 3 tables.
A, B and C
the model is A->>B->>C
Currently A, B and C are range partitioned on a created_date key; however, it is typical that only C is ever qualified with created_date. There are foreign keys from B -> A and from C -> B. We have many queries where the data is identified by state, which is indexed, currently non-partitioned, on columns in A. There are also indexes on the foreign keys that get from C -> B -> A. Again, these are non-partitioned indexes at this time.
It is typical that we qualify A on either account or user or both. There are (non-partitioned) indexes on these. We have a problem now because many of the queries use leading wildcards, i.e. account LIKE '%ACCOUNT' etc. This often results in large full table scans. Our solution has been to remove the leading wildcard.
We are wondering how we can benefit from partitioning and/or sub-partitioning table A, since it is partitioned on created_date but rarely qualified by that. We are also wondering where and how we can benefit from either global partitioned indexes or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
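For reference, a minimal sketch of the two index flavours under hypothetical names: a local index inherits the table's range partitioning, while a global index can be partitioned on its own key:

-- Local: one index partition per table partition (equipartitioned).
CREATE INDEX c_fk_b_loc ON c_tab (b_id) LOCAL;

-- Global: partitioned on the query key instead of created_date.
CREATE INDEX a_account_gidx ON a_tab (account)
  GLOBAL PARTITION BY HASH (account) PARTITIONS 8;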
View 3 Replies
View Related
Sep 27, 2013
I have tried a lot of alternative solutions, like rearranging the order of tables in the join and moving the where conditions around, but with no success... It is a bottleneck, and I cannot add indexes on these tables in production... I want to change the approach in the subquery.
SELECT
g.COLUMN1,
g.COLUMN2,
e.COLUMN3,
g.COLUMN4,
MIN(e.dat1) KEEP ( DENSE_RANK FIRST ORDER BY date2 Desc) * -1,
min(to_char(date3,'dd-mm-yyyy'))
[code]....
View 5 Replies
View Related
Mar 11, 2013
What are ways of improving the performance of a table that holds millions of records in Oracle? Currently we have partitioning and indexing, but they don't seem to work.
View 2 Replies
View Related
Jul 30, 2010
SELECT department_id
FROM (SELECT department_id
FROM employees
UNION
SELECT department_id
FROM employees_old )
WHERE department_id=100;
[code]....
Indexes have been created on department_id in both tables. The only differences I observed between the two were 1 recursive call for the first SQL and one additional view in the plan. There is a little difference in bytes sent over the network.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
[code]....
Is there any performance difference you can find when you compare the two SQLs above?
View 14 Replies
View Related
Nov 13, 2012
The scale of the tests that generate the following scenario is not huge right now, only 50 simulated users (or you can think of them as independently running threads if you like). But here is the crunch: the queries generated (from a generic transaction layer) all run against a table that has 600 columns! We can't really control this right now, but it is causing massive amounts of I/O (5GB per request), making requests queue for disk availability (which is set up RAID 0/1); it is noticeable for as few as 3 threads.
I have rendered the SQL on one occasion to execute in 13 seconds for a single user, but this appears short-lived, as when stats were freshly gathered it went back up to the normal 90-120 seconds. I've added the original query to the file; however, the findings here, along with our DBA (whom I trust implicitly), suggest that no amount of editing the query will improve the response times, increasing the PGA/SGA (currently 4/6GB respectively) will only delay the queuing for a bit, and compression won't work either. In short, it looks as though we've already hit hardware restrictions for this particular scenario.
As I can't really explain why my rendered query no longer takes 13 seconds, it is niggling me that we might be missing a trick. So I was hoping for some guidance on possible ways of optimising these types of queries against such wide tables; in other words, possibilities that we haven't considered...
Attached is the query and plan.
View 9 Replies
View Related
Aug 5, 2010
I have to create hash partitions on fact tables. Can we use a temporary tablespace or a permanent tablespace?
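For reference, a minimal hash-partitioned fact table sketch (names are hypothetical); partition segments must live in permanent tablespaces, since temporary tablespaces only hold sort/work segments:

CREATE TABLE sales_fact (
  sale_id  NUMBER,
  cust_id  NUMBER,
  amount   NUMBER
)
PARTITION BY HASH (cust_id)
PARTITIONS 8
STORE IN (users);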
View 10 Replies
View Related
Dec 9, 2010
I have a view over base tables holding historical data for the previous 60 months (one table per month), combined with UNION ALL operators. Will creating indexes on those base tables improve performance for retrieving data, or will creating a primary key with DISABLE NOVALIDATE?
The view has around 8 million rows and is used as a fact table with 4 dimension tables. A DTS package on the MSSQL side refreshes an OLAP cube by retrieving data from these tables in Oracle.
View 1 Replies
View Related
Jan 25, 2013
We have a huge table in production with a LONG column. We are trying to change its datatype to CLOB. The table has 120 million records and is 270 GB in size.
We tried using the Oracle expdp/impdp option to test the conversion in our perf environment. With 32 parallel processes the export completed in 1.5 hrs; however, the import took 13 hrs.
I also tried the TO_LOB option using inserts; it ran for 20 hrs and I killed the process. Are there any ways to improve the performance of LONG-to-CLOB conversion on huge tables?
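For reference, two hedged sketches (table and column names are hypothetical). Oracle can convert a LONG column in place, though it still rewrites the whole table; alternatively, a direct-path copy via CTAS with TO_LOB avoids conventional-path inserts:

-- In-place conversion; allow time and space for the full table rewrite.
ALTER TABLE big_tab MODIFY (long_col CLOB);

-- Or a direct copy into a new table, then swap names and rebuild indexes.
CREATE TABLE big_tab_new NOLOGGING AS
SELECT pk_id, TO_LOB(long_col) AS long_col
FROM   big_tab;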
View 6 Replies
View Related
Feb 4, 2005
16:28:32 SQL> create bitmap index bp_idx_ag_id on transactions(type);
create bitmap index bp_idx_ag_id on transactions(type)
*
ERROR at line 1:
ORA-25122: Only LOCAL bitmap indexes are permitted on partitioned tables
How do I create a bitmap index on partitioned tables?
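For reference, the error message itself points at the fix: on a partitioned table a bitmap index must be LOCAL (equipartitioned with the table):

create bitmap index bp_idx_ag_id on transactions(type) LOCAL;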
View 3 Replies
View Related
Feb 4, 2011
How do I find the tables in the database against which the most DML is being fired?
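For reference, one hedged approach uses the DML monitoring counters that feed statistics gathering (flush the in-memory counters first so the view is current):

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_owner, table_name, inserts, updates, deletes
FROM   dba_tab_modifications
ORDER  BY inserts + updates + deletes DESC;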
View 5 Replies
View Related
Feb 15, 2011
I am trying to update a million rows in one table with the values from another table.
Table being updated CI_ADJ_CHAR column CHAR_VAL_FK1
Table from which values will be used CK_ADJ columns (cx_id, ci_id)
The CI_ADJ_CHAR.CHAR_VAL_FK1 values match CK_ADJ.CX_ID and should be updated with the value CK_ADJ.CI_ID.
The CK_ADJ table has 1.3 million rows, and both columns have indexes defined. The table definition is mentioned below.
The CI_ADJ_CHAR table has 14 million rows; the update will touch 1 million rows. It has an index on the ADJ_ID column but not on the CHAR_VAL_FK1 column.
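For reference, a minimal correlated-update sketch built only from the columns named above (untested; note that a MERGE cannot update CHAR_VAL_FK1 directly here, since a column in the ON clause cannot also be updated):

UPDATE ci_adj_char c
SET    c.char_val_fk1 = (SELECT k.ci_id
                         FROM   ck_adj k
                         WHERE  k.cx_id = c.char_val_fk1)
WHERE  EXISTS (SELECT 1
               FROM   ck_adj k
               WHERE  k.cx_id = c.char_val_fk1);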
View 1 Replies
View Related
Feb 4, 2013
I need to prepare a script to move all objects from one tablespace to another. Should I move all the objects individually using the ALTER TABLE command? I got all the object information using the DBA_SEGMENTS view.
I have a large number of tables and indexes in those tablespaces.
I cannot use exp for the tablespace backup.
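For reference, a common hedged approach is to generate the DDL from DBA_SEGMENTS rather than write it by hand (review the output before running it; LOB and partitioned segments need their own MOVE clauses, and moved tables leave their indexes UNUSABLE until rebuilt):

SELECT 'ALTER TABLE ' || owner || '.' || segment_name ||
       ' MOVE TABLESPACE new_tbs;' AS ddl
FROM   dba_segments
WHERE  tablespace_name = 'OLD_TBS' AND segment_type = 'TABLE'
UNION ALL
SELECT 'ALTER INDEX ' || owner || '.' || segment_name ||
       ' REBUILD TABLESPACE new_tbs;'
FROM   dba_segments
WHERE  tablespace_name = 'OLD_TBS' AND segment_type = 'INDEX';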
View 4 Replies
View Related
Oct 30, 2012
We need to move schema from one tablespace to another tablespace.
we are going to use following method:
alter table <tab> move tablespace <new_tbsp>;
My question is: do we need to disable primary keys and foreign keys before the move and re-enable them at the end?
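For reference (hypothetical names): constraints can stay enabled for ALTER TABLE ... MOVE, but every index on a moved table, including the primary key index, goes UNUSABLE and must be rebuilt afterwards:

ALTER TABLE my_tab MOVE TABLESPACE new_tbsp;
ALTER INDEX my_tab_pk REBUILD TABLESPACE new_tbsp;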
View 4 Replies
View Related
Apr 5, 2012
Create small function-based indexes for special cases in very large tables.
When a column holds one value in 99% of the records and another value that has to be searched for, it is possible to create an index that maps the common value to NULL (NULL keys are not stored in a B-tree index). The index will be small and the rebuild fast.
Example
create index vh_tst_decode_ind_if1 on vh_tst_decode_ind
(decode(S,'I','I',null),style)
It is possible to make the index more selective when the key is heavily updated and there are enough records to create more levels in the B-tree:
create index vh_tst_decode_ind_if3 on vh_tst_decode_ind
(decode(S,'I','I',null),
decode(S,'I',style,null)
)
The records can then be accessed like this:
SQL> select --+ index(vh_tst_decode_ind_if3)
2 style ,count(*)
3 from vh_tst_decode_ind
4 where
5 decode(S,'I','I',null)='I'
6 group by style
7 ;
[code]....
View 2 Replies
View Related
Jul 25, 2012
One of our customers has a problem with the following SQL statement:
SELECT c.table_name, c.column_name
FROM user_tab_columns c, user_tables t
WHERE c.table_name = t.table_name
AND c.data_type IN ('CLOB', 'BLOB');
During execution it consumes the entire TEMP tablespace (8GB).
I gathered dictionary stats (dbms_stats.gather_dictionary_stats(estimate_percent=>null)) but that did not resolve the problem. The above SQL statement works fine with the RULE hint, but I want to know the reason for the problem with the temporary tablespace.
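For reference, one hedged thing to check (a suggestion, not a confirmed fix for this case): dictionary views also sit on X$ fixed tables, whose statistics are gathered separately from dictionary stats:

EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;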
View 10 Replies
View Related
Mar 23, 2013
I have a problem with moving an old DB to a new one (the same DB version, 10.2.0, on Windows 2003; the first is 32-bit, the second 64-bit). I want to move the DB from 32-bit to 64-bit. The problem is that all objects in the old DB were created in the SYSTEM schema by SYS. I can't export those objects (with data) because neither impdp nor imp touches these objects (tables with indexes), so I can't use the export/import procedure. I'm looking for another method to transfer the data; which will be the best and fastest? Maybe copying files at the OS level? I suppose there will be problems with configuration files, since the database has other tablespaces.
View 13 Replies
View Related
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is made up of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database, i.e. the physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third-party as well as Oracle-supplied) for these types of tuning scenarios?
View 1 Replies
View Related
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in PRD_RERATE_CHG_QUE, and I am running the following MERGE query, which runs for 13 hrs to update records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
View 39 Replies
View Related