I inherited the management of an Oracle 10.2.0.3 Standard Edition database on a 32-bit Microsoft Windows Server 2003 R2 platform, with some invalidated objects related to the SYSMAN schema.
I suppose these invalid objects derive from a bad previous upgrade (probably from 10.2.0.1 to 10.2.0.3). I looked for a solution on the net and found that this problem can be fixed by dropping and recreating the Database Control repository using emca with specific commands. [URL] On Windows systems I followed these commands on my test database, the repository was successfully created, and all the previously invalidated objects were corrected.
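For anyone searching later: the usual 10gR2 drop/recreate sequence is along these lines (a sketch; verify the exact flags for your release and platform against the linked note before running them):

emca -deconfig dbcontrol db -repos drop
emca -config dbcontrol db -repos create

emca prompts for the SID, listener port, and the SYS/SYSMAN passwords during each step.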
However, one view is still not functioning. I tried to recompile it after creating the new Database Control repository with the command alter view <viewname> compile, but it returns the following errors:
Project: sqldev.temp:/IdeConnections%23jhoray01XDB.jpr
Error(7,3): PL/SQL: Statement ignored
Error(7,19): PLS-00201: identifier 'DBMS_CRYPTO' must be declared
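PLS-00201 on DBMS_CRYPTO usually means the view's owner lacks an EXECUTE grant on the package. A hedged guess at the usual fix (run as SYS, assuming the broken view belongs to SYSMAN):

GRANT EXECUTE ON SYS.DBMS_CRYPTO TO SYSMAN;
ALTER VIEW <viewname> COMPILE;

Note that if the grant was previously delivered through a role, stored PL/SQL objects only see privileges granted directly, so a direct grant is needed.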
I created an encrypted tablespace for testing. I later dropped it but don't remember if I specified "including contents and datafiles". The tablespace was empty and there are no datafiles for it. However, the information for this dropped tablespace still shows up in v$encrypted_tablespaces. How do I get that lingering information removed?
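A quick way to confirm the entry is orphaned is to join back to v$tablespace (a diagnostic sketch; on many releases the stale entry reportedly persists until the instance restarts, since the view reflects keys cached in the instance, so that is worth verifying on your version):

SELECT e.ts#, t.name
  FROM v$encrypted_tablespaces e
  LEFT JOIN v$tablespace t ON t.ts# = e.ts#;

A NULL name marks the row belonging to the dropped tablespace.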
We have over 200 different servers running Oracle, and every 2 months our passwords expire. Instead of having the DBAs go into every server to sync the passwords, I would like some way of pushing the encrypted password in one Oracle DB to the other databases.
Two things to keep in mind.
1) Every user may not be in every DB, so if the user does not exist, the code should not try to update that user's password.
2) I have all my DBs in a tnsnames.ora file, or I can put them into an easy-to-parse file, so I can connect to every DB. A sketch of one approach follows.
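Here is a minimal sketch of one approach, assuming 10g-style hashes (visible in DBA_USERS.PASSWORD; on 11g+ they move to SYS.USER$) and a pre-created database link per target; remote_db and the user name are placeholders:

DECLARE
  v_hash   dba_users.password%TYPE;
  v_exists NUMBER;
BEGIN
  -- Read the current hash on the master database
  SELECT password INTO v_hash
    FROM dba_users
   WHERE username = 'SCOTT';

  -- Skip targets where the user does not exist
  SELECT COUNT(*) INTO v_exists
    FROM dba_users@remote_db
   WHERE username = 'SCOTT';

  IF v_exists = 1 THEN
    -- Run the ALTER USER on the remote side via the link
    DBMS_UTILITY.EXEC_DDL_STATEMENT@remote_db(
      'ALTER USER SCOTT IDENTIFIED BY VALUES ''' || v_hash || '''');
  END IF;
END;
/

Looping over the links parsed from tnsnames.ora would wrap this block once per target database.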
For creating a DB link, how do I get the encrypted password? Here is an example:
create public database link "TEST1.UNIX.190.ORG" connect to "scott" identified by values '053E6879854B7744F64396350297E1D6EF191163AE35216E64' using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.30.20.20)(PORT=1521))(CONNECT_DATA=(SID=SID1)))';
Where or how do I get that encrypted password (the quoted hash string above) to create the DB link?
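For an existing link, the stored value can be read as SYS from the data dictionary. A hedged sketch for 10g, where the encrypted link password sits in SYS.LINK$.PASSWORDX (column names vary across versions, so verify on your release):

SELECT name, userid, passwordx
  FROM sys.link$
 WHERE name = 'TEST1.UNIX.190.ORG';

The value is normally harvested from a link that already works and replayed when recreating the link on another database.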
We are using GoldenGate to replicate a database (encrypted tablespace, Oracle 11.2.0.1, Windows 2008) to a different database server (no encrypted tablespace, Oracle 11.1, Linux).
The GoldenGate report shows the following error: ERROR OGG-01771 DBOPTIONS DECRYPTPASSWORD must be used to decrypt TSE data. Use TRANLOGOPTIONS IGNORETSERECORDS if you do not need to capture any tables that are in an encrypted tablespace.
How do I use it?

GGSCI> ENCRYPT PASSWORD "shared key"

Then add an entry to the Extract parameter file to decrypt the new shared password.
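A hedged sketch of the two steps, based on the usual GoldenGate TSE setup (the shared secret and the generated string are placeholders; check the OGG documentation for your build):

GGSCI> ENCRYPT PASSWORD mysharedsecret ENCRYPTKEY DEFAULT
-- GGSCI prints an encrypted string, e.g. AACAAAAAAAAAAA (hypothetical)

Then in the Extract parameter file:

DBOPTIONS DECRYPTPASSWORD AACAAAAAAAAAAA ENCRYPTKEY DEFAULT

The full TSE capture setup also requires the shared secret to be registered on the database side, so follow the complete documented procedure rather than just these two lines.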
"All data you create in this tablespace will be encrypted using an AES256 encryption key. You cannot encrypt an existing tablespace. To encrypt data, first create an encrypted tablespace, then use alter table move, CTAS or datapump import to move your data into the encrypted space. Remember to drop the old tablespace BUT not including datafiles. Use an OS schred program to remove the old datafile. If you are on ASM you may use the including datafiles option since you can’t schred files from the OS inside an ASM instance."
But I want to know why we should NOT include the datafiles when dropping the tablespace (i.e. why not 'drop tablespace my_tbs including contents and datafiles'). So which option should we use when dropping the tablespace?
Why should we use OS capabilities to remove the datafiles?
What happens if I remove the datafiles when I drop the tablespace?
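For context, a minimal sketch of the move-then-drop flow the quoted text describes (all names are made up; the encryption clause needs the TDE wallet open and the Advanced Security option):

CREATE TABLESPACE enc_data
  DATAFILE '/u01/oradata/db/enc_data01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

ALTER TABLE app.customers MOVE TABLESPACE enc_data;
ALTER INDEX app.customers_pk REBUILD TABLESPACE enc_data;

-- Keep the datafile so its blocks, which still hold cleartext copies of the
-- rows, can be securely shredded at the OS level rather than merely deleted:
DROP TABLESPACE old_data INCLUDING CONTENTS;

The point of the OS shred is that 'including contents and datafiles' only deletes the file; the cleartext blocks may remain recoverable on disk afterwards.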
We are exporting from a 9i DB to an 11g one. During the migration, we are changing our character set from US7ASCII to AL32UTF8 so that the "extended" characters our users like to put in text fields are stored and retrieved properly.
However, we've found a problem, and I'm not sure if Oracle has a method of dealing with it. Searching this site and the Oracle docs got me nowhere.
We store account numbers and credit card info in the DB, encrypted with dbms_obfuscation_toolkit. We have an encryption key cross-reference table in which we store the key used to decrypt the data.
However, what we've found is that by importing these keys into our new character set database, the keys are no longer valid and can't be used with the DES3DECRYPT function to get the correct numbers out.
Is there a conversion utility or any tool Oracle provides to maintain the encrypted data's "decryptability"? Worst comes to worst, we will have to write a script/procedure to decrypt everything on the 9i side, import it to 11g, and then re-encrypt it.
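A hedged sketch of one workaround: VARCHAR2 data is converted during export/import between character sets, which silently alters the raw key bytes, while RAW columns are not converted. Capturing the keys as RAW on the 9i side before export should let them survive unchanged (table and column names are hypothetical):

ALTER TABLE key_xref ADD (key_raw RAW(64));
UPDATE key_xref
   SET key_raw = UTL_RAW.CAST_TO_RAW(key_string);
COMMIT;

After the import into the AL32UTF8 database, use the RAW overloads of DBMS_OBFUSCATION_TOOLKIT.DES3DECRYPT with key_raw as the key.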
I am using Oracle version 11.2.0.3.0. We are planning to move some ~40 tables/indexes to a new encrypted tablespace as part of TDE (transparent data encryption). Currently three tables are ~30GB each, one is ~800GB, and the others are <2GB in size, and the tables/indexes are spread across different tablespaces.
Should I create as many encrypted tablespaces as there were unencrypted ones before, or should I create one encrypted tablespace and move all the tables/indexes into that?
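Either way, the move itself can be scripted from the dictionary. A sketch that generates the statements (the target tablespace and filter names are placeholders):

SELECT 'ALTER TABLE ' || owner || '.' || table_name || ' MOVE TABLESPACE enc_ts;'
  FROM dba_tables
 WHERE tablespace_name IN ('OLD_TS1', 'OLD_TS2');

SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD TABLESPACE enc_ts;'
  FROM dba_indexes
 WHERE tablespace_name IN ('OLD_TS1', 'OLD_TS2');

The indexes need the explicit REBUILD because moving a table marks its indexes UNUSABLE.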
We have a primary and a standby (Physical Dataguard) site.
1. How do I check whether the database is encrypted or not, on the primary as well as the standby?
2. If the primary and standby are encrypted, is the data that gets replicated from primary to standby also in encrypted form? If not, does it make sense to encrypt that?
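For the first question, a hedged sketch of the usual checks on 11g (run on the primary; on a mounted standby only the v$ views are available):

SELECT tablespace_name, encrypted FROM dba_tablespaces;
SELECT ts#, encryptionalg FROM v$encrypted_tablespaces;
SELECT wrl_type, status FROM v$encryption_wallet;

With TDE tablespace encryption, the redo generated for encrypted tablespaces is itself encrypted, so the data arriving at the standby via redo transport is not cleartext; redo transport can additionally be wrapped in network encryption if needed.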
Our product needs to use a SecureMag encrypted MagStripe reader to get the credit card info at the POS. After adding the code to every module, most of them work well now, but one of them does not: we get the error FRM-41344. This module is called from another one (using CALL_FORM) that also has this functionality (it works well in that module).
The error is raised in this code; the global variable is assigned a value in each module. I found some info on the internet, but I still get the error after doing the following. When using Oracle Forms, you might receive this run-time error:
FRM-41344: OLE object not defined for object in current record.
which can occur for either of these reasons:
1. The OLE container has lost the definition of the Oracle Video Custom Control. To fix this problem, go into the Forms Designer and re-insert the Oracle Video Custom Control by clicking the right mouse button inside the OLE container and choosing Insert Object.
2. The Oracle Video Custom Control has not been initialized. To fix this problem, modify the form so that it can navigate to the block that contains the Oracle Video Custom Control. You can either make this block the first block on the form or add a GO_BLOCK command in the WHEN-NEW-FORM-INSTANCE trigger to navigate to that block. If necessary, you can add a GO_BLOCK command followed by SYNCHRONIZE before any commands that access the Oracle Video Control. (You can tell whether the Oracle Video Control has been initialized because the video control buttons will be visible.)
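A minimal sketch of that second fix in a WHEN-NEW-FORM-INSTANCE trigger (the block name is hypothetical; replace it with the block holding your OLE control):

BEGIN
  GO_BLOCK('OLE_READER_BLOCK');  -- navigate so the OLE control gets instantiated
  SYNCHRONIZE;                   -- force the screen update before any OLE calls
END;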
If I have a table T1 with 100 rows and a table T2 with 20 rows, then when performing a hash join, which table should be used to build the hash table, the larger one or the smaller one, and why? If that data set is too small for consideration, then please consider table T1 with 10 million rows and table T2 with 1 million rows.
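One way to see what the optimizer actually does (a sketch, assuming T1 and T2 exist with a common id column): the build input is the first child under the HASH JOIN operation in the plan, and it is normally the smaller row source, since a smaller build table is more likely to fit in memory.

EXPLAIN PLAN FOR
  SELECT *
    FROM t1 JOIN t2 ON t1.id = t2.id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Expect T2 (the smaller input) to appear as the first child of the HASH JOIN.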
I'm looking for a way to store an encrypted numeric value in one field in a table (so that it appears encrypted even to a DBA) and to display the unencrypted value in APEX forms and interactive reports for some users but not others.
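A hedged sketch of one approach using DBMS_CRYPTO: store the value in a RAW column and expose it only through a function that checks the caller. get_app_key and is_authorized are hypothetical helpers you would supply; key management is the hard part and is deliberately not shown.

CREATE OR REPLACE FUNCTION show_value (p_enc IN RAW) RETURN VARCHAR2 IS
BEGIN
  IF is_authorized(v('APP_USER')) THEN  -- hypothetical APEX user check
    RETURN UTL_RAW.CAST_TO_VARCHAR2(
      DBMS_CRYPTO.DECRYPT(
        src => p_enc,
        typ => DBMS_CRYPTO.ENCRYPT_AES256
             + DBMS_CRYPTO.CHAIN_CBC
             + DBMS_CRYPTO.PAD_PKCS5,
        key => get_app_key()));          -- hypothetical key retrieval
  END IF;
  RETURN '******';
END;
/

The report or form column then selects show_value(enc_col) instead of the raw column.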
I see one of my SQLs, which is run by a user on a 10.2.0.3 database, changing its SQL_ID after some runs even though the query has not changed a bit! However, the HASH VALUE for this query remains the same.
How can the same query have different SQL_IDs but the same HASH_VALUE?
Note: Statistics are not modified on the base tables of this query.
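It may be worth confirming exactly which columns are being compared, since SQL_ID and HASH_VALUE are both derived from the statement text, while OLD_HASH_VALUE and PLAN_HASH_VALUE are computed differently. A diagnostic sketch (the filter is a placeholder):

SELECT sql_id, hash_value, old_hash_value, plan_hash_value, sql_text
  FROM v$sql
 WHERE sql_text LIKE '%your_query_fragment%';

If the text really is byte-for-byte identical, SQL_ID and HASH_VALUE should move together.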
I am facing a problem fetching and updating records in a customer details table with around 20 million records. The table contains around 30 fields, with MOBILE_NO as the primary key, and most of the queries have mobile_no in the where clause. I am planning to hash partition the table using the mobile_no column, as there is no other column available that can be used for partitioning.
Please clarify whether creating hash partitions on such a key would increase the performance of data extraction, as I have read on the net that hash partitioning is not effective for performance tuning.
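For reference, a sketch of what the DDL would look like (names and the partition count are placeholders; a power-of-two partition count is recommended for even distribution):

CREATE TABLE customer_details (
  mobile_no  VARCHAR2(15) CONSTRAINT customer_details_pk PRIMARY KEY,
  cust_name  VARCHAR2(100)
  -- remaining ~28 columns omitted
)
PARTITION BY HASH (mobile_no) PARTITIONS 16;

With equality predicates on mobile_no, partition pruning sends each lookup to a single partition.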
I see that one of my queries from an application is spending most of its time in the HASH GROUP BY step. I'm running Oracle 11g on a quarter-rack Exadata appliance. Is there a better way to run the query or design this table? Query:
A basic select-and-group-by query I am optimising for my database course has returned results indicating that it performs better on a clustered index when returning a smaller number of rows (5% of the largest table) and on a hash clustered index when returning higher volumes (50% and 80%). I understand that it is possible to use more than one index type on a table to improve performance, but I am struggling to understand how I might establish both a hash cluster and an index cluster on the same table, and then use hints to drive the query down one access path or the other.
create tablespace mssm datafile 'c:\app\mssm01.dbf' size 100m segment space management manual;
create cluster hash_cluster_8k ( id number(2) ) size 8192 single table hash is id hashkeys 4 tablespace mssm;
-- Created a table in cluster with row size such that only one record fits one block and inserted 5 records each with a distinct key value
CREATE TABLE hash_cluster_tab_8k ( id number(2) , txt1 char(2000), txt2 char(2000), txt3 char(2000) ) CLUSTER hash_cluster_8k( id );
If I issue the same query after creating a unique index on hash_cluster_tab_8k(id), the execution plan shows hash access and a single I/O (cr = 1). Does that mean that to get a single I/O in a single table hash cluster, we have to create a unique index? Won't that create the additional overhead of maintaining an index?
What is the second I/O needed for when the unique index is absent?
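For anyone reproducing this, a sketch of the two-step comparison (autotrace consistent gets stand in for the cr figures from the trace):

SET AUTOTRACE TRACEONLY STATISTICS
SELECT * FROM hash_cluster_tab_8k WHERE id = 1;

CREATE UNIQUE INDEX hash_cluster_tab_8k_uk ON hash_cluster_tab_8k (id);

SELECT * FROM hash_cluster_tab_8k WHERE id = 1;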
I have an existing non-partitioned table with more than 100 million records and am planning to redesign it using hash partitioning. This table doesn't have any range column to do range partitioning on.
The table has 40 columns with a primary key on two columns (guest_sales_id, version_flag). guest_sales_id is unique across the entire table, but it is declared as the primary key together with another column, version_flag (version_flag has only two distinct values in the entire table).
If I do hash partitioning, do I need to declare it on the two columns that make up the primary key? Are there any issues if I use only guest_sales_id to declare the hash partitioning?
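Partitioning on guest_sales_id alone is legal; the partitioning key does not have to be the whole primary key. A hedged sketch (column list trimmed to placeholders):

CREATE TABLE guest_sales (
  guest_sales_id  NUMBER,
  version_flag    VARCHAR2(1),
  sale_amount     NUMBER,  -- stands in for the remaining ~37 columns
  CONSTRAINT guest_sales_pk PRIMARY KEY (guest_sales_id, version_flag)
)
PARTITION BY HASH (guest_sales_id) PARTITIONS 32;

Because the partitioning key guest_sales_id is one of the PK columns, the PK index could even be created LOCAL; partitioning on version_flag, by contrast, would be a bad choice with only two distinct values.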
I created 32 hash partitions on a fact table. Based on the hash partitioning technique, it should evenly distribute the data across the different partitions, but when I analyze the table and check the distribution, it is not at all even.
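A quick way to see the skew after gathering statistics (the table name is a placeholder); with a power-of-two partition count like 32, heavy skew usually points at a partitioning key with few distinct values or very skewed values:

SELECT partition_name, num_rows
  FROM user_tab_partitions
 WHERE table_name = 'FACT_TAB'
 ORDER BY partition_position;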