SQL Script Validation For Use In Dataguard Environment?
Feb 21, 2013
We have a 10g environment with DataGuard running for reporting purposes. Our application team is getting ready to upgrade their application, and the upgrade requires running several SQL scripts to update the primary database. A couple of them are over 1 MB in size. My concern is that these scripts will break DataGuard. I am not a DBA by any stretch, but I seem to remember that DataGuard has certain limitations.
I have reached back to the team and asked them to have the vendor confirm these scripts are validated for a DataGuard environment, and I have also reached out to the system DBAs for review. I am wondering if there is a utility available that can check the SQL to confirm whether there are any DDL or DML statements that will not work in DataGuard. We are also going to run these scripts in a non-production, DataGuard-enabled environment first, but I want some level of confidence that this will be a success.
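If the reporting copy is a logical standby, one quick first check (a sketch; APPSCHEMA is a placeholder for the application schema) is to ask the data dictionary which tables SQL Apply cannot maintain, since scripts that touch those objects are the usual breakage point:

    -- run on the primary: objects a logical standby cannot replicate
    SELECT DISTINCT owner, table_name
      FROM dba_logstdby_unsupported
     WHERE owner = 'APPSCHEMA'
     ORDER BY owner, table_name;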
We have a non-production Oracle 10g cluster running on Linux, with DataGuard (logical standby). From time to time we need to refresh the schema on the primary, but doing so has always caused problems with the logical standby. Our DBAs can never get it to complete successfully. They have tried a bunch of different methods (even some provided by Oracle), but it does not work. We have a bunch of skip statements configured on the logical standby.
Every time we need to refresh the schema, we have to rebuild the entire database (primary and logical standby) from a production RMAN backup. As you can imagine, this is a very time-consuming ordeal. There has got to be a way this process can be completed in a timely manner.
I was thinking of the following:
1) shut down DataGuard log shipping
2) lock the user, kill sessions, drop the user from the primary
3) lock the user, kill sessions, drop the user from the logical standby
4) run impdp against the production export file on the primary
5) run impdp against the production export file on the logical standby
6) re-enable DataGuard log shipping
7) confirm logs are being applied and the databases are in sync
We are only replicating the one schema to the logical standby. I am not clear on how the redo logs would be applied to the logical standby. There are hundreds of them at 100 MB each, so I would think that if we do the imports independently, I could somehow sync the primary and logical standby after they complete.
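For what it's worth, a documented alternative to rebuilding everything is to re-instantiate just the refreshed tables on the logical standby over a database link; a rough sketch, in which APPSCHEMA, BIG_TABLE, and the PRIMARY_DB link are hypothetical names:

    -- on the logical standby
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;

    -- re-copy one table from the primary over a database link
    EXEC DBMS_LOGSTDBY.INSTANTIATE_TABLE('APPSCHEMA', 'BIG_TABLE', 'PRIMARY_DB');

    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Repeated per affected table, this avoids touching the rest of the database, and SQL Apply resynchronizes from the redo stream afterwards.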
I'm trying to move my backup sets from a Windows database environment to an OEL 5.7 environment on another server.
I've found a manual [URL] that I am following. I took the backup sets from last night's backup using RMAN, and the current parameter file (initSID.ora) from the running live database. Now I need to configure the control files in the pfile accordingly.
1. Can I take the current control files from the running system to restore and recover last night's backup sets to the state the database was in at backup time?
2. How can I find out whether the control files are backed up and known to RMAN? "list backup completed after '2012-JUN-19';" gives me archived redo logs and datafiles, but I don't see the control files (or don't recognize them).
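Assuming the control files were backed up at all, RMAN can list them directly, and autobackup guarantees a control file backup with every backup from then on:

    RMAN> LIST BACKUP OF CONTROLFILE;
    RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;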
/home/oracle/bin/check:

    if [ $# -lt 1 ]; then
        echo "usage: check <oracle_sid>"
        echo
        exit
    fi
[Code]...
This is the output it gives on the primary:
INSTANCE_NUMBER  : 1
INSTANCE_NAME    : prod
HOST_NAME        : xxxx
VERSION          : 11.2.0.3.0
STARTUP_TIME     : 29-NOV-12
STATUS           : OPEN
PARALLEL         : NO
THREAD#          : 1
ARCHIVER         : STARTED
LOG_SWITCH_WAIT  :
LOGINS           : ALLOWED
SHUTDOWN_PENDING : NO
DATABASE_STATUS  : ACTIVE
INSTANCE_ROLE    : PRIMARY_INSTANCE
ACTIVE_STATE     : NORMAL
BLOCKED          : NO
[Code]....
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options
This is the output it gives on the standby:
INSTANCE_NUMBER  : 1
INSTANCE_NAME    : prod_stby
HOST_NAME        : xxxx
VERSION          : 11.2.0.3.0
STARTUP_TIME     : 29-NOV-12
STATUS           : MOUNTED
PARALLEL         : NO
THREAD#          : 1
ARCHIVER         : STARTED
LOGINS           : ALLOWED
SHUTDOWN_PENDING : NO
DATABASE_STATUS  : ACTIVE
INSTANCE_ROLE    : PRIMARY_INSTANCE
ACTIVE_STATE     : NORMAL
BLOCKED          : NO
[Code]....
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options
I've got a physical standby database and I'm now licensed for Active DataGuard. I would like to automate the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION command so that it executes right after the STARTUP command; whenever the database is started by our DBSTART script, it should go READ ONLY WITH APPLY immediately, without manual intervention. This is probably simple, but I figured I'd ask before starting to pore through the documentation!
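A minimal sketch, assuming an 11g physical standby with standby redo logs configured: a small SQL script (adg_start.sql is a hypothetical name) that the DBSTART wrapper could run via "sqlplus / as sysdba @adg_start.sql":

    -- adg_start.sql: open read-only, then start real-time apply
    startup
    -- with a standby control file, STARTUP should leave the database open READ ONLY
    alter database recover managed standby database
      using current logfile      -- real-time apply; needs standby redo logs
      disconnect from session;   -- return control so the script can exit
    exit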
I know the uses of the ORAPWD utility and the REMOTE_LOGIN_PASSWORDFILE initialization parameter. I wish to know the primary purpose of using an ORAPWD password file in the following situations.
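For reference, the basic invocation looks like this (the file name and entry count are just examples); in a DataGuard configuration, the password file is what lets the primary and standby authenticate each other's redo-transport sessions as SYS:

    orapwd file=$ORACLE_HOME/dbs/orapwSID password=secret entries=5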
I have a DataGuard setup, a primary and a standby. My instance is testdb. I have set all the necessary DataGuard parameters, and everything seemed to work well initially, with logs being moved from primary to standby and applied there. But all of a sudden I see that the primary pushes logs to the standby, and at the standby end these logs are available but not applied.
When I try to execute ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL; it hangs for a long time and never returns to the SQL prompt.
I basically want to stop and restart the MRP process to apply the logs, but I cannot cancel managed recovery mode. Is it advisable to kill the MRP process at the OS level using "kill -9"?
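Before resorting to kill -9, it is worth checking what MRP is actually doing; the standard view for this is:

    SELECT process, status, thread#, sequence#
      FROM v$managed_standby;

If MRP0 shows a status such as WAIT_FOR_GAP or APPLYING_LOG, that points at why the CANCEL appears to hang.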
I am trying to auto-start my standby database after a machine reboot. I used an ON STARTUP trigger and srvctl, but it still doesn't work. Database 11g; operating system RedHat 5 or HP-UX.
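An ON STARTUP database trigger generally will not fire on a standby that only mounts, so the usual route is to let Oracle Restart / Clusterware start it; a sketch, assuming 11.2 Grid Infrastructure and hypothetical names:

    srvctl add database -d stby -o /u01/app/oracle/product/11.2.0/dbhome_1 \
        -r PHYSICAL_STANDBY -s mount
    srvctl enable database -d stby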
I have two Oracle 10.2.0.4 databases on two Sun M5000s (Solaris 5.10). One is the primary (EXP1), the other is the standby (EXPBKP). Both SIDs are 'EXP'. If I do a manual recover (after I sync the archive logs from primary to standby), it works. If I do a manual log switch, logs are shipped AND applied on the standby. But after a while I see some errors in the primary alert log, and logs are not shipped anymore:
Errors in file /applications/oracle/admin/EXP/bdump/exp_arc2_1277.trc:
ORA-03135: connection lost contact
Tue Oct 26 08:30:14 2010
FAL[server, ARC3]: FAL archive failed, see trace file.
Tue Oct 26 08:30:14 2010
Errors in file /applications/oracle/admin/EXP/bdump/exp_arc3_1279.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
[code]....
I have set up 11g DataGuard with a physical standby. Everything seems to be fine. I have also tested switchover and failover using the DG broker, which seem to work fine. All I would like to know about is the ORA-01109 during switchover.
DGMGRL> SWITCHOVER TO TEST_stby;
Performing switchover NOW, please wait...
New primary database "TEST_stby" is opening...
Operation requires shutdown of instance "TEST1" on database "TEST_prim"
Shutting down instance "TEST1"...
ORA-01109: database not open
[code].....
This ORA error occurs when the broker tries to shut down the old primary ("Operation requires shutdown of instance "TEST1" on database "TEST_prim" ... Shutting down instance "TEST1""). When the primary is in the OPEN state, why does it report that the database is not open?
1) The primary database is UP. The physical standby database is DOWN. The current archive log sequence is 99 on the primary.
We have to apply archive logs 51 through 99 to the standby database. But unfortunately there is no backup of those archive logs, and archive logs 51 through 98 have been deleted on the primary.
Now how would you apply these archive logs from the primary database to the physical standby database?
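The standard technique for this gap (a sketch; the SCN and paths are placeholders) is an RMAN incremental backup taken on the primary from the standby's current SCN, which rolls the standby forward without needing the deleted archive logs:

    -- on the mounted standby: find the SCN to roll forward from
    SELECT current_scn FROM v$database;

    -- on the primary, in RMAN (1234567 stands in for the SCN found above)
    BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/stby_roll_%U';

    -- copy the pieces to the standby host, then on the standby, in RMAN
    CATALOG START WITH '/tmp/stby_roll';
    RECOVER DATABASE NOREDO;

After the NOREDO recovery, managed recovery can be restarted and apply continues from sequence 99 onwards.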
We have to configure DataGuard for our 24x7, business-critical, 2-node RAC banking database (10.2.0.4). Before proceeding with the configuration, we have to make sure what steps to follow to have minimum or no downtime.
1) A document covering DG setup in a RAC environment.
2) We have to perform a switchover as well, so we need its steps too: either the normal switchover steps are used, or we have to stop/start the RAC services as well (see the sketch below).
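For a physical standby, the RAC-specific part is mostly instance management; a rough sketch of the usual flow (database and instance names are hypothetical):

    # leave a single primary instance running before the switchover
    srvctl stop instance -d prod -i prod2

    -- on the surviving primary instance
    ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

    -- on the standby
    ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;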
In the DR site I have to clone a database from the physical standby, and the clone will be a normal database, not a standby. Is it possible to do an RMAN duplicate from Active DataGuard? Will it support duplicating from the active database, or do I have to take an RMAN backup of the standby database and duplicate from the backup pieces?
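In 11g, RMAN active duplication can use a physical standby as its source, so no intermediate backup is needed; a hedged sketch in which the connect strings and the clonedb name are hypothetical:

    rman TARGET sys@stby_tns AUXILIARY sys@clone_tns

    RMAN> DUPLICATE TARGET DATABASE TO clonedb
            FROM ACTIVE DATABASE
            NOFILENAMECHECK;

The duplicate comes up as an ordinary read-write database, not a standby.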
I have data stored in the format 01.01, 01.02, 02.03 in one field of a table. I don't want my users to deviate from this coding by typing something like 0001.01 or anything other than the above format. How can I prevent this?
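One simple guard (a sketch; T and CODE_COL are hypothetical names) is a check constraint pinned to the exact pattern of two digits, a dot, and two digits:

    ALTER TABLE t
      ADD CONSTRAINT code_format_chk
      CHECK (REGEXP_LIKE(code_col, '^[0-9]{2}\.[0-9]{2}$'));

With this in place, '01.01' is accepted while '0001.01' is rejected at insert/update time.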
I have two date fields in my form, a start date and an end date. I have changed their properties to DATETIME, set the initial value to $$dbdatetime$$, and set the format mask to dd/mm/rrrr 24HH:MM. What I want is:
1. I want to display A.M. and P.M. along with that.
2. If the end date/time is earlier than the start date/time, an error must be raised (see the sketch below).
3. If the user enters P.M. for a time before noon, it should give an error message.
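For point 1, adding AM to the format mask (e.g. DD/MM/RRRR HH:MI AM) is usually enough. For point 2, a hedged sketch of a WHEN-VALIDATE-ITEM trigger on the end-date item, where :blk.start_dt and :blk.end_dt are hypothetical item names:

    -- WHEN-VALIDATE-ITEM on the end-date item
    IF :blk.end_dt < :blk.start_dt THEN
      MESSAGE('End date/time cannot be earlier than the start date/time');
      RAISE FORM_TRIGGER_FAILURE;
    END IF;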
I had used a When-Validate-Item trigger against a text item.
I also had a push button (EXIT) for exiting from the form, with the code: Exit_Form(NO_VALIDATE);
But if invalid text is entered in the text item and I then use the EXIT push button, it does not exit from the form until valid text is entered in the text item.
How can I exit from the form even if the text item holds an invalid value?
In our company we have a requirement to export data out of live databases and import it into a test database for issue replication, debugging, and fixing, from time to time.
As a junior DBA, I am responsible for importing a dump file provided to me into a test DB. The export is taken by customer DBAs or consultants for a specified list of tables/schemas etc.
I have read-only access to the source database, and the issue I am facing is that after the import there are certain objects that get missed out or do not get imported properly.
I would like to know if there is a script I can use to generate a report on the source DB, then run the same on the target DB after the import, and compare the two to find out whether any objects got missed.
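A simple starting point (a sketch; APPSCHEMA is a placeholder) is to count objects by type on each side and compare:

    SELECT object_type, status, COUNT(*) AS object_count
      FROM dba_objects
     WHERE owner = 'APPSCHEMA'
     GROUP BY object_type, status
     ORDER BY object_type, status;

Running this on source and target after the import and diffing the output shows which object types lost members; including STATUS also flags objects that arrived but are INVALID.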
CREATE TABLE pan_number (pan CHAR(10) NOT NULL);

INSERT INTO pan_number (pan) VALUES ('ABCDE1234A');
INSERT INTO pan_number (pan) VALUES ('FGHIG5678F');
INSERT INTO pan_number (pan) VALUES ('ABCDE12345');
INSERT INTO pan_number (pan) VALUES ('ABCD1234A');

SELECT * FROM pan_number;
Now I need to validate for a valid PAN number: "the first 5 characters should be alphabetic, the last character should be alphabetic, and the total length of the PAN should be 10 characters".
I need to display the valid PAN numbers; only the first two rows above are valid PANs.
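Given the stated rule (which for a 10-character PAN means 5 letters, 4 digits, then a letter), a regular-expression filter does it; a sketch:

    SELECT pan
      FROM pan_number
     WHERE REGEXP_LIKE(pan, '^[A-Z]{5}[0-9]{4}[A-Z]$');

Against the sample data this returns only ABCDE1234A and FGHIG5678F. Note the column is CHAR(10), so shorter values such as ABCD1234A are blank-padded and fail the anchored match in any case.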
function isnumeric (p_string in varchar2)
    return boolean
as
    l_number number;
begin
    l_number := p_string;
    return TRUE;
exception
    when others then
        return FALSE;
end;
The external table is not recognizing the isnumeric function. Is there any way I can do this through an external table?
I'm getting an error like:
The following error has occurred:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "comma, date_format, defaultif, enclosed, (, ltrim, lrtrim, ldrtrim, notrim, nullif, optionally, ), rtrim, terminated"
KUP-01008: the bad identifier was: isnumeric
KUP-01007: at line 9 column 16
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1
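The KUP-01005 is the parser's way of saying that ORACLE_LOADER access parameters cannot call PL/SQL such as isnumeric. A common workaround (a sketch; EXT_TAB and NUM_COL are hypothetical names) is to declare the field as plain VARCHAR2 in the external table and filter afterwards:

    SELECT *
      FROM ext_tab
     WHERE REGEXP_LIKE(TRIM(num_col), '^-?[0-9]+(\.[0-9]+)?$');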
Insert into TEST (CREDIT_BALANCE, AWARDS_BALANCE)
Values ('1678.09', '1678.9');
Insert into TEST (CREDIT_BALANCE, AWARDS_BALANCE)
[Code]....
The values in those two columns from the valid records in the TEST table need to be inserted into the TEST1 table. We should validate the decimal values while selecting from the TEST table.
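A hedged sketch of that insert, assuming TEST1 has the same two columns and treating "valid" as a plain decimal number:

    INSERT INTO test1 (credit_balance, awards_balance)
    SELECT credit_balance, awards_balance
      FROM test
     WHERE REGEXP_LIKE(credit_balance, '^[0-9]+(\.[0-9]+)?$')
       AND REGEXP_LIKE(awards_balance, '^[0-9]+(\.[0-9]+)?$');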
I have a time field which I need to validate as military time, for example 1630. Times entered outside the military time range should not be permitted, e.g. 1364 or 2510.
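One way (a sketch; T and TIME_COL are hypothetical names) is a regular expression that constrains the hour to 00-23 and the minutes to 00-59:

    ALTER TABLE t
      ADD CONSTRAINT military_time_chk
      CHECK (REGEXP_LIKE(time_col, '^([01][0-9]|2[0-3])[0-5][0-9]$'));

This accepts 1630 but rejects 1364 (bad minutes) and 2510 (bad hour).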
1. The TXN_LOG table below is expected to hold 0.5 to 1 million records, and the TXN_LOG_HIST table will have at most 3 million records.
2. On a daily basis the system moves records from TXN_LOG to TXN_LOG_HIST.
3. The check below is performed for each transaction and is an overhead on our transaction processing; even without this check, my process currently achieves only 10 TPS (transactions per second).
BEGIN
    SELECT COUNT(1)
      INTO l_exists
      FROM txn_log
     WHERE ac_no = p_acho
       AND txn_init_date = p_txn_date
       AND amount = p_amt
[code]....
Is there any faster method to perform this record-exists validation against both tables?
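One option, sketched against the names in the snippet above, is to replace the COUNT over full tables with short-circuiting EXISTS probes, assuming (AC_NO, TXN_INIT_DATE, AMOUNT) is indexed in both tables:

    SELECT COUNT(*)
      INTO l_exists
      FROM dual
     WHERE EXISTS (SELECT NULL FROM txn_log
                    WHERE ac_no = p_acho
                      AND txn_init_date = p_txn_date
                      AND amount = p_amt)
        OR EXISTS (SELECT NULL FROM txn_log_hist
                    WHERE ac_no = p_acho
                      AND txn_init_date = p_txn_date
                      AND amount = p_amt);

l_exists ends up 1 if the record is in either table and 0 otherwise, and each probe can stop at its first match instead of counting every row.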
I have one column, VALUE, whose datatype is VARCHAR2, and I want to validate it against a TYPE column whose values are Char, Date, and Number. If the TYPE column holds the value Number, then the VALUE column should accept numeric input only. How can we achieve this?
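One way (a sketch; T, TYPE_COL, and VALUE_COL are hypothetical names, and the date mask is assumed to be DD/MM/YYYY) is a check constraint that applies a per-type pattern:

    ALTER TABLE t ADD CONSTRAINT value_matches_type_chk CHECK (
          (type_col = 'Char')
       OR (type_col = 'Number'
           AND REGEXP_LIKE(value_col, '^-?[0-9]+(\.[0-9]+)?$'))
       OR (type_col = 'Date'
           AND REGEXP_LIKE(value_col, '^[0-9]{2}/[0-9]{2}/[0-9]{4}$'))
    );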