My understanding is that code written in Forms 6i, whether SQL or PL/SQL, is executed by the Oracle Forms 6i runtime on the client machine.
For example, I am using the following statement in a procedure of the Forms 6i interface:
insert into emp_remote
select ename from emp@db1;
So where would the above statement be executed: on the client machine, or on the server named db1?
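One way to see where each part runs, assuming you can connect from SQL*Plus to the same database the form connects to (and that it is 9iR2 or later, where DBMS_XPLAN is available), is to look at the execution plan: operations marked REMOTE are shipped to db1 over the database link, while the INSERT itself is performed by the database the session is connected to, not by the Forms client.

explain plan for
insert into emp_remote
select ename from emp@db1;

select * from table(dbms_xplan.display);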
Can I directly upgrade Oracle 9.2.0.1.0 on the RHEL 4 platform to Oracle 11g Release 2 on the RHEL 5 platform? I have the export dump and a cold backup of the Oracle 9.2.0.1.0 database.
I'm trying to migrate our Forms application from a Linux platform to Windows 7. We have a common PLL which is attached to all forms and which unfortunately relies heavily on globals.
Attempting to run the application with the standard library attached to the login form has the effect that the globals are lost between the login form and subsequent forms. If I incorporate the library code directly into the login form, it functions correctly. If I use the debugger to track where the clearing occurs, it functions correctly.
I am migrating Oracle Forms 10g on a 32-bit Windows platform to 11g on a 64-bit Linux platform. I have about 500 FMBs in my 10g environment which currently work fine, but when I compile them on 11g an error file is generated. All my paths are set properly.
I have an object library which has multiple tabs, and every tab has its own object. While compiling, the forms are not inheriting the object values; only a few values from the object are picked up.
I have a set of code being performed in the package body of a program unit. I want to debug it step by step and see what values are being stored while executing or running the form.
/* Without HP and deposit, Down Payment(GST) = Total(GST) */
if nvl(v_deposit, 0) = 0 and nvl(v_loan_amt, 0) = 0 then
    v_gst_fr_dps := 0;
    v_gst_hp := 0;
    v_gst_bal_pay := v_total_gst;
[code].........
As in the above example, I want to see the values in v_gst_fr_dps, v_gst_hp, v_gst_bal_pay, etc. while the form is executing, and display them on the screen.
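If the Forms debugger is not convenient, one rough alternative is to surface the values with the MESSAGE built-in (or copy them into a display item). A minimal sketch, assuming it is placed right after the assignments inside the form's program unit and that the variable names are the ones from the snippet above:

-- show the intermediate values on the form's message line
MESSAGE('v_gst_fr_dps = ' || TO_CHAR(v_gst_fr_dps) ||
        ', v_gst_hp = ' || TO_CHAR(v_gst_hp) ||
        ', v_gst_bal_pay = ' || TO_CHAR(v_gst_bal_pay));
-- PAUSE halts execution until the operator acknowledges the message
PAUSE;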
Yesterday I got a wait event when I executed a simple select from a table. The select was like this:
SELECT emp_number from employer where subs_id = 111
I got one row, and the select is very fast. In our core banking system we have a package with a function which returns this information. I tested this select on the test DB and nothing was wrong. But when I executed the same select and the package on the production DB, the DB admin saw that 88 sessions were waiting for my session to release the resource. But what could have happened; it was a simple select? I used PL/SQL Developer to get the information from the table:
1) SELECT emp_number from employer where subs_id = 111
2) then the package with this function
Other users used an Oracle Forms screen to execute the package. How could a simple select statement stop the whole DB?
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for 64-bit Windows: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
[code]...
I forgot to say that after successful execution on the production DB I disconnected, and in EM my session was INACTIVE.
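If it happens again, it may help to check from a DBA session who is blocking whom, assuming you have access to the v$ views (in 10.2 v$session exposes BLOCKING_SESSION directly):

-- sessions that are currently blocked and the session blocking them
select sid, serial#, username, event, blocking_session, seconds_in_wait
from v$session
where blocking_session is not null;

-- the SQL the blocking sessions are (or were last) running
select s.sid, q.sql_text
from v$session s, v$sql q
where s.sql_id = q.sql_id
and s.sid in (select blocking_session from v$session
              where blocking_session is not null);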
We are migrating our servers from Solaris 9 to Solaris 10, and in the process we are migrating our databases as well. Where do I find any migration guides covering this? My plans/options for the migration are:
1) Using Data Pump (sketched below)
2) Transportable tablespaces
3) Using RMAN
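For option 1, a minimal sketch of driving a full Data Pump export from PL/SQL with DBMS_DATAPUMP, assuming the source database is 10g or later, that a directory object named DATA_PUMP_DIR exists, and that the file and job names are only placeholders:

declare
  h number;
  job_state varchar2(30);
begin
  -- create a full-database export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'FULL',
                          job_name => 'MIG_FULL_EXP');
  -- dump file and log file go to the DATA_PUMP_DIR directory object
  dbms_datapump.add_file(h, 'mig_full.dmp', 'DATA_PUMP_DIR');
  dbms_datapump.add_file(h, 'mig_full.log', 'DATA_PUMP_DIR',
                         filetype => dbms_datapump.ku$_file_type_log_file);
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, job_state);
  dbms_output.put_line('Export job finished with state: ' || job_state);
end;
/

The dump file would then be imported on the Solaris 10 side with impdp (or DBMS_DATAPUMP again).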
We have a 10.2.0.4 database running on the HP-UX platform which uses EMC for its storage. We want to migrate the database to an AIX server. Is it possible to just clone the LUNs where the datafiles live using the EMC utilities and then bring up a database on the AIX server, since both platforms use big endian for the files?
In my present environment, Oracle runs on Solaris 10 and I am planning to restore it to Linux 5. I have read through the Oracle docs and Metalink that a cross-platform restore with a different endian format can be done by TTS, but it says that the tablespace should stay read-only until we plug it into the destination server.
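The read-only window only needs to last while the metadata is exported and the datafiles are copied and converted. A minimal sketch of the source-side steps, assuming the tablespace being transported is called USERS:

-- check that the tablespace set is self-contained
exec dbms_tts.transport_set_check('USERS', true);
select * from transport_set_violations;

-- read-only only for the duration of the export/copy/convert
alter tablespace users read only;

-- ... export the metadata, copy/convert the datafiles, plug in at the destination ...

-- back on the source, once the copy is done
alter tablespace users read write;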
I have come across one odd situation where I have got 2 .dmp files to restore on a 10g Windows 2003 SP2 32-bit platform. The problem is that there is no information available about the username/password or anything else related to the source database. There is no 'from user' information, as is required for import. Is there any way to restore them?
We got a request from a customer to migrate a RAC database of size 1.8 TB from an HP-UX raw file system to AIX ASM with minimal downtime. I have seen a lot of methods for doing a cross-platform migration, and for non-ASM to ASM, but not together.
Do we have any proven method for such a cross-platform migration with raw file system to ASM conversion in a single go, with minimal downtime?
Is there a way to check which versions of Oracle are installed on the machine from the command line?
The thing is, I don't want to log in to the database, so I can't use SQL*Plus either. Until now I ran tnsping and checked the output, but where there are several installations of different versions I got only the one version that was first on the PATH environment variable.
As I understand it, we can't clone a database across platforms using RMAN, as an RMAN backup taken on, say, Unix is useless for recovery on Windows. Although I am thinking of export and import, would copying the datafiles, logfiles and controlfile serve the purpose of cloning across platforms?
Note that I do not mean transportable tablespaces here, only copying a cold backup across platforms.
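Whether a plain file copy (or RMAN CONVERT) can work depends mainly on the endian formats of the two platforms, which you can compare from the data dictionary, assuming 10g or later:

-- endian format of the current platform
select tp.platform_name, tp.endian_format
from v$transportable_platform tp, v$database d
where tp.platform_name = d.platform_name;

-- endian formats of all platforms Oracle can transport between
select platform_id, platform_name, endian_format
from v$transportable_platform
order by platform_id;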
I am trying to install Grid Infrastructure on Oracle Linux 6.3 in VirtualBox. I am getting the error "Running a 64-bit JVM is not supported on this platform" while verifying the cluster through cluvfy:
runcluvfy.sh stage -pre crsinst -n host1 -verbose
Running a 64-bit JVM is not supported on this platform.
I have checked that the Java version installed on the machine is 64-bit:
[root@rac1 grid]# java -version
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.1) (rhel-1.45.1.11.1.el6-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
We are planning to migrate our DB to a different platform. Both platforms are of big-endian format. From googling, I came across the following link: [URL]
In this IBM document, they are migrating from Solaris 5.9 (SPARC) to AIX 6. Both are of big-endian format. Since they are both of the same endian format, can't they use transportable database? Why are they using RMAN CONVERT DATAFILE (transportable tablespace)?
We have a 2-node RAC installation on IBM AIX 5.3 with Oracle Standard Edition. We need to create a manual standby database (because Standard Edition does not allow a Data Guard configuration). Can we do this on a different platform (e.g. Linux/Windows)? This is to make use of existing servers for the standby environment.
I did an import from 9i to 11gR2: 1. I created the 11gR2 DB, 2. created the tablespaces with an 8 KB block size, 3. imported the 9i dump into the 11gR2 DB.
Now I am getting some errors in the IMP log:
1. ORA-29339: tablespace block size 4096 does not match configured block sizes, for all the tablespaces (but I created the tablespaces with an 8 KB block size before the import; see the sketch below this list).
2. ORA-23327: imported deferred rpc data does not match platform of importing db
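For the ORA-29339 part, the import is apparently trying to recreate the original 4 KB tablespaces, which requires a 4 KB buffer cache in the target instance. A minimal sketch, assuming an SPFILE is in use; the cache size, tablespace name and datafile path are only placeholders:

-- allocate a buffer cache for the non-default 4 KB block size
alter system set db_4k_cache_size = 16m scope=both;

-- after that, tablespaces with the original 4 KB block size can be created
-- (or recreated by the import)
create tablespace ts_4k datafile '/u01/oradata/ts_4k_01.dbf' size 100m
blocksize 4096;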
I'm planning to upgrade a small database (~150 GB) from 10.2.0.3 on Windows 2003 32-bit to 11.2.0.3 RAC on Linux 5.8. The database contains Oracle Spatial too. What would be a suitable method, and is there a link to a document to follow?
I need some input from you on ASM configuration for the Windows XP platform.
1) First I executed the script below: <orahome>\bin\localconfig add
2) Then I created the folder below manually: c:\asmdisks\asmdisk1
3) And now I am running the command below: ASMTOOL -create c:\asmdisks\asmdisk1 250
At this stage I am getting the error below:
======================================
O/S-Error: <OS 5> Access is Denied
======================================
We execute a load activity every day through a .NET application, taking a time slot for the database to ensure nobody else is using it at that time. But the AWR reports show different issues on different days.
When I try to execute the statement below in SQL, I get an error:
create or replace type sum_n as object
( nodes node_d,
  constructor function sum_n return self as result,
  member procedure do_s (m date, exd varchar)
);
/
LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0      PL/SQL: Compilation unit analysis terminated
2/9      PLS-00201: identifier 'NODE_d' must be declared
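PLS-00201 here means that the attribute's type, NODE_D, does not exist (or is not visible) in the schema at the time SUM_N is compiled, so the dependent type has to be created first. A minimal sketch, assuming NODE_D is meant to be an object type; its attributes below are made up purely for illustration:

-- the dependent type must exist before sum_n can reference it;
-- the attributes here are placeholders
create or replace type node_d as object
( node_id number,
  node_value number
);
/

-- now sum_n compiles against it
create or replace type sum_n as object
( nodes node_d,
  constructor function sum_n return self as result,
  member procedure do_s (m date, exd varchar)
);
/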
I have a trigger that fires on an update of a table. The trigger calls a procedure, and this procedure updates the same record in the table that fired the trigger. This situation returns the error ORA-00060: deadlock detected while waiting for resource. Is there any way to make this work?
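One common way around this, assuming the procedure only needs to change columns of the very row that fired the trigger, is to assign through :NEW in a BEFORE ... FOR EACH ROW trigger instead of issuing a second UPDATE against the same table. A minimal sketch with made-up table and column names:

create or replace trigger trg_orders_bur
before update on orders            -- table and column names are placeholders
for each row
begin
  -- adjust the row being updated directly; no extra UPDATE, so no deadlock
  :new.last_changed := sysdate;
  :new.total_amount := nvl(:new.total_amount, 0);
end;
/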
In our production environment some stored procedures take a long time to execute, but when the same procedure is executed from the PL/SQL Developer client it runs very quickly.
I have a question about procedure execution and function execution in the Oracle database. I want to know which is faster to execute: a procedure or a function.
How can I prove it through examples? Can I see the explain plan for a procedure and a function, or is there any other way to prove which one is faster in execution?
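Explain plans apply to SQL statements rather than to PL/SQL units, so a more direct comparison is simply to time both doing equivalent work. A minimal sketch using DBMS_UTILITY.GET_TIME (which returns hundredths of a second), assuming a procedure P_DEMO and a function F_DEMO that you have already written:

declare
  t0 number;
  v number;
begin
  t0 := dbms_utility.get_time;
  for i in 1 .. 100000 loop
    p_demo(i);                      -- placeholder procedure
  end loop;
  dbms_output.put_line('procedure: ' || (dbms_utility.get_time - t0) || ' cs');

  t0 := dbms_utility.get_time;
  for i in 1 .. 100000 loop
    v := f_demo(i);                 -- placeholder function
  end loop;
  dbms_output.put_line('function:  ' || (dbms_utility.get_time - t0) || ' cs');
end;
/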
We have the following case: an application modifies a table in an Oracle db (10.2.0.3.0).
Unfortunately the update SQL statements from the application always use the condition "where Column1 = 'some given value'", which is wrong (never mind why).
It should instead be "where Column1 = 'some value' and Column2 = 'val for Column2'". The value for Column2 will be taken from the very SQL statement being issued (we can make the application update Column2 even if the value in it never changes).
So all the update queries from the application look at the moment like this:
"update my_table set Column2 = 'val for Column2', Column3 = 'some other values', Column4 = 'some other value' where Column1 = 'some given value'".
We would like to capture them and somehow modify them on the fly to look like this:
"update my_table set Column2 = 'val for Column2', Column3 = 'some other values', Column4 = 'some other value' where Column1 = 'some given value' and Column2 = 'val for Column2'".
Can a trigger "before update" do it? For some reason we cannot at the moment ask the vendor to change the hard code of the application so we are looking for a temporary workaround.
I have a very big Oracle procedure. Since it is so big and calls many other procedures, I am not able to debug the exceptions thrown. Is there any Oracle utility which logs all the procedures called by the master procedure step by step and maintains a detailed record?
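There is no built-in switch that records every call, but since 10g the error backtrace reports the exact program unit and line where an exception was raised, even several levels deep. A minimal sketch of wrapping the top-level call, assuming MASTER_PROC stands in for the big procedure:

begin
  master_proc;                      -- placeholder for the master procedure
exception
  when others then
    -- which unit and line actually raised the exception
    dbms_output.put_line(dbms_utility.format_error_backtrace);
    -- the error message stack itself
    dbms_output.put_line(dbms_utility.format_error_stack);
    raise;
end;
/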