SQL & PL/SQL :: Interface Table Compared With Normal Table - Result Dumped If Match Found
Aug 17, 2012
Oracle 10g, Windows XP
There is an interface table and there is a normal transactional table. The interface table is compared with the normal table, and if a match is found the result is dumped into another normal table.
I am using two cursors: one queries the interface table, and in a FOR loop I pass the results to the second cursor. The interface table has 5000+ rows and the transaction table has more than 3.7 million rows, and the program takes a long time to execute, almost 35-45 minutes.
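A set-based INSERT ... SELECT with a join usually removes this kind of nested-cursor bottleneck entirely. A minimal sketch, with hypothetical table and column names standing in for the real ones:

INSERT INTO result_tab (col_a, col_b, col_c)
SELECT t.col_a, t.col_b, t.col_c
  FROM interface_tab i
  JOIN transaction_tab t
    ON t.match_key = i.match_key;  -- whatever columns the two cursors currently compare
COMMIT;

One pass over each table lets the optimizer pick a hash join, instead of 5000+ separate probes of the 3.7-million-row table.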
I've got an Oracle install [non-production, but devel] that is a tad screwed up. We moved the box and as a result changed the hostname to match the new naming scheme. Ever since then Oracle EM has been somewhat confused. In any case, I don't want OEM anyway now. The plan is to learn SQL*Plus.
That being said, I've used emctl to shut down dbconsole, but it seems there is something somewhere that keeps restarting 2 processes that like to sit around and take up 100% CPU. I can kill them, they stay dead for a few hours, then crop up again. I was able to find this out about them:
And then this, which caused me to conclude it's Oracle EM:
SELECT sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text
  FROM v$session sess, v$sql sql
 WHERE sql.sql_id(+) = sess.sql_id
   AND sess.process IN (20334, 20336)
[code]...
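If DB Control is not wanted at all, deconfiguring it (rather than just stopping it) should keep the dbconsole watchdog from respawning those processes. A sketch of the standard 10g command, run as the oracle OS user:

emca -deconfig dbcontrol db

Adding -repos drop also removes the SYSMAN repository; since the hostname changed, a plain stop via emctl tends to keep being undone until the configuration itself is removed.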
I have a table of Costs. We have Jobs that run, and there will be a cost associated with a particular machine. So JobNo 1 may run on MachineA and have a cost of 50 dollars. Although it's not shown below, JobNo 1 could run on MachineB and so on.
We have operators (PERSONCODE) run the jobs on the machines. So Job 1 may be run by PERSONCODE 8 (e.g. Tony), and it may run on MachineA or MachineB. Multiple people may run a particular job. The PERSONCODE will be unique to the Job, and it is actually unique to the list. A person never works on more than one job.
I'm new to the Oracle environment. How can I specify a NONCLUSTERED INDEX on the primary key column during table creation? By default it will create a clustered index, but I need a nonclustered index on it.
I'm using the following statement to create a normal primary key constraint during table creation:
CONSTRAINT PKFORM_PROPS PRIMARY KEY (FORM_PROPS_PK) USING INDEX TABLESPACE DB123_INDEX
How can I change the above query so that it creates a NONCLUSTERED INDEX on the primary key column?
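Oracle has no CLUSTERED/NONCLUSTERED keywords, so the constraint shown above may already do what is wanted: on an ordinary heap table, a primary key is enforced by a separate B-tree index, which is the equivalent of SQL Server's nonclustered index. A sketch with hypothetical column names:

CREATE TABLE form_props (
  form_props_pk NUMBER,
  prop_value    VARCHAR2(100),
  CONSTRAINT pkform_props PRIMARY KEY (form_props_pk)
    USING INDEX TABLESPACE db123_index  -- separate B-tree, i.e. "nonclustered"
);

The counterpart of a SQL Server clustered index would be an index-organized table (ORGANIZATION INDEX), which Oracle only builds if you ask for it.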
I have an employee interface table, something like this.
emp_id  emp_name  job_title   supervisor_name
1       AJ        Engineer    BJ
2       CK        Analyst     ND
3       BJ        Manager     TR
5       TR        VP IT       JD
6       ND        S Manager   MD
7       MD        VP Telecom  SK
8       SK        VP Eng      JR
I want to identify the VP for each employee. The logic I have to apply is to check the supervisor of each employee to see if the supervisor has a designation starting with 'VP'. If not, I check the supervisor of the supervisor, and so on. I tried using a recursive query with CONNECT_BY_ROOT, but in the above example, for employee ND it lists the VP as both MD and SK. I need it to show only MD, who is lower in the hierarchy.
I am a Java person, but since my app uses the Oracle DB, I am the one who has to do this task.
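One way to keep only the nearest VP is to walk up the chain from every employee with CONNECT BY and keep the matching ancestor with the smallest LEVEL. A minimal sketch, assuming the interface table is called emp_int with the columns shown above:

SELECT employee,
       MIN(vp_name) KEEP (DENSE_RANK FIRST ORDER BY lvl) AS nearest_vp
  FROM (SELECT CONNECT_BY_ROOT emp_name AS employee,  -- the employee each chain started from
               emp_name AS vp_name,
               LEVEL    AS lvl                        -- distance up the chain
          FROM emp_int
         WHERE job_title LIKE 'VP%'                   -- keep only VP ancestors
        CONNECT BY PRIOR supervisor_name = emp_name)  -- walk upward; every row is a root
 GROUP BY employee;

For ND this returns MD rather than SK, because MD sits at the smaller LEVEL.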
I'm trying to create a function that simply returns the current database name (e.g. SELECT db_unique_name FROM v$database), but when I compile it, it comes up with:
Error(9,5): PL/SQL: SQL Statement ignored
Error(9,44): PL/SQL: ORA-00942: table or view does not exist
I am compiling and running this as the SYSTEM user, and I do think I need to set privileges/roles etc. to allow this (since I have read that using synonyms in functions/procedures requires permissions), but I cannot seem to find anything that tells me exactly what role/privilege I need to grant to let this happen.
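The usual cause: privileges acquired through roles (such as the catalog role behind SYSTEM) are disabled inside definer's-rights PL/SQL, so the function needs a direct grant on the underlying view. A sketch, run as SYS:

GRANT SELECT ON v_$database TO system;
-- v$database is a public synonym for SYS.V_$DATABASE; only a direct grant
-- (not one inherited via a role) is visible to stored PL/SQL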
I need help in building a query. I want to show the result of the query just like a pivot table. Test case:

CREATE TABLE CPF_YEAR_PAYCODE
(
  CPF_NO       NUMBER(5),
  INC_DATE     DATE,
  PAYCODE_TYPE CHAR(1 BYTE),
[code]...
I want my query output to look like the format attached in the XLS sheet.
I am getting a different result each time I run my dashboard procedure. I am using a temporary table with "ON COMMIT PRESERVE ROWS". The attendance dashboard procedure displays the employee attendance status based on IN and OUT punches, with statuses like AA - full day absent, GG - full day present, AG - first half absent, GA - second half absent.
Now, when I run my procedure the first time, I am getting status AA even though the IN and OUT timings are correct, and if I run it again it displays the status for the same employee as GG. I don't understand where the problem is that affects the status.
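With ON COMMIT PRESERVE ROWS, rows survive commits and stay in the global temporary table for the whole session, so a rerun can see leftovers from the previous run while a first run sees none. One common fix, sketched with a hypothetical table name, is to clear the GTT at the top of the procedure:

BEGIN
  DELETE FROM attendance_gtt;  -- drop any rows preserved from an earlier run in this session
  -- ... repopulate and compute the AA/GG/AG/GA statuses ...
END;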
I am not able to understand what's wrong with the code. I am trying to import data to a table using a CSV file. I have exported the data (CSV) from the interactive report, and I am just trying to insert the same data into the table through a process. When I tried to do so, it's throwing an error message saying NO_DATA_FOUND, and the file is not getting inserted into the wwv_flow_files table.
But when I removed the data from the CSV file for the comments field and then tried importing the file, the process worked. I don't understand what the problem is with the code.
I have a sample app setup in my workspace for this weird problem.
[URL]
Workspace details:
CSV file with comments field and data in it - when trying to import - throws an error message NO_DATA_FOUND
CSV file with comments field and without data in it - tried importing - this worked
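Since the failure tracks the comments column, one likely culprit is free text containing commas or quotes, which shifts the field count in an INSTR/SUBSTR-based splitter and makes a later field lookup raise NO_DATA_FOUND. A quote-aware splitter is one possible workaround; a minimal sketch (hypothetical helper, not the APEX API):

CREATE OR REPLACE FUNCTION csv_field (p_line IN VARCHAR2,
                                      p_n    IN PLS_INTEGER)
  RETURN VARCHAR2
IS
  l_in_quotes BOOLEAN     := FALSE;
  l_field     PLS_INTEGER := 1;
  l_buf       VARCHAR2(4000);
  l_ch        VARCHAR2(1 CHAR);
BEGIN
  FOR i IN 1 .. LENGTH(p_line) LOOP
    l_ch := SUBSTR(p_line, i, 1);
    IF l_ch = '"' THEN
      l_in_quotes := NOT l_in_quotes;      -- entering/leaving a quoted field
    ELSIF l_ch = ',' AND NOT l_in_quotes THEN
      l_field := l_field + 1;              -- unquoted comma: field boundary
    ELSIF l_field = p_n THEN
      l_buf := l_buf || l_ch;              -- collect characters of the wanted field
    END IF;
    EXIT WHEN l_field > p_n;
  END LOOP;
  RETURN l_buf;
END csv_field;
/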
I thought this was the easy bit in APEX, where you just create a form based on a table, with some validations etc., and use it to insert and update data. However, on inserting the first record, I get the following error:
is_internal_error: false
ora_sqlcode: 100
ora_sqlerrm: ORA-01403: no data found
[Code]....
The form is based on a table with a primary key and the primary key is populated from an APEX-generated sequence.
I tried recreating the form, but still no good, and now I get the no-data error even when clicking "RUN" at page level, so the page does not even display.
Following are 2 queries, which return the same results as long as the input parameter is NOT NULL:
select /*+ gather_plan_statistics */ * from PX_CJQ where decode(:x,null,1,object_id) = NVL(:x,object_id);
select /*+ gather_plan_statistics */ * from PX_CJQ where object_id = NVL(:x,object_id);
However, the execution plans differ a lot; the FTS cost and the rows-accessed count also vary. What could be the reason? PX_CJQ is simply a copy of SELECT * FROM dba_objects and has an index on object_id, which anyway is not being used in this case.
variable x number
exec :x := 28

Query - 1
***************
SQL> select /*+ gather_plan_statistics */ * from PX_CJQ where decode(:x,null,1,object_id) = NVL(:x,object_id);
SQL> select * from table(dbms_xplan.display_cursor(NULL,NULL,'ALLSTATS LAST'));
[code]...
Query - 2
***************
SQL> select /*+ gather_plan_statistics */ * from PX_CJQ where object_id = NVL(:x,object_id);
SQL> select * from table(dbms_xplan.display_cursor(NULL,NULL,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
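A likely explanation: with the plain object_id = NVL(:x, object_id) predicate, the optimizer can apply NVL OR-expansion, producing a concatenation of two FILTERed branches of which only one runs per execution; wrapping the column in DECODE defeats that rewrite, so that query is costed as a single full scan. Conceptually (a sketch of the rewrite, not the actual generated plan), query 2 behaves like:

select * from PX_CJQ where :x is null        -- branch taken when the bind is null
union all
select * from PX_CJQ where object_id = :x    -- branch taken when the bind is set
                       and :x is not null;

The second branch is also the one that could use the object_id index, if the optimizer deemed it cheaper than a full scan.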
I am unable to insert the result set of a query into a corresponding SQL table type variable, whereas the same functionality can be accomplished with a PL/SQL table type variable. Can't we do the same with a SQL type variable?
Ex:
Step 1:
SQL object type and table type creation:

drop type sql_emp_tab_type;
drop type sql_emp_type;

create or replace type sql_emp_type as object
( empno number,
  ename varchar2(20),
[code]...
Step 2: Accessing the table type object from a PL/SQL block.
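SQL (schema-level) collection types can be bulk-fetched, but each row must go through the object type's constructor; selecting bare columns into the collection is what typically fails. A minimal sketch, assuming the type has just the two attributes shown in Step 1 and a hypothetical source table emp:

DECLARE
  l_emps sql_emp_tab_type;
BEGIN
  SELECT sql_emp_type(empno, ename)   -- wrap each row in the object constructor
    BULK COLLECT INTO l_emps
    FROM emp;
  DBMS_OUTPUT.put_line(l_emps.COUNT || ' rows fetched');
END;
/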
I am trying to build a report. My query works fine when I run the report for a single area_code, but it does not show the proper result when the report is run for all area_codes available in the table. I have used two tables, transactions and balance.
create table transactions
( glcode    varchar2(10),
  area_code varchar2(10),
  debit     number,
  credit    number );

insert into transactions values(2000,'ap',200,200);
insert into transactions values(3000,'ap',222,222);
insert into transactions values(4000,'ap',123,123);
insert into transactions values(2000,'dp',200,200);
insert into transactions values(3000,'dp',222,222);
insert into transactions values(4000,'dp',123,123);
insert into transactions values(2000,'pp',200,200);
insert into transactions values(3000,'pp',222,222);
insert into transactions values(4000,'pp',123,123);
[code]....
I have two tables, T1 and T2. T1 is the original backup snapshot of records changed by the overnight batch in a big table, and T2 holds the overnight batch's changed records. Both tables have a similar number of rows (T2 might have more, for newly inserted rows), and you can find the differences by comparing the two according to the action column in T2 (C - Update, A - Insert, D - Delete).
I want to know how to compare these two tables to generate something like the following. I can join the two tables to generate the diff, but it comes out as one row per account.
client_nbr  branch_cd  account_cd  action  column            old_value  new_value
8888        123        45678       C       account_clsfn_cd  004        005
8888        123        45678       C       buy_cd            98         99
8888        012        34546       A       sell_cd                      12
8888        321        98765       D       dividend_cd       1
I am using Oracle 10g so Unpivot cannot be used.
CREATE TABLE T1
(
  CLIENT_NBR       CHAR(4 BYTE) NOT NULL,
  BRANCH_CD        CHAR(3 BYTE) NOT NULL,
  ACCOUNT_CD       CHAR(5 BYTE) NOT NULL,
  ACCOUNT_CLSFN_CD CHAR(3 BYTE),
  SELL_CD          CHAR(2 BYTE),
  BUY_CD           CHAR(2 BYTE),
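On 10g, a manual unpivot with one UNION ALL branch per audited column works in place of UNPIVOT. A sketch for two of the columns, assuming the three leading CHAR columns form the join key and the action column lives on T2:

SELECT client_nbr, branch_cd, account_cd, action, col_name, old_value, new_value
  FROM (SELECT t2.client_nbr, t2.branch_cd, t2.account_cd, t2.action,
               'account_clsfn_cd' AS col_name,
               t1.account_clsfn_cd AS old_value,
               t2.account_clsfn_cd AS new_value
          FROM t2 LEFT JOIN t1
            ON  t1.client_nbr = t2.client_nbr
            AND t1.branch_cd  = t2.branch_cd
            AND t1.account_cd = t2.account_cd
        UNION ALL
        SELECT t2.client_nbr, t2.branch_cd, t2.account_cd, t2.action,
               'buy_cd', t1.buy_cd, t2.buy_cd
          FROM t2 LEFT JOIN t1
            ON  t1.client_nbr = t2.client_nbr
            AND t1.branch_cd  = t2.branch_cd
            AND t1.account_cd = t2.account_cd)
 WHERE DECODE(old_value, new_value, 1, 0) = 0;  -- keep only changed values; DECODE treats NULL = NULL as equal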
Recently we upgraded from 10.2.0.2 to 11g (11.2.0.3) in an AIX 5.1 environment. After the upgrade, it has been noticed that CPU utilization is higher while executing a program.
CPU crosses 85% in 11g; in 10g it was 60% during the program execution. I have gone through the SQLs being executed, and the elapsed time for every SQL is less than in 10g, i.e. elapsed time for all SQLs in 11g is very low. In the AWR report, DB CPU stands at the top.
I have a SQL Server database I would like to migrate into Oracle. The database supports a large application and is around 10GB. I requested a new instance but was advised I would have to pay for that, whereas if I asked for a new schema it could go in our current company instance. I am fine with that, since it won't cost more money to just add a new schema to our company Oracle instance. I'm just curious: what is the advantage of getting a new instance compared to creating a new schema for 10 GB of data? I assume the advantage of a new instance is that our schema and work would have their own space/house and could grow in size without any issues?
describe rpthead

Name        Null     Type
----------- -------- -------------
RPTNO       NOT NULL NUMBER
RPTDATE     NOT NULL DATE
RPTD_BY     NOT NULL VARCHAR2(25)
PRODUCT_ID  NOT NULL NUMBER
describe rptbody
Name      Null     Type
--------- -------- -------------
RPTNO     NOT NULL NUMBER
LINENO    NOT NULL NUMBER
COMMENTS           VARCHAR2(240)
UPD_DATE           DATE
The fact is that we store the header in RPTHEAD and the real data in RPTBODY. The question is about using the SQL below to query all data for a PRODUCT_ID:
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
  FROM RPTBODY t0, RPTHEAD rpthead
 WHERE t0.RPTNO = rpthead.RPTNO
   AND t0.UPD_DATE >= to_date('1970/01/01 00:00:00','YYYY/MM/DD hh24:mi:ss')
   AND rpthead.PRODUCT_ID IN ('4647')
I do not want an ORDER BY clause, since the data set is too large and the sorting takes a long time. Is there any way to get the result rows sorted by RPTNO? We have an index on RPTNO on RPTBODY.
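Without ORDER BY, Oracle never guarantees row order, whatever the plan happens to do. The safe route is to keep the ORDER BY and let the index supply the order: if the optimizer drives the join through the RPTNO index, the plan needs no separate SORT ORDER BY step, so the clause costs almost nothing. A sketch with a hypothetical index name:

SELECT /*+ INDEX(t0 rptbody_rptno_ix) */  -- hint only nudges the plan; the ORDER BY is the guarantee
       t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
  FROM RPTBODY t0, RPTHEAD rpthead
 WHERE t0.RPTNO = rpthead.RPTNO
   AND t0.UPD_DATE >= to_date('1970/01/01 00:00:00','YYYY/MM/DD hh24:mi:ss')
   AND rpthead.PRODUCT_ID IN ('4647')
 ORDER BY t0.RPTNO;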
I am using Toad to output the details of the database. (I need these details in a format which I can then import into Excel.) However, when I run sporadic individual records through my query, the results come back correct and as expected. But when I run the full data set through my query, a lot of the records have different values than they do when run individually!
I've spent all day on this script, and it's really a hack of various other cursors and scripts brought together. It's driving me mad, as my code doesn't seem to be wrong (since the results are correct when a single record is run), yet there must be something wrong, as my full data set does not return correctly!
I have an Image Type on a form page. I want a default "not-found" image to display if the BLOB column value is null or if there is no data for that search value. The image is stored with the app: #APP_IMAGES#not-found.png
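One hedged approach (column, item, and table names hypothetical) is to build the img tag in SQL and fall back to the bundled image when the BLOB is empty, then render the column as a standard (non-escaped) report column:

SELECT CASE
         WHEN photo IS NULL OR dbms_lob.getlength(photo) = 0
         THEN '<img src="#APP_IMAGES#not-found.png" alt="not found" />'
         ELSE '<img src="'
              || apex_util.get_blob_file_src('P10_PHOTO', id)  -- item on the DML form page
              || '" alt="photo" />'
       END AS photo_img
  FROM person_photos;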
Primary RAC and Standby RAC databases are using version 10.2.0.4. They are configured & controlled via Data Guard Broker. Currently, the observer runs from one of the Standby nodes. All are on Solaris x86 64bit OS version 10.
Now I have to move the Observer to a 3rd DC, and I am wondering if I could use another OS for the observer instead of the same as the DB OS. Why? Our new infrastructure is on RHEL Linux; therefore, I would like to deploy the observer once and then wait for the existing databases to be upgraded and moved to RHEL. Also, I understand that the database and observer software should be of the same version, e.g. 10g, 11g.
My question is: what if I install & configure the observer on RHEL? The end result would be:
1. Primary DB: Solaris 10, Oracle Software binaries version 10.2.0.4 2. Secondary/Standby DB: Solaris 10, Oracle Software binaries version 10.2.0.4 3. Observer: RHEL 5.8, Oracle Software binaries version 10.2.0.4
Is this an acceptable mixed configuration to run for, say, six months? Also, it would be great to know if there are any license implications when running the observer on a different node running no databases at all.