with t1 as
(
  select 'eff_date' param_name, 'mb256_type' param_type, '01-01-1970' param_value from dual
  union all
  select 'disc_date' param_name, 'mb256_type' param_type, '31-12-9999' param_value from dual
  union all
  select 'initial val' param_name, 'mb256_type' param_type, '30' param_value from dual
)
select param_name, param_type, param_value from t1;
Desired output: I need the three values on one row, in three different columns:

param_value
01-01-1970   31-12-9999   30
I tried the query below:

SELECT *
FROM (
  with t1 as
  (
    select 'eff_date' param_name, 'mb256_type' param_type, '01-01-1970' param_value from dual
    union all
    select 'disc_date' param_name, 'mb256_type' param_type, '31-12-9999' param_value from dual
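The attempt is cut off above, but for reference, here is a hedged sketch of one way to get all three values onto a single row using the 11g PIVOT clause (on 10g and earlier, a MAX(DECODE(...)) per column achieves the same thing); the output column aliases are my own choice:

with t1 as
(
  select 'eff_date' param_name, 'mb256_type' param_type, '01-01-1970' param_value from dual
  union all
  select 'disc_date' param_name, 'mb256_type' param_type, '31-12-9999' param_value from dual
  union all
  select 'initial val' param_name, 'mb256_type' param_type, '30' param_value from dual
)
select *
from   (select param_name, param_value from t1)   -- project only what the pivot needs
pivot  (max(param_value)
        for param_name in ('eff_date'    as eff_date,
                           'disc_date'   as disc_date,
                           'initial val' as initial_val));

Note that any extra column left in the inline view (param_type is constant here, so it would be harmless) becomes an implicit grouping key and can split the output across rows.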
I have a PIVOT XML query that returns results that I want to display as an HTML report from SQL*Plus. Is there a way this can be done readily? I have searched the net and found references to XML Publisher, but in my current situation we do not have the product available.
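SQL*Plus itself can emit HTML via SET MARKUP, so no XML Publisher is needed for a basic report. A minimal sketch (the report title and spool file name are placeholders; for a PIVOT XML query you may additionally need to serialize or shred the XMLTYPE column, e.g. with XMLSERIALIZE or XMLTABLE, before it renders usefully):

set markup html on spool on entmap on head '<title>Pivot report</title>'
spool pivot_report.html
-- run the PIVOT XML query here
spool off
set markup html off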
I am having some difficulties with this trigger. It keeps giving me the error "ERROR at line 5: PL/SQL: ORA-00923: FROM keyword not found where expected", even though I am not using a SELECT before the line it says the error is on. Here is the trigger that I am attempting to create.
CREATE OR REPLACE TRIGGER ClassRestraint
BEFORE INSERT ON Enrolled
FOR EACH ROW
DECLARE
  numCourses NUMBER := 0;
  myException EXCEPTION;
BEGIN
[code]...
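Since the body after BEGIN is elided, here is a hedged sketch of a common shape for this kind of trigger (the column name snum and the limit of 5 courses are pure assumptions for illustration). ORA-00923 generally points at a malformed SELECT list inside the body rather than at the trigger header:

CREATE OR REPLACE TRIGGER ClassRestraint
BEFORE INSERT ON Enrolled
FOR EACH ROW
DECLARE
  numCourses NUMBER := 0;
  myException EXCEPTION;
BEGIN
  -- Count existing enrollments for the incoming student
  -- (snum is an assumed column name)
  SELECT COUNT(*)
  INTO   numCourses
  FROM   Enrolled
  WHERE  snum = :NEW.snum;

  IF numCourses >= 5 THEN
    RAISE myException;
  END IF;
EXCEPTION
  WHEN myException THEN
    RAISE_APPLICATION_ERROR(-20001, 'Student is already enrolled in the maximum number of courses');
END;
/

Also note that, once it compiles, a row-level trigger that queries its own table (Enrolled) will raise ORA-04091 (mutating table) at run time; a statement-level or compound trigger avoids that.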
I have written PL/SQL packages for data loading through a pipelined function, for better performance. The package below compiled successfully, but at run time it shows an error like "ORA-00932: inconsistent datatypes: expected - got -".
CREATE OR REPLACE PACKAGE pkg_mkt_hub_load AS
  PROCEDURE sp_final_load_mkt_hub;
  FUNCTION fnc_pipe_tot_lvl_idx_mon_hub (pi_input_cur IN SYS_REFCURSOR)
    RETURN tot_lvl_idx_mon_tt PIPELINED;
[code]...
SHOW ERRORS
Error:
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got -
ORA-06512: at "GPAIHMKTDTA.PKG_MKT_HUB_LOAD", line 33
ORA-06512: at "GPAIHMKTDTA.PKG_MKT_HUB_LOAD", line 55
ORA-06512: at "GPAIHMKTDTA.PKG_MKT_HUB_LOAD", line 92
ORA-06512: at line 1
Type scripts:

create or replace type tot_lvl_idx_mon_ot as object (
  SSIA_INDEX_ID VARCHAR2(60),
  start_date    date,
  CURRENCY      VARCHAR2(10),
  LEVEL1        NUMBER(31,11),
  TYPE          VARCHAR2(31),
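Since the package body is elided, here is a hedged sketch of the usual fetch-and-pipe pattern for this function (the five attributes are taken from the truncated type above, which may have more columns; in the real case the function lives in the package body). ORA-00932 at run time in this pattern typically means the ref cursor's SELECT list does not match the FETCH target list position-for-position in datatype:

CREATE OR REPLACE TYPE tot_lvl_idx_mon_tt AS TABLE OF tot_lvl_idx_mon_ot;
/

CREATE OR REPLACE FUNCTION fnc_pipe_tot_lvl_idx_mon_hub (pi_input_cur IN SYS_REFCURSOR)
  RETURN tot_lvl_idx_mon_tt PIPELINED
IS
  l_index_id VARCHAR2(60);
  l_start    DATE;
  l_ccy      VARCHAR2(10);
  l_level1   NUMBER;
  l_type     VARCHAR2(31);
BEGIN
  LOOP
    -- The FETCH list must match the cursor's SELECT list column-for-column;
    -- a positional datatype mismatch here raises ORA-00932 at run time.
    FETCH pi_input_cur INTO l_index_id, l_start, l_ccy, l_level1, l_type;
    EXIT WHEN pi_input_cur%NOTFOUND;
    PIPE ROW (tot_lvl_idx_mon_ot(l_index_id, l_start, l_ccy, l_level1, l_type));
  END LOOP;
  CLOSE pi_input_cur;
  RETURN;
END;
/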
I have this script which should find tablespaces and their size, joined with free bytes. Trying to run this gives me the SQL Error: ORA-00923: FROM keyword not found where expected.
I have two questions:
1. Where should the FROM be?
2. Is there something wrong with the join?
==============================================
set linesize 120
col "TOTAL (KB)" format 99999999999999999
col "FREE (KB)" format 9999999999999999
col TSNAME format a35
col "% FREE" format a10

SELECT a.tablespace_name TSNAME,
       sum(a.bytes/1024) "TOTAL (KB)",
       Sum(b.bytes/1024) "FREE (KB)"
       To_char(round((sum(a.bytes/1024)/sum(a.bytes/1024))*100),2), 'FM99990D999999') || ' % ' "% FREE"
FROM   dba_data_files a, dba_free_space b
Where  a.tablespace_name = b.tablespacename
Group by a. tablespace_name
=============================================
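To answer both questions at once: the ORA-00923 comes from the missing comma after "FREE (KB)" and the unbalanced parentheses in the TO_CHAR expression, and the join condition misspells b.tablespace_name. There is also a logic issue: joining dba_data_files to dba_free_space row-by-row multiplies the sums, and the percentage divides total by total. A hedged rewrite that aggregates each view first:

SELECT df.tablespace_name                        tsname,
       df.total_kb                               "TOTAL (KB)",
       fs.free_kb                                "FREE (KB)",
       TO_CHAR(ROUND(fs.free_kb / df.total_kb * 100, 2),
               'FM99990D999999') || ' %'         "% FREE"
FROM   (SELECT tablespace_name, SUM(bytes) / 1024 total_kb
        FROM   dba_data_files
        GROUP  BY tablespace_name) df,
       (SELECT tablespace_name, SUM(bytes) / 1024 free_kb
        FROM   dba_free_space
        GROUP  BY tablespace_name) fs
WHERE  df.tablespace_name = fs.tablespace_name;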
I used the script from [URL]
It worked great, but I'm not sure how to use arithmetic functions to show me MB instead of bytes.
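Since 1 MB is 1024 * 1024 bytes, dividing the BYTES column twice by 1024 (or once by 1048576) does the conversion; a minimal sketch against dba_data_files:

SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS total_mb   -- bytes -> MB
FROM   dba_data_files
GROUP  BY tablespace_name;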
begin
  insert into plc_dw_dry_run_fic_rfsh values (424740,'1','','LTML','48000');
  insert into plc_dw_dry_run_fic_rfsh values (424736,'1','','LTML','32000');
  insert into plc_dw_dry_run_fic_rfsh values (424738,'1','','LTML','128000');
  insert into plc_dw_dry_run_fic_rfsh values (424783,'1','','LTML','96000');
  insert into plc_dw_dry_run_fic_rfsh values (424789,'2','','LTML','96000');
  insert into plc_dw_dry_run_fic_rfsh values (424750,'1',198,'LTML','10000');
  insert into plc_dw_dry_run_fic_rfsh values (424760,'1',199,'LDFM','20000');
  insert into plc_dw_dry_run_fic_rfsh values (424770,'1','','LTML','192000');
end;
/
Rules
-----
1) First, next_cpn_lvl_id1 should be considered (next_intrvl_code can be anything); if next_cpn_lvl_id1 is 198 or 199, then new_ltr_code should be '1a'.
2) If next_cpn_lvl_id1 is not 198 or 199, then next_intrvl_code should be considered:
   a) if next_intrvl_code is LTML and the mod of next_intrvl_value and 96000 is zero, then new_ltr_code should be '1a';
   b) if next_intrvl_code is LTML and the mod of next_intrvl_value and 48000 is zero, then new_ltr_code should be '1b'.
3) If 1 & 2 are not satisfied, ltr_code should be assigned to new_ltr_code. (A CASE expression implementing these rules is sketched below.)
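A hedged sketch of those rules as a CASE expression. The column names are inferred from the insert script above (I am assuming the positional order is id, ltr_code, next_cpn_lvl_id1, next_intrvl_code, next_intrvl_value); note the 96000 test must come before the 48000 test, since any multiple of 96000 is also a multiple of 48000:

SELECT t.*,
       CASE
         WHEN next_cpn_lvl_id1 IN (198, 199)                    THEN '1a'   -- rule 1
         WHEN next_intrvl_code = 'LTML'
              AND MOD(TO_NUMBER(next_intrvl_value), 96000) = 0  THEN '1a'   -- rule 2a
         WHEN next_intrvl_code = 'LTML'
              AND MOD(TO_NUMBER(next_intrvl_value), 48000) = 0  THEN '1b'   -- rule 2b
         ELSE ltr_code                                                      -- rule 3
       END AS new_ltr_code
FROM   plc_dw_dry_run_fic_rfsh t;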
I am trying to compile this block for updating a record. In the P_ADD_LOV_SQL column I have to update the following SELECT statement, but whenever I compile it, it shows an error in the SELECT statement: ORA-00923: FROM keyword not found where expected. How can I rearrange the SELECT statement so that it doesn't show the error? The coding is:
I am fetching data from more than one column of a table repeatedly (with a different query string each time) through a ref cursor, using a concatenation function, successfully. But I can see that if one concatenation function is missed due to human intervention, it causes the entire process to fail, returning the Oracle exception "Failed - ORA-00932: inconsistent datatypes: expected - got -".
The SAMPLE clause in the SELECT statement works well in most cases, but we found that in some instances the result is way off - discrepancies between 200% and 700% have been observed.
For example, we have three tables with the following results:
Table1: 495,365,317 rows (20 cols, unique primary key present); SAMPLE(0.002018712182064212) returns 41,499 (about four times off - we expected about 10,000)
Table2: 3,350,864,539 rows (5 cols, unique primary key present); SAMPLE(0.00029843044634040336) returns 9,835 (this is good, as it is close to 10,000)
Table3: 6,974,724,543 rows (5 cols, no unique primary key present); SAMPLE(0.00014337483779250091) returns 58,789 (about six times off - we expected about 10,000)
The tables have billions of rows, and that is why we want to do sampling. The sample percentage rate is computed to return about 10,000 rows in all three tables. On Table3, we ran the sampling three times on one occasion, and we got 58,570, 24,575 and 24,561.
I expected +/- 20% of variance, but 200% to 700% seems to be way too much. Once again, I stress that it does work well in most cases (another 3.4-billion-row table and numerous smaller tables we tested were well within +/- 5 percent of the target). I noted the presence of a primary key above because I read an article saying that the SAMPLE function relies on the existence of a primary key (which does not quite explain the examples above). Is this kind of spread something we should expect, or is it a bug? Is the sampling rate too small for such large tables?
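One thing worth checking: SAMPLE has two forms, and the block form trades accuracy for speed. Plain SAMPLE(p) does row-level sampling (each row kept independently with probability p), while SAMPLE BLOCK(p) accepts or rejects whole blocks, so its row-count variance grows sharply when rows per block vary or the data is clustered. A quick comparison sketch on the poster's Table3:

-- Row sampling: each row kept independently with probability p
SELECT COUNT(*) FROM table3 SAMPLE (0.00014337483779250091);

-- Block sampling: faster, but far higher variance on clustered data
SELECT COUNT(*) FROM table3 SAMPLE BLOCK (0.00014337483779250091);

If the three runs returning 58,570 / 24,575 / 24,561 were block-sampled, that spread is much less surprising than it would be for row sampling.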
My table looks like this:

DP_VALUE
124325
2434
3536
3536
Code is:

(case when CL.DECIMALPLACES = 0 and CL.FIELDTYPE = 'D' and cl.DATATYPE = 'N'
      then trim(dp."Value") * 1000
      else dp."Value"
 end) as dp_value,
What I am trying to do: using another table, cl, I want to say that if the decimal places are 0, the field type is decimal (D/C) and the data type is number (N/T), then multiply the number by 1000.
If not, then leave it as it is.
When I run my code I get the error ORA-00932: inconsistent datatypes: expected NUMBER got CHAR, pointing at else dp."Value" end).
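The error arises because the two CASE branches return different datatypes: the THEN branch yields a NUMBER (trim(dp."Value") * 1000) while the ELSE branch yields the raw character column. Making both branches the same type fixes it; a hedged sketch assuming "Value" always holds numeric text (otherwise TO_NUMBER will raise ORA-01722):

(CASE
   WHEN cl.decimalplaces = 0 AND cl.fieldtype = 'D' AND cl.datatype = 'N'
     THEN TO_NUMBER(TRIM(dp."Value")) * 1000
   ELSE TO_NUMBER(dp."Value")          -- both branches now return NUMBER
 END) AS dp_value

Alternatively, TO_CHAR the THEN branch instead if the column must stay character.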
Environment: Oracle RDBMS 10g R2. DB OS: HP Itanium
We use Oracle EBS R12.1.2 in our company, and one of the analysts reported a performance issue when saving the configuration in the Pricing module. The common fix is to gather stats on the BOM_EXPLOSIONS table. Recently, when the issue occurred, I collected statistics on the table. The performance didn't improve. I went ahead and decided to trace the Oracle Forms session using the profile 'Initialization SQL Statement - Custom' at user level.
I also monitored the session in OEM 10g Grid. The analyst performed the same set of steps and the performance was normal and acceptable. The analyst tried again and the performance matched expectations. I cleared the trace profile and the analyst tried again. This time the performance was as bad as the original issue. The issue resolved itself later in the day. This has made me curious, and I thought I would discuss it here.
I have had similar experiences with 10g and 11g: when I enable the trace, the issue cannot be reproduced, and when the trace is off, the issue pops back up.
I used the Exchange Partition feature to swap segments between 2 tables - one partitioned and one non-partitioned. The exchange went well. However, all the data in the partitioned table has gone to the partition which stores the maxbound values.
/** actual table names changed due to client confidentiality issues */
-- Drop the 2 intermediate tables if they already exist
drop table ordered_inv_bkp cascade constraints;
drop table ordered_inv_t cascade constraints;

/**
1st create a Non-Partitioned Table from ORDERED_INV and then add the primary key and unique index(s):
*/

create table ordered_inv_bkp as select * from ordered_inv;

alter table ordered_inv_bkp add constraint ordinvb_pk primary key (ordinv_id);

-- create unique index ordinv_scinv_uix on ordered_inv_bkp( SCP_ID ASC,
[code]....
-- Next, we have to create a partitioned table ORDERED_INV_T with a similar
-- structure as ORDERED_INV.
-- This is a bit tricky, and involves a pl/sql code
-- Add section to set default values for the intermediate table OL_ORDERED_INV_T
FOR crec_cols IN ( SELECT u.column_name, u.nullable, u.data_default, u.table_name
                   FROM   USER_TAB_COLUMNS u
                   WHERE  u.table_name = 'ORDERED_INV'
                   AND    u.data_default IS NOT NULL ) LOOP
[code]....
-- Next, use exchange partition for actual swipe
-- Between ordered_inv_t and ordered_inv_bkp
-- Analyze both tables : ordered_inv_t and ordered_inv_bkp
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME => 'ORDERED_INV_T');
  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME => 'ORDERED_INV_BKP');
END;
/
SET TIMING ON;
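The exchange command itself is elided above, but the symptom described (everything landing in the maxbound partition) is exactly what EXCHANGE PARTITION does when pointed at that partition: it swaps segments with the one named partition and, with WITHOUT VALIDATION, never checks the rows against the partition bounds. A hedged sketch of the typical form (the partition name p_max is an assumption):

ALTER TABLE ordered_inv_t
  EXCHANGE PARTITION p_max WITH TABLE ordered_inv_bkp
  INCLUDING INDEXES
  WITHOUT VALIDATION;

To spread rows across partitions instead, either exchange and then SPLIT PARTITION repeatedly, or simply INSERT /*+ APPEND */ ... SELECT from the backup table so each row is routed by its partition key.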
While starting up my database I am getting this error. By mistake I had moved the log files, and after some time I put those log files back into the same directory, but I am still getting this error.
I am executing a SQL statement which is doing a FTS in parallel mode. The server has 8 CPUs and threads_per_cpu is 2.
The v$sql shows PX_SERVERS_EXECUTIONS as 8
select PX_SERVERS_EXECUTIONS, sql_text from v$sql where sql_id = '0q0nk5117yth2';

8, select /*+ full(a) parallel(a)....
However, px_sessions shows 17 sessions (16 parallel sessions + 1 parent session, where sid = qcsid). In px_sessions, these 16 parallel sessions are divided into 2 server sets, 1 and 2, and the values for degree and required degree are 8 and 16 respectively.
However, at all times only the 8 sessions which belong to server set 1 were active, though their state was waiting with event "PX Deq Credit: send blkd".
The other sessions, which belong to server set 2, were never active and always had wait event 'PX Deq: Execution Msg'.
What could be the reason that 16 parallel sessions could not be started, though I am the only person using the server and there aren't any batch jobs, dbms_jobs, or even archive logs (not a prod system)?
Note that the parallel_max_servers setting is 16.
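One plausible reading of these numbers: a plan with two slave sets works as producers and consumers, so a DOP of N needs 2 x N slaves. With parallel_max_servers = 16, a requested DOP of 16 (req_degree = 16) cannot get 32 slaves and is downgraded to 8 (degree = 8), which uses all 16 slaves in two sets of 8; one set sitting on "PX Deq: Execution Msg" simply means it is idle waiting for rows, and "PX Deq Credit: send blkd" is normal flow-control between the sets. A hedged diagnostic sketch to see the allocation for a given query coordinator SID:

SELECT qcsid, server_set, degree, req_degree, COUNT(*) AS slaves
FROM   v$px_session
WHERE  qcsid = :qc_sid          -- bind the coordinator SID observed above
GROUP  BY qcsid, server_set, degree, req_degree
ORDER  BY server_set;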
Another issue: during the start of the query, approximately 100-115 blocks per second were read (checked from longops); however, after 60-70% of the blocks are read, the rate falls to 10-20 blocks/second across all parallel sessions.
When I open the primary database in my Data Guard environment, it raises an error:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00314: log 1 of thread 1, expected sequence# 77 doesn't match 0
ORA-00312: online log 1 thread 1: '/u01/app/oracle/oradata/oracl/redo01.log'
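ORA-00314 here means the online log's header no longer matches what the controlfile expects (the "doesn't match 0" suggests the copied-back file is stale or was never written). If group 1 is not the CURRENT group and its contents are not needed for recovery, one common remedy is to recreate it; a hedged sketch:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
ALTER DATABASE OPEN;

If the group was CURRENT, incomplete recovery (or restoring from a good backup) may be required instead; check V$LOG for each group's STATUS before clearing anything.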
Is PIVOT the right command to use? If so, how do I do this? Most pivot examples I've looked at use an aggregate like SUM, which is not really what I am trying to accomplish here.
I'm trying to use a PIVOT on the following data set:
ID    STATUS_DESC  PAY_STATUS        PAID_DATE           TRANSACTION_TYPE  TRANSACTION_DESC  DEBIT  TOTAL
----  -----------  ----------------  ------------------  ----------------  ----------------  -----  -----
9876  In Progress  2nd Payment Made  11-DEC-12 19.38.57  Card Payment      Payment 2           349    349
9876  In Progress  2nd Payment Made  06-DEC-12 14.33.57  Card Payment      Payment 1           100    100
However I'm still getting two rows as per the below. Ideally all data should be on a single row.
ID    STATUS_DESC  PAY_STATUS        PAYMENT_1_DATE      PAYMENT_1_AMT  PAYMENT_2_DATE      PAYMENT_2_AMT  TOTAL
----  -----------  ----------------  ------------------  -------------  ------------------  -------------  -----
9876  In Progress  2nd Payment Made  06-DEC-12 14.33.57            100                                       100
9876  In Progress  2nd Payment Made                                     11-DEC-12 19.38.57            349    349
I have constructed my pivot using the following on the outer select:
PIVOT (MAX (insert_timestamp) AS paid_date,
       SUM (debit) AS amt
       FOR transaction_desc IN ('Payment 1' AS payment_1, 'Payment 2' AS payment_2));
I've used MAX to pivot the date and also tried using NVL on the insert_timestamp but still no luck.
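In a PIVOT, every column of the source projection that is not the FOR column or an aggregate argument becomes an implicit GROUP BY key. Here TRANSACTION_TYPE and the per-row TOTAL (349 vs 100) survive into the projection, differ between the two payments, and therefore keep the rows apart; MAX or NVL on the timestamp cannot fix that. A hedged sketch (payments stands in for the real table or inner query) projecting only the true grouping keys:

SELECT *
FROM  (SELECT id, status_desc, pay_status,    -- implicit GROUP BY keys
              transaction_desc,               -- pivot FOR column
              insert_timestamp, debit         -- aggregate sources
       FROM   payments)                       -- table name assumed
PIVOT (MAX (insert_timestamp) AS paid_date,
       SUM (debit)            AS amt
       FOR transaction_desc IN ('Payment 1' AS payment_1, 'Payment 2' AS payment_2));

If an overall TOTAL column is still needed, compute it after the pivot (payment_1_amt + payment_2_amt) rather than carrying it through.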
In my catalog for the "source" database (the RMAN target DB), I have the backup sets for a full database backup that ended at Feb. 7, 03:43:37. These are online backups, so archived redo logs were being generated while the backup ran, and the following archived redo log backup finished at Feb. 7, 04:00:24.
We duplicate databases all the time, so this is not a new concept for us. The one thing that has changed is that we now back up to disk (using the flashback recovery area) and then later initiate a backup to tape. Prior to this go-live, we did all of our backups directly to tape. The catalog does not seem confused. It knows it needs to go to tape because the time is beyond the retention for disk backups. The only problem is that it is going to the backup prior to the backup set I want, but only for a couple of files.
In the past, when everything went directly to tape, we would do a SET UNTIL TIME 'Feb. 7, 03:43:37' and it would automatically restore the backup set that finished then and apply archived redo logs as necessary to make a consistent copy. Now, if I use the same model, it goes to a backup set from the prior date for 3 particular files. If I change the time to when the archived redo log backup ended, 04:00, it still goes back to the day before, but only for 2 files.
I can list a backup of each specific file and see that the file is in the backup set from which I expect RMAN to pull. How can I figure out a date/time to go back to, if not by reviewing the catalog entries and timestamps?
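Rather than reverse-engineering a timestamp from catalog entries, RMAN can be asked directly which backups it would use for a given point in time: RESTORE ... PREVIEW reports the chosen backup sets without restoring anything. A hedged sketch using the times from above:

RUN {
  SET UNTIL TIME "TO_DATE('2013-02-07 04:00:24','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE PREVIEW;
}

Comparing that output with LIST BACKUP OF DATAFILE <n> for the files in question should show why RMAN prefers the earlier backup set - typically the checkpoint SCN of the newer copy of those files is later than what the UNTIL time allows.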
I have a requirement to write a single SQL query that generates a pivot report. I found some examples in a Google search, but they hard-code the values, which is fine when the list is limited, like the months in this example.
I want to write a similar query to present the amount based on product type, but I have around 200 types of products. I can't write a CASE/DECODE statement that many times.
I need a query which will produce the output in pivot format dynamically, depending on the number of values.
select Product,
       sum(case when Month = 'Jan' then Amount else 0 end) Jan,
       sum(case when Month = 'Feb' then Amount else 0 end) Feb,
       sum(case when Month = 'Mar' then Amount else 0 end) Mar
from Sales
group by Product
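One version-safe way to avoid writing 200 CASE branches is to generate them: loop over the distinct product types, build the column list as a string, and open the finished statement as a ref cursor. A hedged sketch (the table name sales_products and columns product, product_type and amount are assumptions; note the generated SQL must stay under the 32,767-byte VARCHAR2 limit):

DECLARE
  l_cols VARCHAR2(32767);
  l_sql  VARCHAR2(32767);
  l_cur  SYS_REFCURSOR;
BEGIN
  -- One SUM(CASE ...) column per distinct product type
  FOR r IN (SELECT DISTINCT product_type FROM sales_products ORDER BY product_type) LOOP
    l_cols := l_cols
              || ', sum(case when product_type = '''
              || r.product_type
              || ''' then amount else 0 end) "'
              || r.product_type || '"';
  END LOOP;

  l_sql := 'select product' || l_cols || ' from sales_products group by product';
  OPEN l_cur FOR l_sql;
  -- fetch from l_cur in the caller, or return it from a function
END;
/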
I am joining the two Oracle-provided demo tables, emp and dept, and want to show the result as a department row followed by the rows for the employees belonging to it.
Standard result of joining two tables looks like below:
DEPTNO  DNAME    EMPNAME
10      ACCT     SCOTT
10      ACCT     MILLER
20      SALES    JOHN
20      SALES    XYZ
30      FINANCE  AAA
Now I need the output as below, as per a report requirement:

ENTITY_TYPE  NAME/NO
DEPT         10
EMP          SCOTT
EMP          MILLER
DEPT         20
EMP          JOHN
EMP          XYZ
I am using Oracle 10g, release 10.2.0.1.0. Can I use an Oracle analytic function here?
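Analytic functions are not strictly needed: a UNION ALL of a department row set and an employee row set, with a sort key to keep each DEPT row ahead of its EMP rows, produces exactly this layout on 10.2. A hedged sketch against the standard emp/dept tables:

SELECT entity_type, name_no
FROM  (SELECT d.deptno          AS sort_dept,
              0                 AS sort_ord,    -- department header rows sort first
              'DEPT'            AS entity_type,
              TO_CHAR(d.deptno) AS name_no
       FROM   dept d
       UNION ALL
       SELECT e.deptno, 1, 'EMP', e.ename       -- employee detail rows
       FROM   emp e)
ORDER  BY sort_dept, sort_ord, name_no;

An alternative is GROUP BY ROLLUP with the GROUPING() function, but the rollup row sorts after its detail rows by default, so the UNION ALL form is usually simpler for this "header then detail" layout.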