Oracle 10.2.0.4. I believe the following query is incorrect (there is an index on col, but NULLs are not stored in it):

SELECT COUNT(*) FROM <TABLE>
WHERE col IN (null, 'a', 'b', 'c');

This works (no errors) and returns pretty quickly. However, I think the correct query would be:

SELECT COUNT(*) FROM <TABLE>
WHERE col IS NULL OR col IN ('a', 'b', 'c');

This one takes a long time. As I see it, it does a full table scan for the IS NULL part and uses the index for the rest, since the index cannot be used for NULL values. I would like an explanation of this, especially why Oracle accepts the first query, "where col in (null,'a','b','c')", without any issue.
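A sketch of why the first form is accepted but gives a different answer: the IN list is simply expanded into OR-ed equality comparisons, and "col = NULL" is never true, so rows with a NULL col are silently skipped rather than counted.

-- col IN (null,'a','b','c') is evaluated as:
--   col = NULL OR col = 'a' OR col = 'b' OR col = 'c'
-- "col = NULL" can never be TRUE, so the NULL rows never make it into COUNT(*).
-- One pass that shows both counts side by side, for comparison:
SELECT COUNT(CASE WHEN col IS NULL THEN 1 END)            AS null_rows,
       COUNT(CASE WHEN col IN ('a', 'b', 'c') THEN 1 END) AS in_list_rows
  FROM <TABLE>;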
I am trying to add a NOT NULL constraint on a collection type for testing. It does not allow null values, as shown below:
pointers@ORCL> declare
  type test_type is table of number not null;
  t_test test_type;
begin
  t_test := test_type(2, 4, null);   --null is being added
  dbms_output.put_line('Type variable is initialized and the first element value is '||t_test(1));
[code]....
But what I observed is that I can add null elements by using the EXTEND routine, even though the NOT NULL constraint is defined for that type.
Below is the example.
pointers@ORCL> ed
Wrote file afiedt.buf

declare
  type test_type is table of number not null;
  t_test test_type;
begin
  t_test := test_type(2, 4);
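A minimal block to probe what EXTEND actually appends, assuming it succeeds on the NOT NULL collection as observed above (the documented behavior may differ between versions):

set serveroutput on
declare
  type test_type is table of number not null;
  t_test test_type := test_type(2, 4);
begin
  t_test.extend;                               -- appends one element with no value supplied
  if t_test(t_test.last) is null then
    dbms_output.put_line('Element '||t_test.last||' is NULL despite the NOT NULL constraint');
  else
    dbms_output.put_line('Element '||t_test.last||' = '||t_test(t_test.last));
  end if;
end;
/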
We need to do load testing on our DB before going live, mainly with 5 queries that our web interface uses to pull data from the database. For example, I need to send more than 400-500 sessions to our database using these 5 queries (with different values) and note down the time taken at every increment of 50 users.
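Driving 400-500 concurrent sessions really needs something outside the database (many scripted SQL*Plus sessions, or a load-testing tool), but a simple per-query timing harness can be sketched in PL/SQL like this; the query against DUAL is just a placeholder for one of the 5 real queries.

set serveroutput on
declare
  l_start   number;
  l_elapsed number;
  l_dummy   number;
begin
  l_start := dbms_utility.get_time;            -- hundredths of a second
  for i in 1 .. 100 loop                       -- execute the query 100 times
    select count(*) into l_dummy from dual;    -- replace with one of the 5 queries
  end loop;
  l_elapsed := dbms_utility.get_time - l_start;
  dbms_output.put_line('Elapsed: '||l_elapsed/100||' seconds for 100 executions');
end;
/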
I have approximately 3 lakh (300,000) records in the mtl_material_transactions table. The value of attribute3 is populated with SYSDATE based on some business condition.

Out of these 3 lakh records, only 5 have a NULL attribute3. The problem is with the condition 'mmt.attribute3 is null'. The query is pasted below:
select mmt.*
  from mtl_material_transactions mmt,
       mtl_transaction_types mtt,
       mtl_parameters mp
 where mmt.transaction_type_id = mtt.transaction_type_id
   and mp.organization_id = mmt.organization_id
   and (upper(mtt.attribute1) = 'OUTBOUND'
        or (upper(mtt.attribute1) = 'FORM'
            and upper(mtt.attribute2) in ('ISSUE', 'RECEIPT')))
   and 1 = decode(mmt.transaction_action_id, 2, decode(mmt.subinventory_code, 'WIP', 0, 1), 1)
   and mmt.attribute3 is null
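One common technique for a selective IS NULL predicate like this is a function-based index that contains only the NULL rows, combined with a matching expression in the query; the index name below is an assumption, and any change to a query against a seeded EBS table should obviously be tested first.

create index xx_mmt_attr3_null_idx
  on mtl_material_transactions (case when attribute3 is null then 1 end);

-- and in the query, replace "mmt.attribute3 is null" with the matching expression:
-- and case when mmt.attribute3 is null then 1 end = 1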
I have a small piece of PL/SQL code and need to work out how to build this query. The requirement is that a concurrent program is run with parameters, and one of them, i_num_org_id, is non-mandatory, so it can come in as NULL. Now, in the existing code that I have to change, the query is written as:
SELECT xyz FROM abc_table WHERE <various conditions> AND DECODE(i_num_org_id,NULL,1,table.organization_id) = NVL(i_num_org_id,1);
Written this way, the query runs fine inside a PL/SQL procedure/package whether the program is run with some value for i_num_org_id or with NULL. It also works fine if you run it in Toad, etc. But if it is built as dynamic SQL and then executed with EXECUTE IMMEDIATE or opened as a cursor, we get a "missing expression" error. I created this small anonymous block to test it, and it raises the missing-expression error:
declare
  l_num_org_id NUMBER := NULL;
  l_temp       VARCHAR2(100);
  l_sql        VARCHAR2(1000);
begin
  l_sql := 'SELECT '||''''||'abcd'||''''||'
[code].....
How can I rewrite this query so that it is still handled when a NULL value comes in for i_num_org_id? I am aware of CASE, but I am not sure it can be used in a WHERE clause.
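One possible rewrite, sketched as an anonymous block: make the NULL test explicit and bind the parameter instead of embedding DECODE(...,NULL,...) in the string, so the dynamic SQL never ends up with a missing operand. The table and column names are taken from the example above; the fetch target type is an assumption, and the "<various conditions>" part is omitted.

declare
  l_num_org_id number := null;
  l_sql        varchar2(1000);
  l_cur        sys_refcursor;
  l_xyz        varchar2(100);                 -- assumed datatype for xyz
begin
  l_sql := 'SELECT xyz FROM abc_table '||
           'WHERE (:p_org_id IS NULL OR organization_id = :p_org_id)';
  open l_cur for l_sql using l_num_org_id, l_num_org_id;  -- one bind value per placeholder occurrence
  fetch l_cur into l_xyz;
  close l_cur;
end;
/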
I have a table with multiple rows per KEY attribute (it is not a primary key) and a Rank for each row.

I want a query that fetches one row per KEY attribute. The row with the lower Rank should be preferred, but if any column of that row is NULL, the value from the next Rank should be used for that column.
WITH TMP_TBL AS
 (SELECT * FROM (
    SELECT 'A' DUN, '1' RNK, 'A21' col1, NULL col2, 'A41' col3, NULL col4 FROM dual UNION ALL
    SELECT 'A', '2', 'A122', 'A23', NULL, NULL FROM dual UNION ALL
    SELECT 'A', '3', 'A32', 'A33', NULL, 'A35' FROM dual
[code].......
DUN is the KEY attribute, RNK is the Rank for each row, and COL1 ... COL4 are data attributes.

I want this to be done with SQL only, so I tried various approaches, but none were successful. Finally I created a multi-row function, row_nvl, and it worked:
SELECT DUN,
       row_nvl(rownvl_param_type(RNK, col1)),
       row_nvl(rownvl_param_type(RNK, col2)),
       row_nvl(rownvl_param_type(RNK, col3)),
       row_nvl(rownvl_param_type(RNK, col4))
  FROM TMP_TBL
 GROUP BY DUN
But I don't think my manager will allow me to deploy a multi-row function.
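For what it's worth, a SQL-only alternative sketch, assuming the RNK values compare correctly as given: the KEEP (DENSE_RANK FIRST) aggregate can pick, for each column independently, the value from the lowest-ranked row where that column is not null (rows where the column is NULL are pushed to the end of the ordering).

SELECT DUN,
       MAX(col1) KEEP (DENSE_RANK FIRST ORDER BY CASE WHEN col1 IS NOT NULL THEN RNK END NULLS LAST) AS col1,
       MAX(col2) KEEP (DENSE_RANK FIRST ORDER BY CASE WHEN col2 IS NOT NULL THEN RNK END NULLS LAST) AS col2,
       MAX(col3) KEEP (DENSE_RANK FIRST ORDER BY CASE WHEN col3 IS NOT NULL THEN RNK END NULLS LAST) AS col3,
       MAX(col4) KEEP (DENSE_RANK FIRST ORDER BY CASE WHEN col4 IS NOT NULL THEN RNK END NULLS LAST) AS col4
  FROM TMP_TBL
 GROUP BY DUN;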
I don't want to print the repeated value (NAME) of C1 multiple times; the desired output is below.
C1      C2      C3   C4
------  ------  ---  ----
NAME    JOHN    10   ABC
        SMITH   30   DEF
        ROBERT  60   XYZ
I could do it using the query below, a UNION with ROWNUM:
select * from (select rownum rn, c1, c2, c3, c4 from table_new) where rn = 1
union
select * from (select rownum rn, decode(c1, null, null), c2, c3, c4 from table_new) where rn between 2 and 3
Is there any other way of displaying this using a single SQL query?
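A single-query sketch using an analytic function, assuming the rows should keep their fetch order (captured with ROWNUM, as in your query): C1 is blanked whenever it equals the value on the previous row.

select case when c1 = lag(c1) over (order by rn) then null else c1 end as c1,
       c2, c3, c4
  from (select rownum rn, c1, c2, c3, c4 from table_new)
 order by rn;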
I need a generic query that reports, for each table in a schema, the total number of records in the table and, for each column of that table, how many records are not null and how many are null.

For example, the output should look like this:
owner   schema   table_name   total # recs in the table   column_name   # of records not null   # of records null
------  -------  -----------  --------------------------  ------------  ----------------------  ------------------
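A sketch of one way to produce those numbers exactly: loop over the data dictionary for the schema and count each column with dynamic SQL (the schema name SCOTT is an assumption; COUNT(column) counts the non-null values, so the null count is the difference). This is exact but can be slow on large tables.

set serveroutput on
declare
  l_total    number;
  l_not_null number;
begin
  for c in (select c.owner, c.table_name, c.column_name
              from all_tab_columns c
              join all_tables t
                on t.owner = c.owner and t.table_name = c.table_name
             where c.owner = 'SCOTT'                -- schema to report on (assumption)
             order by c.table_name, c.column_id)
  loop
    execute immediate
      'select count(*), count('||c.column_name||') from '||c.owner||'.'||c.table_name
      into l_total, l_not_null;
    dbms_output.put_line(c.owner||'  '||c.table_name||'  '||l_total||'  '||
                         c.column_name||'  '||l_not_null||'  '||(l_total - l_not_null));
  end loop;
end;
/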
My task is to test a field in a certain database; we shall refer to the field as ship_to. There is an algorithm by which ship_to is populated, and higher up in the algorithm a variable we shall call X is created. The algorithm states that X should be assigned to ship_to unless X = -1. If X = -1, the algorithm continues with multiple joins and assignments. What would be the best way to code this solution? There are 4 individual paths; I have only described the first, but they are similar in structure. I was thinking of using either CASE or DECODE. The field ship_to is a number. I have already created a statement to test the sum of the target, but now I need to test the entire algorithm to see whether it, too, sums to the target.
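A rough sketch of how the first path could be expressed with CASE, assuming the alternate join path can be exposed as a joinable row source (every table, column, and alias name below is a hypothetical placeholder, since the actual joins are not described here):

select case
         when t.x <> -1 then t.x              -- normal path: X becomes ship_to
         else alt.ship_to_alt                 -- X = -1: take the value produced by the join path
       end as ship_to
  from target_rows t
  left join alternate_path alt
    on alt.row_key = t.row_key;

DECODE can express the same test (decode(t.x, -1, alt.ship_to_alt, t.x)), but CASE tends to read better when there are four distinct paths to chain.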
I work as a sysadmin for several RHEL 3.8 servers, most of them two-machine clusters. All of these servers run an Oracle 9.2.0.7 database. The databases run fine on a separate filesystem, so whenever the system has to be reformatted for some reason, there is no need to reinstall the database.

I am trying to run some tests by using the same filesystem containing the Oracle database on a fresh Red Hat Enterprise Linux 4.8 x86_64 install. This has turned out to be impossible, and I'm not quite sure why.

I installed all the compat-* packages required for a fresh Oracle 9 install on a RHEL 4 machine, but when I issue the STARTUP command it gives me the following error:
CMCLI ERROR: OpenCommPort: connect failed with error 2.
CMCLI ERROR: OpenCommPort: connect failed with error 2.
CMCLI ERROR: OpenCommPort: connect failed with error 2.
CMCLI ERROR: OpenCommPort: connect failed with error 2.
CMCLI ERROR: OpenCommPort: connect failed with error 2.
After this error (repeated), it says something like "Cannot start an already running database...". But if I issue a SHUTDOWN command, it says the instance has not been initialized.

I don't know whether I have to reinstall all the Oracle software to get a clean install on the new kernel version. I tried to apply a patch, and the Oracle installer recognized the existing installation.

I think it might be because the original system is configured to work as a cluster, and I'm running it on just a virtual machine.
I have problems opening the database of the physical standby in read-write or read-only mode. I have a primary running as a 2-node RAC and the standby on a separate single server used as DR. I recently got this server, and my aim is to isolate the standby from the primary and perform a few tests, as it has never been tested even once.
Primary database spec (2-node RAC on ASM):
Oracle version : 10.2.0.3.0
O/S            : HP-UX B.11.23
alter database recover managed standby database cancel;
Database altered.
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-16004: backup database requires recovery
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '+DATA/dprod/datafile/system01.dbf'
Steps tried so far: changed log_archive_dest_2 to DEFER on both primary nodes.

On the standby:
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect;
alter database recover managed standby database cancel;
alter database open;            -- also tried: alter database open read only
Same error.
On the primary:
SQL> select max(sequence#) from v$log_history;
Additional information: there is a delay of 20 minutes before the logs get applied, which was set intentionally by the team. The Data Guard broker is not configured either.
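Before opening the standby, it may help to confirm how far redo has actually been applied (sketch below); ORA-16004/ORA-01152 generally mean more archived logs still need to be applied, and the intentional 20-minute apply delay makes that more likely here. Compare the result with MAX(sequence#) from v$log_history on the primary.

On the standby:
SELECT MAX(sequence#) AS last_applied
  FROM v$archived_log
 WHERE applied = 'YES';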
I receive the output from only one of the nested blocks:
bad
PL/SQL procedure successfully completed.
SQL>
I understand that I don't need nested blocks for the example above; this is just a condensed version of what I'm trying to do. I think nested blocks will be easier to read and maintain than one huge CASE statement.
How can I execute only the nested block for which the condition is true and ignore the nested blocks that follow?
Are nested blocks not the correct answer here? Should I be looking at invoking procedures/functions instead?
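One way to run only the block whose condition matches, sketched with hypothetical names (the selector variable and messages are placeholders): wrap each nested block in an IF/ELSIF branch, so the blocks that don't match are never entered.

set serveroutput on
declare
  l_choice varchar2(10) := 'B';              -- hypothetical selector
begin
  if l_choice = 'A' then
    declare
      l_msg varchar2(100) := 'block A ran';
    begin
      dbms_output.put_line(l_msg);
    end;
  elsif l_choice = 'B' then
    declare
      l_msg varchar2(100) := 'block B ran';
    begin
      dbms_output.put_line(l_msg);
    end;
  end if;                                    -- non-matching blocks are skipped
end;
/

Invoking procedures instead of inline blocks works just as well and keeps each branch testable on its own; the IF/ELSIF structure is what stops the later blocks from running.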
I'm using Oracle Database 11g R2 for study purposes, and I'm currently learning about DBCA clone templates. I have Oracle DB 11g R2 x86 running properly on MS Windows XP Professional SP3 x86. Using DBCA I created the seed template file (.dbc) and the .CTL and .DBF files, then copied those files to a server running Oracle DB 11g R2 x86 on MS Windows 7 Ultimate x86. Again using DBCA, I successfully created the source database from the seed template file; everything was OK.

Now I have formatted my testing server and installed MS Windows 7 Ultimate x64 and Oracle Database 11g R2 x64. I copied the seed template file (.dbc) and the .CTL and .DBF files to the "assistants\dbca\templates" directory. Well, I started DBCA to try to create the source database, and while DBCA is creating and starting the Oracle instance it shows the errors:
I am testing a procedure after tuning it, but the problem is that it now takes very little time, which it shouldn't. I know this is because of the buffer cache and the shared pool; that is why I need to clear the caches to retest it.

I cannot bounce the database, as other schemas are part of it. So is there any way to clear the cache for that particular schema only, i.e. to 'bounce' just that schema (I know the term is not really appropriate)?
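The flushes below are instance-wide rather than per-schema, so they affect every user of the database and only approximate a cold retest, but they avoid bouncing the instance (both commands exist from 10g onwards):

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH SHARED_POOL;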
I am running a GROUP BY query on a few columns of enumerated data, like:

select count(*), Condition, Size from <TABLE> group by Condition, Size;
COUNT(*)  CONDITION  SIZE
--------  ---------  ----
       3  MINT       L
       2  FAIR       L
       4  FAIR       M
       1  MINT       S
Well, let's say I also have a timestamp field in the table. I cannot include it in the GROUP BY as-is, because the time is recorded to the millisecond and is unique for every record. Instead, I want to include it in the grouping based on whether or not it is NULL.
For example:
COUNT(*)  CONDITION  SIZE  SOLDDATE
--------  ---------  ----  --------
       3  MINT       L     ISNULL
       2  FAIR       L     NOTNULL
       2  FAIR       M     NOTNULL
       2  FAIR       M     ISNULL
       1  MINT       S     ISNULL
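A sketch that groups on a NULL indicator instead of the raw timestamp (column names are taken from the example above; note that SIZE is a reserved word in Oracle, so a real column would need a different name or quoted identifiers):

select count(*),
       condition,
       size,
       case when solddate is null then 'ISNULL' else 'NOTNULL' end as solddate
  from <TABLE>
 group by condition,
          size,
          case when solddate is null then 'ISNULL' else 'NOTNULL' end;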
I want to implement a business rule such that, for each id, there is at most one row with a NULL dat_cess. So I created this unique index on test:
create unique index x_only_one_dat_cess_null
  on test (id,
           case when dat_cess is null then 'NULL' else to_char(dat_cess, 'dd/mm/yyyy') end);
insert into test values (1, sysdate);
insert into test values (1, sysdate - 1);
insert into test values (1, null);
insert into test values (1, null);
-- -----
insert into test values (2, sysdate);
insert into test values (2, sysdate - 1);
insert into test values (2, null);
The 4th insert causes an error, and this is what I wanted to implement. OK. Now the problem is that, for non-null values of dat_cess, we can't have data like this
because of the unique index (the 2nd and the 3rd rows are equal). So, just for learning purposes, how could we allow at most one NULL dat_cess per id while still allowing duplicates for non-null values of dat_cess?
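One way, as a sketch (after dropping the earlier index of the same name): index only the NULL case. Rows with a non-null dat_cess produce an entirely NULL index entry, so they are not stored in the index at all and duplicates are allowed, while a second (id, NULL) row is rejected.

create unique index x_only_one_dat_cess_null
  on test (case when dat_cess is null then id end);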
SQL> describe Stu_Table
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STU_ID                                             VARCHAR2(2)
 STU_NAME                                           VARCHAR2(10)
 STU_CLASS                                          VARCHAR2(10)
Now when I try to modify the Stu_Id column to NOT NULL, it gives me an error:
SQL> ALTER TABLE Stu_Table MODIFY Stu_Id int(3) not null;
ALTER TABLE Stu_Table MODIFY Stu_Id int(3) not null
*
ERROR at line 1:
ORA-01735: invalid ALTER TABLE option
And when I try to add a new column with NOT NULL, it also gives me an error:
SQL> ALTER TABLE Stu_Table add C1_TEMP integer NOT NULL;
ALTER TABLE Stu_Table add C1_TEMP integer NOT NULL
*
ERROR at line 1:
ORA-01758: table must be empty to add mandatory (NOT NULL) column
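A sketch of statements that avoid both errors: ORA-01735 comes from the int(3) syntax, which is not valid in Oracle, so keep the existing datatype and just add the constraint (this also requires that Stu_Id currently has no NULL values); ORA-01758 can be avoided by giving the new mandatory column a DEFAULT (the value 0 is an assumption).

ALTER TABLE Stu_Table MODIFY (Stu_Id NOT NULL);
ALTER TABLE Stu_Table ADD (C1_TEMP INTEGER DEFAULT 0 NOT NULL);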
I have 8 columns, and some of them might be null. I want to display all 8 columns in my result, with the not-null columns first and the null columns at the end. Here is some sample data:
Suppose that I have two tables, emp and dept: emp records empid, emp_name and deptid; dept records deptid and dept_name.

One record is for a president or some other special position in the company, so its deptid is set to NULL. Here comes the question: how can I print all the emp_name values with their department names?

I know how to print all the emp_name values with their department names when they have a dept_id, but is it possible to also include the record whose dept_id is NULL?
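A sketch using an outer join, with the column names from the description above: the employee whose deptid is NULL is kept in the result, with a NULL dept_name.

select e.emp_name, d.dept_name
  from emp e
  left outer join dept d
    on d.deptid = e.deptid;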
SELECT TO_DATE('21-NOV-2010') DAY, 0 RATE FROM DUAL UNION
SELECT TO_DATE('22-NOV-2010') DAY, 10.5 RATE FROM DUAL UNION
SELECT TO_DATE('23-NOV-2010') DAY, 0 RATE FROM DUAL UNION
SELECT TO_DATE('24-NOV-2010') DAY, 0 RATE FROM DUAL UNION
I have a trigger "before update" that change some values, including a timestamp column. My sql code does an update using the "returning clause" to get the values changed by the trigger. The problem is:
When I do an "update [...] returning timestamp_field", this timestamp_field has the old value equals to null (in the trigger). And, this field in the table is not null. This problem not occurs with the others fields of type number, varchar... only with the timestamp field.
Here is the code to simulate the problem:
--Table
create table TB_TEST
(
  ID         NUMBER(10)   not null,
  FLAG       NUMBER(10)   not null,
  TAG        VARCHAR2(16) not null,
  TS_ATU_DTR TIMESTAMP(9) not null
);
"old.TS_ATU_DTR=, " Isn't it right????? Why the other fields aren't null? I need the "old.TS_ATU_DTR" to use in my other trigger to compare timestamps, how can I get it?
My requirement is: I want the first three nullable attributes. For example, if I have 60 columns in a table, I need to fetch the first three columns whose data is null in a given row.
I have a trigger "before update" that change some values, including a timestamp column. My sql code does an update using the "returning clause" to get the values changed by the trigger. When I do an "update [...] returning timestamp_field", this timestamp_field has the old value equals to null (in the trigger). And, this field in the table is not null. This problem not occurs with the others fields of type number, varchar... only with the timestamp field.
Here are the code to simulate this problem:
--Table create table TB_TEST ( ID NUMBER(10) not null, FLAG NUMBER(10) not null, TAG VARCHAR2(16) not null, TS_ATU_DTR TIMESTAMP(9) not null ); [code]...
Why the other fields aren't null?I need the "old.TS_ATU_DTR" to use in my other trigger to compare timestamps, how can I get it? I am using Oracle 11.2.0 - Suse Linux.