CREATE TABLE tmp_guid (
c1 raw(16) not null
,c2 raw(16) not null
);
begin
[code]...
It seems that a combination of a unique index and extended stats is to blame. Removing either one causes the query to produce correct results again. The extended stats basically capture the fact that, despite being unique, c1 depends on c2.
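For reference, a sketch of how the unique index and the extended statistics in this test case could be set up (the index name and the exact DBMS_STATS calls below are assumptions; only the table definition above is from the original test):

create unique index tmp_guid_u1 on tmp_guid (c1);
-- create a column group so the optimizer knows c1 and c2 are correlated
select dbms_stats.create_extended_stats(user, 'TMP_GUID', '(C1, C2)') from dual;
-- gather stats so the column group statistics are actually populated
exec dbms_stats.gather_table_stats(user, 'TMP_GUID', method_opt => 'for all columns size auto for columns (c1, c2)');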
We just upgraded to 11g and have run into incorrect results for some of our LEFT JOINs. If the table, view, subquery, or WITH clause that is being LEFT JOINed to contains any constants, the results are not correct.
For example, a test (nonsensical) view such as the following is created:
create or replace view fyvtst1 as select spriden_pidm as fyvtst1_pidm, 'Sch' as fyvtst1_test from spriden where spriden_last_name like 'Sch%' ;
When I run the following query, I get correct results; that is, only those with "Sch" starting their last name are listed.
select spriden_pidm, spriden_last_name, fyvtst1_pidm, fyvtst1_test from spriden join fyvtst1 on fyvtst1_pidm = spriden_pidm ;
However, when I change the JOIN to a LEFT JOIN, the last column contains "Sch" for all rows, instead of NULL:
select spriden_pidm, spriden_last_name, fyvtst1_pidm, fyvtst1_test from spriden left join fyvtst1 on fyvtst1_pidm = spriden_pidm ;
We've discovered other quirky things related to this. A WITH clause with logic similar to the above view, when LEFT JOINed to a table, will also cause the constant to appear in every row, instead of NULL (with the value only where there is a join). But when additional columns are added to the WITH, it behaves correctly.
This is easy enough to rewrite - but we have WITHs and views containing constants in numerous places, and cannot hope to track down every single one successfully before the incorrect results are used.
Finally, the NO_QUERY_TRANSFORMATION hint will force the query to work correctly. Unfortunately, it has a huge negative performance impact (one query ran for an hour, vs. 1 second in 10g).
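A narrower workaround that might be worth testing (a sketch only; unlike NO_QUERY_TRANSFORMATION, the NO_MERGE hint just stops the one view from being merged, so the performance cost should be far smaller):

select /*+ NO_MERGE(f) */ spriden_pidm, spriden_last_name, fyvtst1_pidm, fyvtst1_test
from spriden left join fyvtst1 f on fyvtst1_pidm = spriden_pidm;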
dbms_metadata is giving me code that would create more datafiles than the original and, furthermore, won't run: first, because two files are named with a null string, and second, because it includes RESIZE commands that won't work because they nominate OMF file names. Or is dbms_metadata unreliable?
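For reference, a typical way to pull such DDL with dbms_metadata (a sketch; 'USERS' is just a placeholder tablespace name):

set long 100000 pagesize 0
select dbms_metadata.get_ddl('TABLESPACE', 'USERS') from dual;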
My problem is that I have 3 tables (TEST_TBL1, TEST_TBL2, TEST_TBL3). TEST_TBL2 and TEST_TBL3 are in a remote database and I use a database link to join them. The following query returns an incorrect result (it seems that it ignores the WHERE clause):
SELECT * FROM TEST_TBL1 JOIN TEST_TBL2@db_remote USING (KEY1) JOIN TEST_TBL3@db_remote USING (KEY2) WHERE KEY1=XXX OR KEY2=YYY;
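One way to test whether the OR across the two remote tables is the trigger is to split it into a UNION of single-predicate queries, which should be logically equivalent here (a sketch; :x and :y stand in for the literal values XXX and YYY):

SELECT * FROM TEST_TBL1 JOIN TEST_TBL2@db_remote USING (KEY1) JOIN TEST_TBL3@db_remote USING (KEY2) WHERE KEY1 = :x
UNION
SELECT * FROM TEST_TBL1 JOIN TEST_TBL2@db_remote USING (KEY1) JOIN TEST_TBL3@db_remote USING (KEY2) WHERE KEY2 = :y;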
I am on 11gR1 (11.1.0.7).
FOR EXAMPLE:
Local database: CREATE TABLE TEST_TBL1 ( KEY1 NUMBER(5) NOT NULL,
I have the following UNION ALL query. It throws the following error in SQL*Plus:
ERROR at line 27: ORA-01789: query block has incorrect number of result columns
After some googling, the error suggests there is an incorrect number of columns in one of the UNION ALL branches. I could not figure out the exact location; SQL*Plus only says the error is on line 27, at the first opening bracket.
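For reference, the error simply means that one branch of the UNION ALL selects a different number of columns than the others; a minimal reproduction against DUAL (not the real query):

select dummy, 1 from dual
union all
select dummy from dual;   -- ORA-01789: query block has incorrect number of result columns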
I have created a table: INDEX_SIZE_TRACKING with the following attributes
Index Name: name of the index. Type: VARCHAR(255). This is the primary key of the table.
Allocated Space: the memory space (in bytes) allocated to the index. Type: NUMBER
Used Space: the memory space used by the index. Type: NUMBER
Last Update: the time when index details are updated to this table. Type: VARCHAR(255)
I want to write a PL/SQL script to query index statistics data and update tracking entries in the INDEX_SIZE_TRACKING table. If there is no existing entry for the index, create a new one; otherwise, update the existing one. Is there a PL/SQL statement that can do this in Oracle XE?
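A sketch of one way this could look, assuming the tracking columns are named INDEX_NAME, ALLOCATED_SPACE, USED_SPACE and LAST_UPDATE (column names are an assumption): total and unused bytes come from DBMS_SPACE.UNUSED_SPACE for each index, and a MERGE handles the insert-or-update in one statement.

declare
  l_total_blocks   number;
  l_total_bytes    number;
  l_unused_blocks  number;
  l_unused_bytes   number;
  l_last_file_id   number;
  l_last_block_id  number;
  l_last_block     number;
begin
  for ix in (select index_name from user_indexes) loop
    -- total and unused space for the index segment
    dbms_space.unused_space(
      segment_owner             => user,
      segment_name              => ix.index_name,
      segment_type              => 'INDEX',
      total_blocks              => l_total_blocks,
      total_bytes               => l_total_bytes,
      unused_blocks             => l_unused_blocks,
      unused_bytes              => l_unused_bytes,
      last_used_extent_file_id  => l_last_file_id,
      last_used_extent_block_id => l_last_block_id,
      last_used_block           => l_last_block);

    -- insert a new tracking row or update the existing one
    merge into index_size_tracking t
    using (select ix.index_name as index_name from dual) s
    on (t.index_name = s.index_name)
    when matched then update set
         t.allocated_space = l_total_bytes,
         t.used_space      = l_total_bytes - l_unused_bytes,
         t.last_update     = to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS')
    when not matched then insert (index_name, allocated_space, used_space, last_update)
         values (ix.index_name, l_total_bytes, l_total_bytes - l_unused_bytes,
                 to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'));
  end loop;
  commit;
end;
/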
select distinct event_number from events_total WHERE event_id = 16395493
minus
select distinct event_number from event_details_ford WHERE event_id = 16395493
The result of which is: 6L2Z-7861693-AAC and 6L2Z-7862187-CAC.
I want to put this in a dynamic SQL WHERE clause:
where event_number in ('6L2Z-7861693-AAC','6L2Z-7862187-CAC'). If the result of the query is only one value, then: where event_number in ('6L2Z-7861693-AAC'). The result of the query can also be NULL, but I think I can use an IF condition for the "SELECT ... WHERE event_number in" query.
So how can I put the results of the query on one line, so that I can use it in the WHERE clause?
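If the database is 11gR2 or later (an assumption), LISTAGG can collapse the result into a single quoted, comma-separated line that can be concatenated straight into the dynamic WHERE clause; it simply returns NULL when the query finds nothing:

select listagg('''' || event_number || '''', ',') within group (order by event_number) as in_list
from (
  select distinct event_number from events_total where event_id = 16395493
  minus
  select distinct event_number from event_details_ford where event_id = 16395493
);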
I have this query that returns results that contain duplicates (somewhat). I only want either the FIRST or the LAST (either one is fine). Here is the query:
select unique PLLA.attribute4, PLA.item_description
from po_lines_all PLA, po_line_locations_all PLLA
where PLLA.po_line_id = PLA.po_line_id
  and PLLA.attribute4 is not null
So my output is something like this:
RCE12 This is an item for AUL1
RCE13 This is an item for PWEILL
RCE14 This is an item for AUL1
I just want either the RCE12 or RCE14 record and not both since they both have the same description.
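A sketch of one way to keep a single row per description, taking the lowest attribute4 for each one (swap MIN for MAX to keep the last instead of the first):

select min(PLLA.attribute4) as attribute4, PLA.item_description
from po_lines_all PLA, po_line_locations_all PLLA
where PLLA.po_line_id = PLA.po_line_id
  and PLLA.attribute4 is not null
group by PLA.item_description;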
I am new to writing queries for an Oracle database and I was given a bit of a challenge. Here's the situation: I have 3 tables I am using. Two of the tables are being used to transpose people's names from rows to columns by account number (there are multiple people associated with each account). The last table is a pretty straightforward query. I can run each query by itself and get the results I want, but when I try to combine the two, I start getting a variety of errors. Below are the two queries:
Query 1 (returns about 1,500 rows): SELECT DISTINCT CAST (EIS_DW.ACCTCOMMONLOAN.ACCOUNT as VARCHAR(20)) as ACCOUNT_NUM, EIS_DW.ACCTCOMMONLOAN.ACCOUNT_STATUS, EIS_DW.ACCTCOMMONLOAN.MINOR_DESCRIPTION, EIS_DW.ACCTCOMMONLOAN.OWNER_NAME, [code]......
Ideally, I want to join these two queries on ACCOUNT and ACCTNBR. I have tried working my first query into my second query, but the best I get with that is the 570 or so accounts, not all the accounts.
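One pattern that might help is wrapping each query in an inline view and LEFT JOINing the second to the first, so every account from query 1 survives even when query 2 has no match. A minimal, self-contained illustration of the shape (the data and column values here are made up, not from the real tables):

select q1.account_num, q1.owner_name, q2.extra_info
from (select '100' as account_num, 'SMITH' as owner_name from dual
      union all
      select '200', 'JONES' from dual) q1
left join (select '100' as acctnbr, 'MATCHED' as extra_info from dual) q2
  on q2.acctnbr = q1.account_num;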
SELECT d_dtm, BTS_ID, CASE WHEN D_DTM = (D_DTM-24/24) THEN sum(V_ATT_CNT)
[Code]....
But it is not returning any results because of the HAVING clause. I know it should return results, because all I want is the V_ATT for the current time period in one column and the V_ATT from 24 hours earlier in the second column. I've checked the data and I should get results back, but I can't figure out why this is not working.
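One thing that stands out is that a condition like D_DTM = (D_DTM - 24/24) compares a column to itself minus one day, so it can never be true. A sketch of how the same intent could be expressed with a self-join of pre-aggregated rows instead (my_counts is a placeholder for the real table name):

select cur.d_dtm, cur.bts_id, cur.v_att as v_att_now, prev.v_att as v_att_24h_ago
from (select d_dtm, bts_id, sum(v_att_cnt) as v_att from my_counts group by d_dtm, bts_id) cur
left join (select d_dtm, bts_id, sum(v_att_cnt) as v_att from my_counts group by d_dtm, bts_id) prev
  on prev.bts_id = cur.bts_id
 and prev.d_dtm  = cur.d_dtm - 1;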
When I execute the query, it returns the data (approx. 40,000 rows) in 1 minute. But when I try to insert this data into another table (or create a table from this data) it takes about 2 hours.
I tried using a materialized view; it is the same again, the refresh takes 2 hours. Basically, what I am trying to do is use the data from the above query to update the values in another table. Whatever procedure I try, it takes 2 hours.
I need to query a real-time production database to return a set of results that spans a three-day period. When the three days are consecutive it's easy, but sometimes there is a one- or two-day gap between the days. For example, I'm querying results from a group of people that work between Tuesday and Saturday. On a Wednesday I need to produce a set of results that spans Tuesday of the current week, and Saturday and Friday of the previous week; on Thursday I need to produce a set of results that spans Wednesday and Tuesday of the current week and Saturday of the previous week. I'm using SQL Developer to execute the code.
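A sketch of one way to express "the three most recent working days with data, regardless of gaps" (the table name work_results and the column work_date are placeholders): rank the distinct days before today and keep the top three.

select *
from (select r.*,
             dense_rank() over (order by trunc(work_date) desc) as day_rank
      from work_results r
      where work_date < trunc(sysdate))
where day_rank <= 3;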
This is a procedure to send email to multiple recipients.
CREATE OR REPLACE PROCEDURE mail ( subject IN VARCHAR2,recievers VARCHAR2) IS sender VARCHAR2(30) := 'mailid';
[Code]...
I got "Procedure successfully created".
execute mail('hi','mailid1,mailid2');
When I execute the procedure I get the following error:
Error at line 1
ORA-29279: SMTP permanent error: 554 5.5.1 Error: no valid recipients
ORA-06512: at "SYS.UTL_SMTP", line 17
ORA-06512: at "SYS.UTL_SMTP", line 98
ORA-06512: at "SYS.UTL_SMTP", line 271
ORA-06512: at "SUPPLIER.MAIL", line 47
ORA-06512: at line 1
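The 554 "no valid recipients" typically appears when the whole comma-separated string is passed to a single UTL_SMTP.RCPT call; each address needs its own RCPT call. A sketch of the splitting loop (the SMTP host, domain and addresses below are placeholders, and the headers/body part is omitted):

declare
  recievers varchar2(200) := 'mailid1@example.com,mailid2@example.com';  -- example input
  conn      utl_smtp.connection;
  addr      varchar2(100);
  pos       pls_integer := 1;
  nextc     pls_integer;
begin
  conn := utl_smtp.open_connection('smtp.example.com', 25);   -- placeholder SMTP host
  utl_smtp.helo(conn, 'example.com');
  utl_smtp.mail(conn, 'sender@example.com');
  loop
    nextc := instr(recievers, ',', pos);
    if nextc = 0 then
      addr := trim(substr(recievers, pos));
    else
      addr := trim(substr(recievers, pos, nextc - pos));
    end if;
    utl_smtp.rcpt(conn, addr);        -- one RCPT call per recipient
    exit when nextc = 0;
    pos := nextc + 1;
  end loop;
  -- ... build the message with utl_smtp.open_data / write_data / close_data ...
  utl_smtp.quit(conn);
end;
/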
There are several stages of SQL processing described in the 10gR2 Database Concepts document. The following stages are necessary for each type of statement processing:
■ Stage 1: Create a Cursor
■ Stage 2: Parse the Statement
■ Stage 5: Bind Any Variables
■ Stage 7: Run the Statement
■ Stage 9: Close the Cursor
Optionally, you can include another stage:
■ Stage 6: Parallelize the Statement
Queries (SELECTs) require several additional stages, as shown in Figure 24-1:
■ Stage 3: Describe Results of a Query
■ Stage 4: Define Output of a Query
■ Stage 8: Fetch Rows of a Query
Stage 3: Describe Results of a Query The describe stage is necessary only if the characteristics of a query's result are not known; for example, when a query is entered interactively by a user. In this case, the describe stage determines the characteristics (datatypes, lengths, and names) of a query's result.
Stage 4: Define Output of a Query In the define stage for queries, you specify the location, size, and datatype of variables defined to receive each fetched value. These variables are called define variables. Oracle performs datatype conversion if necessary.
I still don't understand what Stage 3: Describe Results of a Query and Stage 4: Define Output of a Query actually do.
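For what it's worth, the describe and define stages are easiest to see with the DBMS_SQL API, where each stage is an explicit call; a sketch against the standard EMP demo table (using EMP here is an assumption):

declare
  c        integer := dbms_sql.open_cursor;                     -- Stage 1: create a cursor
  col_cnt  integer;
  cols     dbms_sql.desc_tab;
  ename    varchar2(30);
  ignore   integer;
begin
  dbms_sql.parse(c, 'select ename from emp', dbms_sql.native);  -- Stage 2: parse
  dbms_sql.describe_columns(c, col_cnt, cols);                  -- Stage 3: describe (names, datatypes, lengths of the result)
  dbms_sql.define_column(c, 1, ename, 30);                      -- Stage 4: define where each fetched value will go
  ignore := dbms_sql.execute(c);                                -- Stage 7: run the statement
  while dbms_sql.fetch_rows(c) > 0 loop                         -- Stage 8: fetch rows
    dbms_sql.column_value(c, 1, ename);
    dbms_output.put_line(ename);
  end loop;
  dbms_sql.close_cursor(c);                                     -- Stage 9: close the cursor
end;
/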
I need to generate a SELECT query at runtime and store its results in a file. Each time, the column names and table name in the query will differ. Right now I am able to generate the SELECT query through a cursor FOR loop, but the problem is storing the results in the file. I tried using a PL/SQL table: I am able to get the values into that table and write them to a file, but the result of the query is more than 10,000 lines (it might grow further), and only 4,000 characters could be stored in the PL/SQL table, so the rest never make it into the file.
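A sketch of writing each fetched line straight to the file with UTL_FILE instead of buffering everything in a PL/SQL table (DATA_DIR is an assumed directory object, and the inner query is just a stand-in for the generated SELECT); writing row by row sidesteps both the 4,000-character and the row-count limits:

declare
  f utl_file.file_type;
begin
  f := utl_file.fopen('DATA_DIR', 'query_output.txt', 'w', 32767);
  for r in (select dummy as line_text from dual) loop   -- stand-in for the dynamically built query
    utl_file.put_line(f, r.line_text);
  end loop;
  utl_file.fclose(f);
end;
/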
I have an application connected to Oracle 11g that sends its own queries to the DB based on what the user is clicking on. The application is connected via one user id, and I was wondering: is there a way I can capture the time each query starts, the SQL itself, and the amount of time it took to fetch the data?
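If privileges allow querying the dynamic performance views (an assumption), one simple starting point is the shared SQL area, which records the statement text, how often it ran, and cumulative timings (APP_USER is a placeholder for the schema the application connects as):

select sql_id,
       sql_text,
       executions,
       round(elapsed_time / nullif(executions, 0) / 1e6, 3) as avg_elapsed_sec,
       last_active_time
from v$sql
where parsing_schema_name = 'APP_USER'
order by last_active_time desc;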
Id  Country  City
1   US
2   US       Boston
3            Boston
4   US       Newyork
5            London
6   Japan    Tokyo
I'm looking for a query which returns results based on both the country and the city passed.
If I pass country US and city Boston, it should return row 2 (the US/Boston row).
If I pass country null and city Boston, it should return row 3.
If I pass country UK and city Boston, it should return row 3.
If I pass country UK and city London, it should return row 5.
i.e. if the country/city combination exists in the DB, return that row; otherwise the city-only row should be returned.
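A sketch of one way to express that precedence, assuming a table named locations(id, country, city) and bind variables :p_country and :p_city (all placeholder names): match on city, then prefer the exact country match over the city-only row.

select *
from (select l.*,
             row_number() over (order by case
                                           when l.country = :p_country then 1   -- exact country + city match
                                           when l.country is null      then 2   -- city-only fallback row
                                         end nulls last) as rn
      from locations l
      where l.city = :p_city)
where rn = 1;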
Somewhere I read that we should not use hints in Oracle production environments, but we can use hints in the development environment and on achieving the desired execution plan we can adjust the 'statistics' to follow that plan without hints.
Q1. If this is true, what statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname from emp e, dept d where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure as found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions: With respect to the above query, Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that? Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
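For what it's worth, one mechanical way to "adjust statistics" is DBMS_STATS.SET_TABLE_STATS, which lets you plant artificial row and block counts and then watch how the plan reacts; a sketch only (the numbers are invented purely to see the effect on the plan, not a recommendation for production):

begin
  dbms_stats.set_table_stats(ownname => user, tabname => 'DEPT', numrows => 1000000, numblks => 10000);
  dbms_stats.set_table_stats(ownname => user, tabname => 'EMP',  numrows => 10,      numblks => 1);
end;
/
explain plan for
  select e.empid, e.ename, d.dname
  from emp e, dept d
  where e.deptno = d.deptno;
select * from table(dbms_xplan.display);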
I have a database in which DB extended auditing is enabled, but there are no audit specifications under privileges, statements or objects. So what will be audited in that case?
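One way to confirm what is actually being captured is to check the audit option views and the audit_trail parameter (a sketch; to be run as a suitably privileged user):

show parameter audit_trail
select * from dba_stmt_audit_opts;
select * from dba_priv_audit_opts;
select * from dba_obj_audit_opts;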
We use extended RAC 11gR2 on two servers in two different server rooms. Data are stored on iSCSI devices, mounted from "storage" Linux hosts on a network called "SAN". The third voting disk is also stored on an iSCSI device mounted from a "quorum" Linux host on our "PUBLIC" network. The RAC interconnect link is on a separate "PRIVATE" network.
During tests, when we totally disconnect the "SAN" link between the two servers, the clusterware shows undefined behaviour. Sometimes (more often) it crashes; sometimes one node stays alive. At that moment, each RAC server is able to access only the iSCSI storage in its own server room, and both servers are still able to access the third voting disk (because it is on a separate network).
I'm facing a problem while inserting millions of records from table to table: the undo tablespace reaches 100% full and execution is aborted. How can I free the undo tablespace? Many of the extents show as offline. Will it be flushed automatically, or what should I do?
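A sketch of how to see how much of the undo tablespace is still tied up by active transactions versus already expired (expired space is reused automatically once the transactions commit and the undo_retention window passes):

select status, round(sum(bytes)/1024/1024) as mb, count(*) as extents
from dba_undo_extents
group by status;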
I have 2 data centers within 1/2 km of each other, and we keep one voting disk at each site. In case of a site failure, the cluster needs a 3rd voting disk, but we don't have a 3rd site.
At the last OpenWorld, Oracle announced Oracle storage on the cloud. If I get a small amount of space on the cloud, can I put my 3rd voting disk there?
Update employee e set e.dept_id = (select d.dept_id from dept d
[Code].....
The above is not the exact code which I am executing but an exact replica of the logic implied in my code.
Now, when I display the value of 'rows_updated', it returns a value greater than 0, i.e. 3, but it should ideally return 0 since there are no records matching the condition: (select d.dept_id from dept d where e.dept_name = d.dept_name).
So I executed the statement: select count(*) from employee e where emp_id = 1234 and exists (select 1 from emp_his ee where e.emp_id = ee.emp_id), and the result was 3, which is the same value returned by %ROWCOUNT.
I don't understand why this is happening, as I am getting incorrect values in %ROWCOUNT for the number of rows updated.
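For what it's worth, SQL%ROWCOUNT counts every row the UPDATE touched, and an UPDATE of this shape still touches rows whose subquery finds no match (it simply sets dept_id to NULL for them), which would explain the 3. A sketch of restricting the UPDATE itself so only rows with a matching dept are touched (table and column names follow the example above):

update employee e
   set e.dept_id = (select d.dept_id
                    from dept d
                    where e.dept_name = d.dept_name)
 where e.emp_id = 1234
   and exists (select 1
               from dept d
               where e.dept_name = d.dept_name);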
I currently have around 560 rows of bad data where
GL_ENCUMBERED_DATE and TO_CHAR(PRD.GL_ENCUMBERED_DATE,'YYYY') display correctly.
The problem lies in
PRD.GL_CANCELLED_DATE which displays correctly (example: 30-APR-10) but TO_CHAR(PRD.GL_CANCELLED_DATE, 'DD-MON-YYYY') for the same row displays as 30-APR-0010.
I have separated out the incorrect year with:
to_char(prd.gl_cancelled_date,'YYYY'), where each of the 560 rows displays 0010 instead of 2010. I am unsure whether arithmetic with dates is allowed within SQL, but am curious whether the operation date + number will result in a date.
If so, how could one go about taking 0010 as a date and adding 2000 years to get 2010 for PRD.GL_CANCELLED_DATE?
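Date arithmetic is allowed, and date + number does return a date (it adds that many days), but for shifting whole years ADD_MONTHS is the cleaner tool; a sketch (prd_table is a placeholder, since the post only shows the alias PRD):

select gl_cancelled_date,
       add_months(gl_cancelled_date, 2000 * 12) as corrected_date
from prd_table
where to_char(gl_cancelled_date, 'YYYY') = '0010';

update prd_table
   set gl_cancelled_date = add_months(gl_cancelled_date, 2000 * 12)
 where to_char(gl_cancelled_date, 'YYYY') = '0010';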
I would like to ask the experts here: how can I insert data into an Oracle table where the value is 03? The case is that the column was defined with the VARCHAR2(50) data type, and I have a CSV file where the value is 03, but when it is loaded into the Oracle table the value becomes 3 instead of 03.
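For what it's worth, a VARCHAR2(50) column keeps a leading zero as long as the value arrives as a string, so the stripping usually happens upstream (for example, a spreadsheet or a loader field being treated as a number). A small illustration, plus a TO_CHAR format mask that can restore the padding if the value has already become the number 3 (table and column names are placeholders):

insert into my_table (my_col) values ('03');   -- stored and returned as '03'
select to_char(3, 'FM09') from dual;           -- gives '03' back from the numeric value 3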