I want to do a comparison of the missing rows between two different tables.
TBL1 and TBL2 both have the same structure but different data, although some of the data is identical. Since my data is huge, I want to make sure about the technique I am using.
one is "ora" it is a 8i version 2nd is "orcl" it is a 11g version
"Oracle" is the my local database. i wrote following program for comparing the row by row data in both the tables. Q)Is it BEST practice? If not let me know the best practice to compare data in tables? Q) If am not using the order by clause its giving me wrong output even though both the data tables has same data. WHY?
As part of our project, we need to perform table comparisons in two different databases. I am currently looking for various options to accomplish this.
One of them is doing a MINUS operation between these two tables. I have also looked at the data compare option in the TOAD utility.
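If a database link exists between the two databases, the MINUS approach can be run directly across it. A rough sketch, with hypothetical link and table names:

-- rows present locally but missing in the remote database
select * from tbl1
minus
select * from tbl1@remote_db;

-- rows present remotely but missing locally
select * from tbl1@remote_db
minus
select * from tbl1;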
SF at oracle.com/technetwork/issue-archive/o53plsql-083350.html states that you can compare two database tables (of the same structure) by defining a nested table type (using %ROWTYPE) and two NT variables of that type, loading the contents of each table into its respective NT variable, and then comparing them using the = operator. Having read the Oracle documentation, which states that you can only compare NTs for equality if they don't contain record types, I was surprised to read this, but figured I must be misunderstanding SF and tried it anyway; it didn't work.
SCOTT@ORCL> create table empcopy3 as select * from emp;
Table created.
declare
    type emp_ntt is table of emp%rowtype;
    emp_nt1 emp_ntt;
[Code]....
But SF goes on to say he timed the execution of his NT equality method, comparing it with a SQL-only equivalent, and so I must be missing something. My understanding is that using %ROWTYPE declares a record type.
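That matches my reading of the documentation: %ROWTYPE gives the nested table a record element type, and the = operator is only supported when the element type is a comparable scalar. A small sketch of the case that does work, reusing the EMP/EMPCOPY3 tables from above but collecting a single scalar column:

declare
    type name_ntt is table of emp.ename%type;   -- scalar element type, so '=' is allowed
    nt1 name_ntt;
    nt2 name_ntt;
begin
    select ename bulk collect into nt1 from emp;
    select ename bulk collect into nt2 from empcopy3;
    if nt1 = nt2 then                            -- nested table equality ignores element order
        dbms_output.put_line('same');
    else
        dbms_output.put_line('different');
    end if;
end;
/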
I have two schemas with 149 tables in each schema. What I need to do is prove that the content (data) of the two schemas is identical. I know that all the table names between the two schemas are the same; I just need to prove that there is no difference in data.
So the query needs to prove that Schema A content = Schema B content
I know I can do a simple SELECT FROM schema_A.tab1 MINUS SELECT FROM schema_B.tab1, but since there are 149 tables, I am not sure this is an efficient way of doing it.
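One option is to let the data dictionary drive the comparison instead of writing 149 queries by hand. A rough sketch, assuming you connect as schema A, that a database link (here given the hypothetical name schema_b_link) points at schema B, and that none of the tables contain columns such as LOBs or LONGs that MINUS cannot handle:

declare
    v_diff number;
begin
    for t in (select table_name from user_tables order by table_name) loop
        execute immediate
            'select count(*) from ('
            || '(select * from ' || t.table_name
            || ' minus select * from ' || t.table_name || '@schema_b_link)'
            || ' union all '
            || '(select * from ' || t.table_name || '@schema_b_link'
            || ' minus select * from ' || t.table_name || '))'
            into v_diff;
        if v_diff > 0 then
            dbms_output.put_line(t.table_name || ': ' || v_diff || ' differing rows');
        end if;
    end loop;
end;
/

Any table that prints nothing has identical content in both directions of the MINUS.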
I have to clean up data from our tables (production environment) that contain millions of rows. The question is: apart from the partitioned-tables solution, what alternative solution does Oracle recommend?
Should we delete from these tables by using a cursor in a PL/SQL block, or export/import the whole database and, for the tables from which we want to remove the old rows, use the QUERY option of the Data Pump utility?
I have used both approaches and I have to admit that the Data Pump solution is much faster than the deletion, which suffers from disk I/O. The question again is which of these two methods is more reliable and less risky for the health of the database.
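For the record, the PL/SQL route does not have to be a slow row-by-row cursor delete; a common pattern is deleting in limited batches with intermediate commits so undo stays manageable. A minimal sketch, with a hypothetical table name and retention rule:

begin
    loop
        delete from big_table                          -- hypothetical table
         where created_dt < add_months(sysdate, -36)   -- hypothetical retention rule
           and rownum <= 10000;                        -- batch size
        exit when sql%rowcount = 0;
        commit;
    end loop;
    commit;
end;
/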
I came across an implementation where data from DB2 tables is moved to Oracle tables, for a BI solution, using some Oracle procedures called from MS SQL DTS packages that run as scheduled jobs. Just being curious: can this be done using OWB or ODI rather than the above detour? I suppose there are some changes being made in those procedures before the data is loaded into the Oracle tables; can't this be done using OWB/ODI as well? And can it be scheduled as jobs using OWB/ODI?
I need to export only the data from schemas or tables. How do I do that with Oracle Data Pump? When we use the SCHEMAS parameter it exports the whole schema, not only the data, right?
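As far as I know, Data Pump's CONTENT parameter controls this: CONTENT=DATA_ONLY exports just the rows, CONTENT=METADATA_ONLY just the definitions, and CONTENT=ALL (the default) both. For example (placeholder credentials, directory and file names):

expdp system/password schemas=HR content=DATA_ONLY directory=DATA_PUMP_DIR dumpfile=hr_data.dmp logfile=hr_data.log

The same CONTENT=DATA_ONLY setting can be combined with the TABLES parameter if you only want specific tables.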
I created a data warehouse in Oracle 10g with three dimensions and one cube; after that it creates four tables. How do I use an INSERT SQL statement to insert data into those tables, and how do I access them?
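Without knowing the exact table names the tool generated, here is only a generic sketch with hypothetical dimension and fact tables; the real table and column names come from DESCRIBE on the generated tables:

insert into time_dim (time_id, calendar_date)
values (20240101, date '2024-01-01');

insert into sales_fact (time_id, product_id, sales_amount)
values (20240101, 101, 250.00);

commit;

select * from sales_fact;   -- access the loaded rows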
Since orgid 1 has changed the dept from org1 to org2, I do not want it to appear in the final count. Results should only include orgid 2, since it didn't change any dept.
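A minimal sketch of that filter, with a hypothetical table name org_dept (adjust to the real table and plug it into the rest of the count query):

select orgid
  from org_dept
 group by orgid
having count(distinct dept) = 1;   -- keep only orgids that never changed dept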
There could be anything after the 2nd ~ in string2. Is there an easy way of trimming string2 to the first 14 characters? Or do I have to find the 2nd instance of ~ and then remove everything after (and including) it?
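Both are possible in plain SQL; a minimal sketch, assuming string2 is a column of a hypothetical table t:

select substr(string2, 1, 14)                            as first_14_chars,
       substr(string2, 1, instr(string2, '~', 1, 2) - 1) as before_2nd_tilde
  from t;

INSTR(string2, '~', 1, 2) returns the position of the 2nd ~ counting from the 1st character, so the second expression keeps everything before it; if there are always exactly 14 characters before that tilde, the two expressions give the same result.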
How do I select the transactions out of the database that occurred within 70 seconds of each other? The toll_date field is a TIMESTAMP field.
The problem is, I seem to get only transactions that occurred within 70 minutes of each other. On the timestamp field I break the math down into seconds in a day and add 70. I then subtract that value from and add it to the timestamp, and I should get anything between those values, right?
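With TIMESTAMP columns, interval arithmetic avoids the day-fraction math entirely. A rough sketch of a self-join, with hypothetical table and key names:

select a.trans_id, a.toll_date, b.trans_id, b.toll_date
  from toll_trans a
  join toll_trans b
    on b.toll_date between a.toll_date - interval '70' second
                       and a.toll_date + interval '70' second
   and a.trans_id <> b.trans_id;

If the column were a plain DATE instead, 70 seconds is 70/86400 of a day; mixing up the denominator (1440 minutes versus 86400 seconds) is usually where the minutes-versus-seconds error creeps in.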
Recently I have started working on PL/SQL coding. I have a requirement: if either the error or the unprocessed record count is 90% of the records to be processed, then the script has to fail. Currently I have a situation where the error count is 1 and the total to be processed is also 1.
In the code below, V_ERR is the error count, V_UPS is the unprocessed count, and V_PROCESSED_COUNT is the total to be processed.
I am expecting a PASS result but it is giving FAIL.
DECLARE
    V_ERR NUMBER := 0;
    V_UPS NUMBER := 0;
    V_PROCESSED_COUNT NUMBER := 0;
    NIN NUMBER;
BEGIN
    V_PROCESSED_COUNT := 1;
[Code] .......
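Without the full block it is hard to say where it goes wrong, but a minimal sketch of the 90% rule as stated would look like the following. Note that with V_ERR = 1 and V_PROCESSED_COUNT = 1 the ratio is 100%, which is at or above 90%, so FAIL is what this rule produces for that data:

DECLARE
    V_ERR             NUMBER := 1;
    V_UPS             NUMBER := 0;
    V_PROCESSED_COUNT NUMBER := 1;
BEGIN
    IF V_PROCESSED_COUNT > 0
       AND (V_ERR + V_UPS) / V_PROCESSED_COUNT >= 0.9 THEN
        DBMS_OUTPUT.PUT_LINE('FAIL');   -- 1/1 = 100%, which is >= 90%
    ELSE
        DBMS_OUTPUT.PUT_LINE('PASS');
    END IF;
END;
/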
I am working in Forms 6i, database 9i. I have a data block on table t1.
table t1: name(varchar2), date(varchar2)
data block: name (varchar2), date (varchar2) [I insert the date with a time stamp]
For the date column, I am inserting the date with a time stamp. While querying data, the user enters only the date (no time stamp), and I should still be able to query the data. I tried the data block WHERE condition.
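Since the column is a VARCHAR2 rather than a DATE, one option is to build the block's query condition in a PRE-QUERY trigger so the user's date matches on a prefix. A rough sketch only, assuming the stored strings begin with the same DD-MON-YYYY text the user types (block, item and format names are guesses):

-- PRE-QUERY trigger on the data block
IF :t1_block.date_col IS NOT NULL THEN
    SET_BLOCK_PROPERTY('t1_block', ONETIME_WHERE,
                       'date_col LIKE ''' || :t1_block.date_col || '%''');
    :t1_block.date_col := NULL;   -- stop Forms adding its own equality condition
END IF;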
I have a SQL query which joins several large tables (so indexes matter here) in an Oracle database. In the WHERE condition I use IS NULL on one of the date fields. The query takes 40 seconds to run, and if I comment out this one line it takes 1 second. This date field is indexed on the table, and I have learned that:
1. IS NOT NULL in the WHERE clause uses an index
2. IS NULL in the WHERE clause does not use an index
Is there any workaround to make the query faster, other than changing all the NULL date values in the table to some dummy value? In other words, can I force it to use the index?
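One well-known workaround is to make the NULLs indexable by adding a constant second column to the index; because the composite key is then never entirely NULL, every row (including those with a NULL date) is stored in the index and IS NULL can use it. A sketch with hypothetical names:

create index big_tab_date_ix on big_tab (date_col, 0);

-- with the composite index in place, this predicate can use an index range scan
select *
  from big_tab
 where date_col is null;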
The problem is: two tables, one with 2 million records and the other with 8,000 records.
For each record in one table, I need to check whether there is a similar string in the other table.
I've created a procedure that does the following:
opens the first cursor (select col1, col2, col3, col4, ... from table 1)
loop
    opens the second cursor (select col1 from table 2)
    loop
        if utl_match(col1, table2.col1) > 80 then
            insert col1, col2, col3, col4, ... into tableX
        end if
    close the second cursor
close the first cursor
The thing is that this procedure takes forever to finish: about 8 days.
Is it because I'm using the UTL_MATCH function? Is there a way to speed this up?
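A large part of those 8 days is probably the nested cursor loops and the per-row context switching rather than UTL_MATCH itself. One thing worth trying is a single set-based statement, ideally with some cheap blocking condition so not all 2,000,000 x 8,000 pairs have to be scored. A rough sketch, assuming JARO_WINKLER_SIMILARITY is the UTL_MATCH function in use and that matching strings share their first character (all table and column names hypothetical):

insert into tablex (col1, col2, col3, col4)
select t1.col1, t1.col2, t1.col3, t1.col4
  from table1 t1
  join table2 t2
    on substr(t1.col1, 1, 1) = substr(t2.col1, 1, 1)              -- cheap pre-filter
   and utl_match.jaro_winkler_similarity(t1.col1, t2.col1) > 80;  -- expensive score only on candidates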
I can give you the logic: first I'll sort by start_date (already sorted in the given example), then I'll compare the 2nd ID's start date with the 1st ID's end date. If it is less than the 1st ID's end date, that means there is an overlap, so I'll group those 2 IDs into the same group; if not, I'll put them into 2 different groups.
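That logic is the classic "gaps and islands" pattern, and it can be expressed with analytic functions instead of procedural code. A minimal sketch, assuming a table (here called ranges) with id, start_date and end_date columns:

select id, start_date, end_date,
       sum(new_grp) over (order by start_date, id) as grp
  from (select id, start_date, end_date,
               case
                   when start_date <= max(end_date) over (order by start_date, id
                                        rows between unbounded preceding and 1 preceding)
                   then 0   -- overlaps an earlier range: same group
                   else 1   -- no overlap: start a new group
               end as new_grp
          from ranges);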
I am new to Oracle, and I have a request to build a query.
We have a table that collects data from 7 AM to 8 PM; for every hour it generates 4 rows, and it has 43 session values as 43 columns.
Now, for every hour, I want to find the highest session value and at what time it occurred. In one hour it runs four times, e.g. 7:00, 7:15, 7:30 and 7:45, and each row in the table has a date, a time, and the 43 session columns.
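One way to express that, assuming 11g or later, is to UNPIVOT the 43 session columns into rows and then take the hourly maximum together with the time it occurred. All names are hypothetical, and only three of the 43 columns are listed for brevity:

select trunc(sample_time, 'HH') as hour_start,
       max(session_value) as max_session_value,
       max(sample_time) keep (dense_rank last order by session_value) as time_of_max
  from (select sample_time, session_name, session_value
          from session_stats
               unpivot (session_value
                        for session_name in (sess01, sess02, sess03 /* ... through sess43 */)))
 group by trunc(sample_time, 'HH')
 order by hour_start;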
I'm looking to see if there's a solution to my problem that I can use within the context of my business application's interface into an Oracle RDBMS. I have access to write custom SQL statements and functions, but I am NOT able to create stored procedures using the interface I have.
The challenge I am having is comparing date ranges. I have a table containing two columns labelled START TS TIME and END TS TIME, both of type DATE. I have figured out how to query each row against a given Next Session Start and Next Session End and determine whether each row overlaps that range.
I need a procedure that would be recursive: that is, set Next Session Start and Next Session End to the START TS TIME and END TS TIME of the first row, compare all rows against it, then set Next Session Start and Next Session End to the next row, compare all rows, and so on for all rows in the table. I want to know the maximum number of matches (i.e. the most time periods that overlap at once).
If I could use a stored procedure I could complete this easily. Are there other techniques (i.e. functions) available to leverage in order to compare each row of date ranges against ALL rows in the same table?
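If the interface accepts plain SELECT statements, analytic functions can do this without a stored procedure: turn every start into +1, every end into -1, order the events, and take the maximum of the running sum. A sketch, assuming the table is called session_times and the columns are named START_TS_TIME and END_TS_TIME:

select max(running_overlaps) as max_concurrent_sessions
  from (select sum(delta) over (order by ts, delta desc) as running_overlaps
          from (select start_ts_time as ts,  1 as delta from session_times
                union all
                select end_ts_time,         -1 as delta from session_times));

Ordering by delta desc counts a range that starts exactly when another ends as overlapping for that instant; order delta ascending instead if touching ranges should not count as overlaps.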
We have two different databases at different locations and on different servers: one in our company with SID 'A', and a remote database with SID 'B'. We have recently implemented a new module in database 'A' by creating lots of tables, functions, indexes, sequences, synonyms, etc., and now we need to install this on 'B'. The problem is that we have not documented which objects we created. First we need to create a DB link between the two, and then we need a tool to compare them and tell us which objects we need to create in database 'B'. Is there a tool for doing all this?
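Once the DB link exists, the data dictionary itself can produce the to-do list, so a separate tool is not strictly required. A minimal sketch, run as the schema owner on 'A', assuming a hypothetical link name db_b that points at the corresponding schema on 'B':

select object_type, object_name
  from user_objects          -- objects in the schema on A
minus
select object_type, object_name
  from user_objects@db_b;    -- objects already present on B

The result is everything (tables, indexes, sequences, synonyms, functions, and so on) that exists on 'A' but not yet on 'B'; DBMS_METADATA.GET_DDL can then be used to extract the creation scripts for those objects.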