I need to write code that dynamically reads table names from user_tables (those starting with "cd") and loads data from a remote database that has tables with the same names and structure, filtered by load number.
The static PL/SQL statement would be:
DECLARE
BEGIN
   INSERT INTO "cd_patient"                     -- dynamically get the table name
   SELECT c1.pat_mrn                            -- dynamically get the column name
   FROM   remote_db.cd_patient@xxx c1
   WHERE  c1.load_number > 2;
END;
/
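A minimal sketch of how this could be made dynamic, assuming every local table whose name starts with CD has an identically named remote counterpart with a LOAD_NUMBER column; the database link name xxx, the optional remote schema prefix, and the load number value are all placeholders:

DECLARE
   v_sql VARCHAR2(4000);
BEGIN
   FOR t IN (SELECT table_name
             FROM   user_tables
             WHERE  table_name LIKE 'CD%')      -- assumes the names are stored in upper case
   LOOP
      -- one INSERT ... SELECT per table; the remote table is assumed to have
      -- the same name and structure as the local one
      v_sql := 'INSERT INTO ' || t.table_name ||
               ' SELECT * FROM ' || t.table_name || '@xxx' ||
               ' WHERE load_number > :ld';
      EXECUTE IMMEDIATE v_sql USING 2;          -- 2 is a placeholder load number
   END LOOP;
   COMMIT;
END;
/

If SELECT * is not acceptable, the column list can be assembled the same way from user_tab_columns before building the statement.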
I tried to write a procedure with a dynamically changing file name and file type so that I can dump some data. But the code I wrote is not producing any file at all, even though it compiles with no errors. I am running this code on the XE database, which is available for free from the Oracle website. All the directory settings are in place; there is no problem with them, because I can produce files with other code. The code is as follows:
CREATE OR REPLACE PROCEDURE dump_csv_file IS
   TYPE number_array IS VARRAY(10000) OF NUMBER;
   TYPE string_array IS VARRAY(10000) OF VARCHAR2(100);
   TYPE date_array   IS VARRAY(10000) OF DATE;
   TYPE v_file_array IS VARRAY(10000) OF UTL_FILE.FILE_TYPE;
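The snippet stops here, but a common reason for a procedure that compiles yet produces no file is that UTL_FILE.FCLOSE is never reached (for example because an exception is silently swallowed first). For reference, a stripped-down sketch of the pattern that does write and close a file; the directory object DATA_PUMP_DIR, the file name, and the emp query are assumptions:

CREATE OR REPLACE PROCEDURE dump_csv_file IS
   v_file UTL_FILE.FILE_TYPE;
BEGIN
   -- open in write mode; the directory object name is an assumption
   v_file := UTL_FILE.FOPEN('DATA_PUMP_DIR',
                            'dump_' || TO_CHAR(SYSDATE, 'YYYYMMDD') || '.csv', 'w');
   FOR r IN (SELECT empno, ename FROM emp) LOOP
      UTL_FILE.PUT_LINE(v_file, r.empno || ',' || r.ename);
   END LOOP;
   UTL_FILE.FCLOSE(v_file);      -- without this the buffered output may never reach disk
EXCEPTION
   WHEN OTHERS THEN
      IF UTL_FILE.IS_OPEN(v_file) THEN
         UTL_FILE.FCLOSE(v_file);
      END IF;
      RAISE;
END;
/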
I would like to delete records based on a parent/child relationship. To delete a record from a parent table, I first need to delete the innermost child records, then the other children, and finally the master record. How can I write this in a generic way in Oracle 10g PL/SQL?
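A minimal sketch of the delete order, using hypothetical tables parent_tab, child_tab and grandchild_tab linked by parent_id/child_id foreign keys; a fully generic version would walk user_constraints to discover the child tables, but the order of operations stays the same:

DECLARE
   v_parent_id parent_tab.parent_id%TYPE := 100;   -- hypothetical key value
BEGIN
   -- innermost child first
   DELETE FROM grandchild_tab g
   WHERE  g.child_id IN (SELECT c.child_id
                         FROM   child_tab c
                         WHERE  c.parent_id = v_parent_id);
   -- then the direct child
   DELETE FROM child_tab c
   WHERE  c.parent_id = v_parent_id;
   -- finally the parent (master) record
   DELETE FROM parent_tab p
   WHERE  p.parent_id = v_parent_id;
   COMMIT;
END;
/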
AWR report question regarding 'SQL ordered by Physical Reads (UnOptimized)'.
SQL ordered by Physical Reads (UnOptimized):
UnOptimized Read Reqs = Physical Read Reqs - Optimized Read Reqs
%Opt = Optimized Reads as a percentage of SQL Read Requests
%Total = UnOptimized Read Reqs as a percentage of Total UnOptimized Read Reqs
Total Physical Read Requests: 3,311,261 (captured SQL accounts for 13.5% of total)
Total UnOptimized Read Requests: 3,311,261 (captured SQL accounts for 13.5% of total)
Total Optimized Read Requests: 1 (captured SQL accounts for 0.0% of total)
Report columns: UnOptimized Read Reqs | Physical Read Reqs | Executions | UnOptimized Reqs per Exec | %Opt | %Total | SQL Id | SQL Module | SQL [code]....
Question 1) What is meant by 'UnOptimized Read Reqs' and 'Optimized Read Reqs'? Question 2) What is 'SQL ordered by Physical Reads' tuning?
In a PL/SQL procedure, I created a table (CREATE TABLE) using dynamic SQL. Then I used that table in the procedure for further processing. But while compiling, it gave an error that the table does not exist. I can understand that the table is not present in the DB, so the error came. But at the same time I need to create a table dynamically, use it and drop it. Does that mean I need to make every query referencing that table dynamic as well?
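Yes; because the compiler cannot resolve a table that does not exist at compile time, every statement that touches it has to be dynamic as well. A minimal sketch with a hypothetical work table tmp_work:

CREATE OR REPLACE PROCEDURE use_temp_work_table IS
   v_cnt NUMBER;
BEGIN
   -- DDL must be dynamic
   EXECUTE IMMEDIATE 'CREATE TABLE tmp_work (id NUMBER, val VARCHAR2(30))';

   -- so must every DML/query that references the table
   EXECUTE IMMEDIATE 'INSERT INTO tmp_work (id, val) VALUES (:1, :2)'
      USING 1, 'first row';
   EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM tmp_work' INTO v_cnt;

   DBMS_OUTPUT.PUT_LINE('rows: ' || v_cnt);

   EXECUTE IMMEDIATE 'DROP TABLE tmp_work';
END;
/

An alternative that avoids dynamic SQL altogether is to create a global temporary table once, outside the procedure, and simply reuse it each run.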
I am having performance issues on a query in a production environment that I cannot replicate in our test environment. Our test environment is an import of production. Version information is:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for Solaris: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production

When I run the query in test, I get the following results using TKPROF [code]....
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS

This performance problem started a few weeks ago, and the problem seems to be the high number of consistent reads during the fetch. The DBA tried restarting the instance and gathering fresh stats on the tables, but that has not made a difference. Now he wants to export the tables, drop the schema and re-import.
I need to create a replica of an existing table in the same schema, dynamically, so that modifications made to the primary table are reflected in the replica. We need to maintain this table as a snapshot: if a column is added or deleted on the primary table, it should automatically be added or deleted on the replica. How should I approach this?
I have to write a trigger on a table that contains a lot of columns, but I need to pick a specific row and check it without disturbing anything else. Is there a way to write a BEFORE UPDATE trigger on particular rows only, filtering out the unnecessary rows?
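A trigger WHEN clause can restrict firing to just the rows of interest, so the body never runs for the other rows; a sketch with hypothetical table and column names (parameter_tab, param_name, param_value):

CREATE OR REPLACE TRIGGER trg_param_check
BEFORE UPDATE ON parameter_tab
FOR EACH ROW
WHEN (NEW.param_name = 'CREDIT_LIMIT')    -- only fire for this specific row
BEGIN
   -- check only the row we care about; other rows are never touched
   IF :NEW.param_value < 0 THEN
      RAISE_APPLICATION_ERROR(-20002, 'CREDIT_LIMIT cannot be negative');
   END IF;
END;
/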
The requirement is that when a user updates the date from the front end (a Java application), the trigger should check the date and validate that it is a month-end date. For example (a sketch follows the examples below):

1) 04/21/2012 - wrong date
2) 04/30/2012 - correct date
3) 03/29/2012 - wrong date
4) 03/31/2012 - correct date
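A minimal sketch of such a validation, assuming a hypothetical table bookings with a DATE column booking_dt; LAST_DAY() returns the month-end date to compare against:

CREATE OR REPLACE TRIGGER trg_check_month_end
BEFORE UPDATE OF booking_dt ON bookings
FOR EACH ROW
BEGIN
   -- reject any date that is not the last day of its month
   IF TRUNC(:NEW.booking_dt) <> LAST_DAY(TRUNC(:NEW.booking_dt)) THEN
      RAISE_APPLICATION_ERROR(-20001,
         'Date must be a month-end date: ' ||
         TO_CHAR(:NEW.booking_dt, 'MM/DD/YYYY'));
   END IF;
END;
/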
I am new to PL/SQL; I have worked mostly on SQL Server. I have to change the table name dynamically based on a parameter, and I used a ref cursor to display the results in a report. When I execute it, it throws an error.
create or replace procedure test1 (
   p_eod_date IN     VARCHAR2,
   p_link     IN     NUMBER,
   c_rec      IN OUT SYS_REFCURSOR)
[code]....
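The procedure body is cut off above, but the usual pattern is to concatenate the table name into the statement text (a table name can never be a bind variable) and OPEN the ref cursor FOR that string. A sketch; the naming rule for the table, the column list and the :dt bind format are assumptions:

CREATE OR REPLACE PROCEDURE test1 (
   p_eod_date IN     VARCHAR2,
   p_link     IN     NUMBER,
   c_rec      IN OUT SYS_REFCURSOR)
IS
   v_table VARCHAR2(30);
   v_sql   VARCHAR2(2000);
BEGIN
   -- derive the table name from the parameter (naming rule is an assumption)
   v_table := 'EOD_TABLE_' || TO_CHAR(p_link);

   v_sql := 'SELECT * FROM ' || DBMS_ASSERT.SIMPLE_SQL_NAME(v_table) ||
            ' WHERE eod_date = TO_DATE(:dt, ''MM/DD/YYYY'')';

   OPEN c_rec FOR v_sql USING p_eod_date;
END;
/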
I need to get values from a lookup table dynamically, and I am getting the "missing keyword" error.
create or replace procedure xyz (cur1 OUT SYS_REFCURSOR) AS
   vsql varchar2(2000);
   CURSOR cursor1 IS
      SELECT dqs_src_col_nm, ldic_sequence
      FROM   look_up
      WHERE  ldic_sor = 'friend';
BEGIN
[code]....
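The rest of the procedure is truncated, but a frequent cause of the "missing keyword" error here is assembling the dynamic statement with missing separators or keywords. A sketch of one way to build the select list from the lookup cursor and open the ref cursor for it; the source table friend_src is an assumption:

CREATE OR REPLACE PROCEDURE xyz (cur1 OUT SYS_REFCURSOR) AS
   vsql     VARCHAR2(2000);
   col_list VARCHAR2(1000);
BEGIN
   -- assemble the select list in LDIC_SEQUENCE order
   FOR r IN (SELECT dqs_src_col_nm
             FROM   look_up
             WHERE  ldic_sor = 'friend'
             ORDER  BY ldic_sequence)
   LOOP
      col_list := col_list ||
                  CASE WHEN col_list IS NOT NULL THEN ', ' END ||
                  r.dqs_src_col_nm;
   END LOOP;

   vsql := 'SELECT ' || col_list || ' FROM friend_src';   -- table name is an assumption
   OPEN cur1 FOR vsql;
END;
/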
Create a function which will indicate whether a given value in a table is unique or not. Unique means the value occurs only once in the entire table.
The function should have this signature:
function IS_UNIQUE (tableName in varchar2, tableAttribute in varchar2) return number
.....
begin
   -- logic to check if given data is unique
   return 0;   -- return 0 if data is unique
   else
   return 1;   -- return 1 if data is duplicate
end;
Once I run this query
select attribute1 from table1 where IS_UNIQUE(table1,attribute1)=0
All records of attribute1 which are unique need to be fetched. Similarly, select attribute1 from table1 where IS_UNIQUE(table1,attribute1)=1 should return all records of attribute1 which are duplicates.
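A minimal sketch of such a function using dynamic SQL. To decide whether one particular value is unique, the function also needs that value, so this version adds a third argument (a slight deviation from the requested two-argument signature):

CREATE OR REPLACE FUNCTION is_unique (
   tableName      IN VARCHAR2,
   tableAttribute IN VARCHAR2,
   attrValue      IN VARCHAR2)          -- extra argument: the value being tested
   RETURN NUMBER
IS
   v_cnt NUMBER;
BEGIN
   EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM ' || DBMS_ASSERT.SIMPLE_SQL_NAME(tableName) ||
      ' WHERE ' || DBMS_ASSERT.SIMPLE_SQL_NAME(tableAttribute) || ' = :v'
      INTO v_cnt
      USING attrValue;

   RETURN CASE WHEN v_cnt = 1 THEN 0 ELSE 1 END;   -- 0 = unique, 1 = duplicate
END;
/

Usage would then be: SELECT attribute1 FROM table1 t WHERE is_unique('TABLE1', 'ATTRIBUTE1', t.attribute1) = 0. Note this runs one query per row, so on large tables a plain GROUP BY ... HAVING COUNT(*) = 1 is usually the better tool.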
Is there any way to generate columns dynamically, depending on the rows in a table, in 11g?
Ex: If the deptno values in the DEPT table are not fixed, how do we generate N columns based on deptno? The query below works when we hard-code the deptno values (10, 20, 30, 40). What if there are more departments and we don't know the department numbers in advance?
Connected to Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 Connected as dbo
SQL> SELECT * FROM (SELECT deptno, job, sum(sal) sal FROM SCOTT.emp GROUP BY job, deptno) PIVOT(sum(sal) FOR deptno IN(10, 20, 30, 40));
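For regular (non-XML) output the PIVOT IN-list cannot be a subquery, so the usual workaround is to build the IN-list dynamically and open a ref cursor for the generated statement (PIVOT ... XML is the built-in alternative when XML output is acceptable). A sketch:

DECLARE
   v_in_list VARCHAR2(4000);
   v_sql     VARCHAR2(4000);
   c_out     SYS_REFCURSOR;
BEGIN
   -- build e.g.  10 AS "10", 20 AS "20", ...  from whatever departments exist
   SELECT LISTAGG(deptno || ' AS "' || deptno || '"', ', ')
             WITHIN GROUP (ORDER BY deptno)
   INTO   v_in_list
   FROM   scott.dept;

   v_sql := 'SELECT * FROM (SELECT deptno, job, sal FROM scott.emp) ' ||
            'PIVOT (SUM(sal) FOR deptno IN (' || v_in_list || '))';

   OPEN c_out FOR v_sql;    -- fetch from c_out in the caller / client
END;
/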
My understanding of the DB_FILE_MULTIBLOCK_READ_COUNT parameter is that it affects only full table scans and fast full index scans; all other disk retrieval is single-block. If so, then maybe I'm reading this trace incorrectly:
select /*+ first_rows */ pk from test_join_tgt where pk >= 0 and rownum > 1
I am writing the following query:

SELECT DISTINCT a.list_type_code, a.list_type_name
FROM   jls_list_type a, jls_list_control b
WHERE  b.jalsa_srl = :jalsa_srl
AND    b.list_no  != a.list_type_code
ORDER  BY list_type_code
I just want to display only those records from JLS_LIST_TYPE that are not present in the other table, JLS_LIST_CONTROL. I wrote the above query for this, but it is not working.
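The != join compares every list type against every non-matching control row, which is why the DISTINCT result is wrong. An anti-join is probably what is needed; a sketch using NOT EXISTS:

SELECT a.list_type_code, a.list_type_name
FROM   jls_list_type a
WHERE  NOT EXISTS (SELECT 1
                   FROM   jls_list_control b
                   WHERE  b.jalsa_srl = :jalsa_srl
                   AND    b.list_no   = a.list_type_code)
ORDER  BY a.list_type_code;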
I am trying to write a trigger on a new table (dest_test). This is the first trigger I have ever attempted (I am a fairly new DBA), and I am having some trouble with the trigger body. It is a BEFORE INSERT trigger that needs to select from another table (dest) for a particular value being inserted (destination).
create table dest_test (
   destination varchar2(4)  not null,
   db_name     varchar2(10) not null
)
desc dest
[code]...
I am getting the exact opposite results from what I want, though. If the value appears in dest, it is inserting into dest_test... NOT what I want it to do! If the value doesn't appear in dest, it is throwing ORA-06512 and ORA-04088 errors. Is there a way to suppress these errors, or to exit gracefully from the block so that the trigger completes without throwing them?
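Without the original trigger body it is hard to say exactly where the logic is inverted, but a sketch of the usual shape is below: count the matching rows in dest and act on the count, which also stops NO_DATA_FOUND escaping the trigger as ORA-06512/ORA-04088. Here the insert is rejected when the destination already exists in dest; flip the comparison if the opposite rule is intended:

CREATE OR REPLACE TRIGGER trg_dest_test_bi
BEFORE INSERT ON dest_test
FOR EACH ROW
DECLARE
   v_cnt NUMBER;
BEGIN
   SELECT COUNT(*)
   INTO   v_cnt
   FROM   dest d
   WHERE  d.destination = :NEW.destination;

   -- reject the insert when the value already exists in dest;
   -- change "> 0" to "= 0" if the opposite rule is what is wanted
   IF v_cnt > 0 THEN
      RAISE_APPLICATION_ERROR(-20010,
         'Destination ' || :NEW.destination || ' already present in DEST');
   END IF;
END;
/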
PROMPT CREATE TABLE tst_fetch_vendor_data
CREATE TABLE tst_fetch_vendor_data (
   vendor_data_seq_no    NUMBER         NOT NULL,
   study_seq_no          NUMBER         NOT NULL,
   vendor_record_seq_no  NUMBER         NOT NULL,
   control_column_seq_no NUMBER         NOT NULL,
   resolved_value        VARCHAR2(4000) NULL,
   original_value        VARCHAR2(4000) NULL,
   transaction_user      VARCHAR2(30)   NOT NULL,
[code]....
It is just a temporary table in which data comes and goes. I am using it in the middle of a process, like below:
--EXECUTE IMMEDIATE 'TRUNCATE TABLE TST_FETCH_VENDOR_DATA DROP STORAGE';
insert /*+ append */ into tst_fetch_vendor_data
   (select * from vendor_data vd
    where vd.control_column_seq_no in
          (select control_column_seq_no from temp_control_column));

dbms_stats.gather_table_stats('EPDSYSREP','TST_FETCH_VENDOR_DATA',
   ESTIMATE_PERCENT => 100,
   METHOD_OPT       => 'for all indexed columns size auto',
   CASCADE          => True);
...then the code that uses that table. This table can contain anywhere from 0 to 108,000,000 records. Now my questions are:

1. How should I choose the sample size (currently it is 100%)? Can I use dbms_stats.auto_sample_size, and what would the effect be? (See the sketch after this list.)
2. Is dbms_stats a good approach, or should I use dynamic sampling?
3. What about using CTAS instead of inserting data through INSERT?
4. What about a PL/SQL table with an index, or a WITH clause query?
5. Do I need to rebuild the indexes after inserting data into the table?
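For question 1, the auto-sample variant of the call would look like the sketch below; with DBMS_STATS.AUTO_SAMPLE_SIZE Oracle chooses the sample itself, which is normally much cheaper than a 100% sample on a table of this size:

BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => 'EPDSYSREP',
      tabname          => 'TST_FETCH_VENDOR_DATA',
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,   -- let Oracle pick the sample
      method_opt       => 'for all indexed columns size auto',
      cascade          => TRUE);
END;
/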
I have no knowledge about barcodes. The problem is this: a hotel and restaurant issues loyalty cards to various customers, and these cards are presented by the customers from time to time in the hotel as well as the restaurant. The owner wants to generate a separate barcode for each card; when a card is presented, the barcode reader reads the code and the system calculates the amount of the discount/rebate. If the data entry operator entered the card code through the keyboard instead, there would be a chance of leakage or misuse of the card.
I have to compare my SVN source code (packages, views, etc.) with the production code in the database (views, etc.). Actually, we are not sure that what we have in SVN is the final version of the production code: the objects exist in the production database, but we don't have the latest scripts for them, and we have to deploy the SVN code on the UNIX box.
So here the comparison is between the OS files and the database objects.
I thought I would extract scripts for all the packages, views, etc. from the production database using DBMS_METADATA or some utility, save the code in OS files, and then compare each SVN file with the corresponding OS file manually using a comparison tool (TOAD provides one, for example).
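A sketch of that extraction step using DBMS_METADATA from SQL*Plus; the schema name APP_OWNER, the object types and the spool file name are assumptions (the REPLACE is needed because dba_objects stores 'PACKAGE BODY' with a space while GET_DDL expects 'PACKAGE_BODY'):

SET LONG 1000000 PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON
SPOOL prod_code.sql

-- one DDL statement per object; split into per-object files afterwards if needed
SELECT DBMS_METADATA.GET_DDL(REPLACE(object_type, ' ', '_'), object_name, owner)
FROM   dba_objects
WHERE  owner = 'APP_OWNER'                               -- assumption: schema to extract
AND    object_type IN ('PACKAGE', 'PACKAGE BODY', 'VIEW');

SPOOL OFF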
Snapshot_date is the last working date of that month, and is recorded only once in the table for a matrl_num and plant combination.
Inserting records into the table: refer to insert_extended_values.sql. Note: this is just a sample table, holding records for only two matrl_num values.
After inserting the records you can see they belong to two different matrl_num values. For each matrl_num we can have multiple values of plant, and for each plant a different value of std_cost (e.g. matrl_num = '0023173556').
Moreover, for each matrl_num I have some records in each month, based on the number of plants (with std_cost). Std_cost for a plant is updated in the table whenever it changes from the previous value. When there is no change in std_cost over the previous month's value, std_cost is NULL for that matrl_num, plant and month (the next month).
I want to update these NULL values with the latest (last month's non-NULL) std_cost for that matrl_num and plant. Such a change in std_cost can occur multiple times, so we need to keep carrying the last known std_cost forward until we encounter the next change in std_cost, and so on.
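A sketch of carrying the last non-NULL std_cost forward per matrl_num/plant in snapshot_date order, using LAST_VALUE ... IGNORE NULLS inside a MERGE; the table name std_cost_tab is an assumption:

MERGE INTO std_cost_tab t
USING (
   SELECT rowid AS rid,
          LAST_VALUE(std_cost IGNORE NULLS)
             OVER (PARTITION BY matrl_num, plant
                   ORDER BY snapshot_date
                   ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS filled_cost
   FROM   std_cost_tab
) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN
   UPDATE SET t.std_cost = s.filled_cost
   WHERE t.std_cost IS NULL;        -- only touch the rows that are currently NULL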
We have certain records such as SQL, PL/SQL, Reports, Forms, OAF etc. in a table. We want to capture a rating for each of these criteria, so we want a form to be displayed dynamically.
create table revenue (
   person  varchar2(23),
   month   varchar2(3),
   rev_amt number
)

and I have data in a file like below:

Person Jan Feb Mar Apr Mai Jun Jul Aug Sep Oct Nov Dez
--------------------------------------------------------
Schnyder,345,223,122,345,324,244,123,123,345,121,345,197
Weber,234,234,123,457,456,287,234,123,678,656,341,567
Keller,596,276,347,134,743,545,216,456,124,753,346,456
Meyer,987,345,645,567,834,567,789,234,678,973,456,125
Holzer,509,154,876,347,146,788,174,986,568,246,324,987
Müller,456,125,678,235,878,237,567,237,788,237,324,778
Binggeli,487,347,458,347,235,864,689,235,764,964,624,347
Stoller,596,237,976,876,346,567,126,879,125,568,124,753
Marty,094,234,235,763,054,567,237,457,325,753,577,346
Studer,784,567,235,753,124,575,864,235,753,864,634,678

I want to load it into the table in the following way:

Person    Month  Revenue
------------------------
Schnyder  Jan    345
Schnyder  Feb    223
Schnyder  Mar    122
Schnyder  Apr    345
Schnyder  Mai    324
Schnyder  Jun    244
Schnyder  Jul    123
Schnyder  Aug    123
Schnyder  Sep    345
Schnyder  Oct    121
Schnyder  Nov    345
Schnyder  Dez    197
........  ...    ...

How do I write a control file to load this data into the above revenue table?
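SQL*Loader does not easily split one delimited record into twelve rows, so one alternative (instead of a control file) is to expose the file as an external table and unpivot it on insert. A sketch; the directory object DATA_DIR and the file name are assumptions, and UNPIVOT needs 11g (on 10g, twelve UNION ALL branches do the same job):

-- assumption: the CSV sits in an existing directory object called DATA_DIR
CREATE TABLE revenue_ext (
   person VARCHAR2(23),
   jan NUMBER, feb NUMBER, mar NUMBER, apr NUMBER, mai NUMBER, jun NUMBER,
   jul NUMBER, aug NUMBER, sep NUMBER, oct NUMBER, nov NUMBER, dez NUMBER
)
ORGANIZATION EXTERNAL (
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY data_dir
   ACCESS PARAMETERS (
      RECORDS DELIMITED BY NEWLINE
      SKIP 2                                   -- skip the header and the dashed line
      FIELDS TERMINATED BY ',' )
   LOCATION ('revenue.csv')
);

INSERT INTO revenue (person, month, rev_amt)
SELECT person, month, rev_amt
FROM   revenue_ext
UNPIVOT (rev_amt FOR month IN (jan AS 'Jan', feb AS 'Feb', mar AS 'Mar',
                               apr AS 'Apr', mai AS 'Mai', jun AS 'Jun',
                               jul AS 'Jul', aug AS 'Aug', sep AS 'Sep',
                               oct AS 'Oct', nov AS 'Nov', dez AS 'Dez'));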
I am trying to load a parent-child hierarchy table. I have a table XXX which contains columns for parent, child, level and many more. Now I need to use table XXX to load my parent-child hierarchy table. How can I do this using SQL?
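It depends on the target structure, but if the goal is one row per node with its depth and path, a hierarchical query over XXX is the usual tool; a sketch with a hypothetical target hier_tab and the assumption that root rows have a NULL parent:

INSERT INTO hier_tab (child_id, parent_id, tree_level, path)
SELECT child,
       parent,
       LEVEL,                              -- depth computed by the hierarchical query
       SYS_CONNECT_BY_PATH(child, '/')     -- full path from the root to this node
FROM   xxx
START WITH parent IS NULL                  -- assumption: roots have no parent
CONNECT BY PRIOR child = parent;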
I have an existing table called table_1 that I made some changes to: I altered an existing column (overbook_max) from NUMBER(2) to NUMBER(3). Now I need to do a source search to find everywhere in the code that this table is referenced, to ensure there is no variable declaration that fixes this column to a specific precision. How do I do a source search?
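Inside the database the data dictionary already holds this information; a sketch of two queries, one for the objects that depend on the table and one for the exact source lines that mention it (useful for spotting a hard-coded NUMBER(2) declaration):

-- objects that depend on the table (packages, procedures, views, triggers)
SELECT owner, name, type
FROM   all_dependencies
WHERE  referenced_name = 'TABLE_1'
AND    referenced_type = 'TABLE';

-- exact source lines that mention the table, to spot hard-coded declarations
SELECT owner, name, type, line, text
FROM   all_source
WHERE  UPPER(text) LIKE '%TABLE_1%'
ORDER  BY owner, name, type, line;

Declarations anchored with table_1.overbook_max%TYPE pick up the new precision automatically; it is only the literal NUMBER(2) declarations found by the second query that need editing.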