SQL & PL/SQL :: Pull In Last Loaded Data From Staging Table To Target Table?
Mar 9, 2011 - I have a staging table and a target table. How do I pull in the last loaded data from the staging table into the target table?
I have 2 tables in two different sources. I have loaded data from the source to the destination, but some rows were missed during the load. I want to identify the missing rows.
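One common way to spot the missing rows is an anti-join (or a MINUS when the column lists match); a minimal sketch, assuming hypothetical names src_table and dst_table with a shared key id, and a database link if the two sources are separate databases:
SELECT s.*
FROM   src_table s
WHERE  NOT EXISTS (SELECT 1
                   FROM   dst_table d        -- or dst_table@dest_link
                   WHERE  d.id = s.id);

-- or, when both tables have identical column lists:
SELECT * FROM src_table
MINUS
SELECT * FROM dst_table;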
I need to insert data into Table A from Table B, where most of the fields are identical and Table A may have some additional fields.
ex: Table A: a,b,c,d,e,f
Table B: a,b,c,g,h
How can I insert this using user_tab_columns in a cursor, with the table names given as input? This needs to be configurable and reusable, rather than hard-coding all the fields in my logic.
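A sketch of one configurable approach: derive the columns common to both tables from user_tab_columns and build the insert dynamically. The procedure name is made up, LISTAGG needs 11gR2 (on older versions, concatenate the column names in a cursor loop instead), and both tables are assumed to be in the current schema:
CREATE OR REPLACE PROCEDURE copy_common_cols (p_src IN VARCHAR2, p_tgt IN VARCHAR2)
IS
   l_cols VARCHAR2(4000);
BEGIN
   -- columns present in BOTH tables
   SELECT LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY column_id)
     INTO l_cols
     FROM user_tab_columns
    WHERE table_name = UPPER(p_tgt)
      AND column_name IN (SELECT column_name
                            FROM user_tab_columns
                           WHERE table_name = UPPER(p_src));

   EXECUTE IMMEDIATE 'INSERT INTO ' || p_tgt || ' (' || l_cols || ') '
                  || 'SELECT ' || l_cols || ' FROM ' || p_src;
END copy_common_cols;
/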
We have a table with interval partitioning. This table is accessed very frequently. We are supposed to exchange partitions between this actual table and its corresponding staging table.
In order to keep the newly created partitions empty, is there a way to restrict other applications from using them before we push data from the staging table into the actual table?
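For reference, the exchange itself is typically a single DDL statement; a sketch with placeholder names, using the 11g FOR clause to address an interval partition by any value that falls inside it:
ALTER TABLE actual_table
   EXCHANGE PARTITION FOR (DATE '2013-03-01')   -- placeholder value mapping to the target partition
   WITH TABLE staging_table
   INCLUDING INDEXES
   WITHOUT VALIDATION;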
I have a requirement as follows: I need to load data into the target table every Saturday. My source file consists of data for several states, and every week I have to load one particular state's data into the target table. If in the first week I load AP data, then in the second week (on Saturday) Karnataka, and so on.
How can I schedule the data load to run every Saturday, picking a different state value automatically each week?
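A sketch of the scheduling side with DBMS_SCHEDULER; load_next_state is an assumed wrapper procedure that looks up which state is due (for example from a small week-to-state mapping table you maintain) and loads only that state's rows:
BEGIN
   DBMS_SCHEDULER.create_job (
      job_name        => 'WEEKLY_STATE_LOAD',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN load_next_state; END;',
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=WEEKLY; BYDAY=SAT; BYHOUR=2',   -- every Saturday at 02:00
      enabled         => TRUE);
END;
/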
I have 2 similar tables with the same columns. Data is inserted into the second table from the first for a condition, and the first table is truncated. Then data is moved from the second back to the first and the second table is truncated. The second table only stores data for a short period of time. The first table has indexes. Do I need any index on the second table?
The query on the second table is always "insert into second select * from first", and similarly "insert into first select * from second".
I have a table emp_up. Daily this table is uploaded by a SQL*Loader script (with the REPLACE option) run by a UNIX job. There is no timestamp column in this table. Is it possible to know when / at what time the table was uploaded?
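One low-effort check (10g and later) is to convert the rows' commit SCNs back to a timestamp; note that without ROWDEPENDENCIES the SCN is tracked per block, so the result is approximate, and SCN_TO_TIMESTAMP only works for fairly recent SCNs:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) AS approx_load_time
FROM   emp_up;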
I have a staging table STG_TABLEA which has a primary key discount_seq_no.
I am creating a PL/SQL procedure to populate the primary key (discount_seq_no) with a database sequence. The intent is to keep it populated with the next sequence value, incrementing by 1.
I am using Oracle 11.2.0.2 version.
I've put together the below code, not sure on next steps...
BEGIN
   UPDATE STG_TABLEA
      SET DISCOUNT_SEQ_NO = STG_TABLEA_SEQ.NEXTVAL;   -- STG_TABLEA_SEQ is an assumed sequence name
   COMMIT;
EXCEPTION
   WHEN OTHERS THEN
      RAISE;
END;
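The sequence would need to exist first; for example (name and starting value assumed):
CREATE SEQUENCE stg_tablea_seq START WITH 1 INCREMENT BY 1;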
I have a scenario where I need to insert the number of records loaded into the target table into a log table.
But the result set is handled in a cursor. How do I get the number of records the cursor handles?
/* Formatted on 08/03/2013 15:00:44 (QP5 v5.149.1003.31008) */
CREATE OR REPLACE PROCEDURE DASHBOARD75.SP_STG_MLY_GL_HKP_V1_00
AS
CURSOR GL_HKP
IS
SELECT CAL.MTH_NM,
CAL.YEAR,
GLM.BU_CD,
[code].......
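A minimal sketch of one way to do it: keep a counter inside the cursor loop and write it to the log table at the end. The cursor, target table, and log table below are placeholders standing in for the real ones in the procedure above:
DECLARE
   CURSOR c_src IS SELECT * FROM source_table;          -- placeholder for GL_HKP
   l_cnt PLS_INTEGER := 0;
BEGIN
   FOR rec IN c_src LOOP
      INSERT INTO target_table VALUES rec;              -- placeholder insert
      l_cnt := l_cnt + SQL%ROWCOUNT;                    -- 1 per single-row insert
   END LOOP;

   INSERT INTO load_log (run_date, rows_loaded) VALUES (SYSDATE, l_cnt);
   COMMIT;
END;
/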
I need to pull the 3 newest articles in a news table. Here's a list of rows including dates:
SQL> SELECT newsid, dateadded, ROWNUM from news ORDER BY dateadded DESC;
NEWSID DATEADDED ROWNUM
---------- --------- ----------
61 02-MAY-07 17
47 01-MAY-07 9
46 01-MAY-07 8
45 01-MAY-07 7
44 01-MAY-07 6
43 01-MAY-07 5
42 01-MAY-07 12
41 01-MAY-07 11
[code]....
This seems like such a basic thing to do.
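The odd ROWNUM values above are expected: ROWNUM is assigned before the ORDER BY is applied. Ordering in an inline view first and filtering outside gives the three newest rows:
SELECT *
FROM  (SELECT newsid, dateadded
       FROM   news
       ORDER  BY dateadded DESC)
WHERE  ROWNUM <= 3;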
I'm trying to pull all the degrees into a table based on which institution is selected. If institution is 'AAA' or 'BBB' then pull ACAD_PLAN, DESCR by ACAD_PROG where ACAD_PROG >= some value and <= some other value.
If institution is 'CCC' then pull ACAD_PLAN, DESCR by institution regardless of ACAD_PROG.
Something like
INSERT INTO table
SELECT
'value_a'
[Code].....
I don't have this formatted right because it keeps telling me 'missing keyword'.
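A sketch of the shape of the statement; the source table name, the institution column, and the bind names for the ACAD_PROG range are assumptions:
INSERT INTO target_table (acad_plan, descr)
SELECT acad_plan, descr
FROM   source_table
WHERE (institution IN ('AAA', 'BBB')
       AND acad_prog >= :low_value
       AND acad_prog <= :high_value)
   OR  institution = 'CCC';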
I am having a similar problem to the one above, only on a UNIX box where my datafile is delimited by "|". The last field is ITM_CMNT, declared as VARCHAR2(60) in Oracle. When I have exactly 60 bytes in the last field it rejects the record, saying actual is 61 and the max allowed is 60. If I reduce it to under 60 bytes then it is stored as a value enclosed with double quotes, and the enclosing double quote is on the next line.
"PROC,RAM,FLPY,HD,ACT MTX CLR DSP,D/PCMCIA,TRKBAL,LIT ION BA"
Expected: the one below is exactly 60 bytes.
PROC,RAM,FLPY,HD,ACT MTX CLR DSP,D/PCMCIA,TRKBAL,LIT ION BAT
LOAD DATA
INFILE *
INTO TABLE TMPTLI_LAWSON_ITM_MST
TRUNCATE
FIELDS TERMINATED BY "|"
(ITM_NO, HAZ_MAT_CD, ITM_SHRT_DS, ITM_SON "TRIM(:ITM_SON)", ADDED_DT DATE "YYYY-MM-DD",
AVL_CD , ITM_CST_AMT, ITM_SLL_AMT,
EXCHG_PRC_AMT , ITM_UOM "TRIM(:ITM_UOM)",
PCK_QTY INTEGER EXTERNAL, SPC_HNDL_CD "TRIM(:SPC_HNDL_CD)", EFF_DT DATE "YYYY-MM-DD",
ITM_CMNT "TRIM(:ITM_CMNT)")
I have an Oracle 10g database. On the app server I have an image directory that holds 20,000 .jpg files, each named with an id number. I have successfully queried for and posted one image to my web page, matching the image id number.
sample:
select substr(spriden_last_name,1,20)||', '||
substr(spriden_first_name,1,20)||' '||
substr(spriden_mi,1,1) stname,
'<img src = "/images/&1..JPG" width="400" height="400"/>' pic
from spriden
where spriden_id = '&1'
/
The &1 is the matching id number that is input by the user. My task now is to select multiple images, using a department field in the spriden table to pull the needed id numbers. I have not been successful in finding the proper format to pass the id number to the <img src field.
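A sketch of one way to return one <img> tag per person in a department by concatenating spriden_id into the image path on each row; the department column name here is an assumption, so substitute the real one:
select substr(spriden_last_name,1,20)||', '||
       substr(spriden_first_name,1,20)||' '||
       substr(spriden_mi,1,1) stname,
       '<img src="/images/'||spriden_id||'.JPG" width="400" height="400"/>' pic
from   spriden
where  spriden_dept_code = '&1'   -- assumed department column
/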
I have a table named SOURCE with about 1,000,000 rows, looking like this:
date varchar2 varchar2 varchar2
DATE ID1 ID2 AMOUNT
---------- ----------- --------- --------
2010-07-03 1403 1403 1500
2010-07-13 2015438 etc 188608 6074
2010-08-28 1151927 410,4222 1750
2010-08-28 13622012 41026 178.99
2010-08-28 1600246 John 65
I want to insert the rows into a table TARGET
date integer integer number
DATE ID1 ID2 AMOUNT
There are about 10,000 rows where ID1, ID2 and/or AMOUNT contain characters. I don't want to insert these rows, as the columns in the target table are INTEGER; I simply want to discard them.
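One way to discard the bad rows without pre-validating them is DML error logging (10gR2 and later): rows that fail the implicit character-to-number conversion land in an error table instead of failing the statement. Column names below are placeholders, since DATE itself is a reserved word:
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET');    -- creates ERR$_TARGET

INSERT INTO target (dt, id1, id2, amount)
SELECT dt, id1, id2, amount
FROM   source
LOG ERRORS INTO err$_target ('discard non-numeric rows') REJECT LIMIT UNLIMITED;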
We have a requirement to archive and purge tables dynamically based on control table input. For that we have to design a control table to gather the necessary information and pass it on to generate the queries.
I have designed the table as below, but in this case I am not able to handle the parent and child relationship.
Suppose one table needs to be archived and purged, and that table is a parent table with 2 child tables. First the required data will be inserted into the target table, then deleted from the source parent and child tables; and before deleting from the parent we have to delete the data from both child tables.
Suppose one table needs to be purged and that table is parent table and it is having 5 child tables, so before deleting from parent we have to delete data from all 5 child tables.
To handle this scenario how can I design my control table.
For archive and purge, the queries are like this:
INSERT INTO towner_name.ttable_name
(SELECT * FROM sowner_name.stable_name WHERE condition_column<=(sysdate-30));
DELETE FROM sowner_name.stable_name WHERE condition_column<=(sysdate-30);
For purge, the query is like this:
DELETE FROM sowner_name.stable_name WHERE condition_column<=(sysdate-30);
This is my control table, and I have a list of 300 tables to archive and purge.
CID SOWNER_NAME STABLE_NAME TOWNER_NAME TTABLE_NAME CONDITION_COLUMN PERIOD UNIT TYPE
1 wedb_au OFFER_HEADER wedb_au OFFER_HEADER LAST_DATE 30 D A
1 wedb_sa OFFER_CUSTOMER wedb_sa OFFER_CUSTOMER LAST_DATE 60 D A
1 wedb_au OFFER_SERVICE LAST_DATE 1 Y P
1 wedb_us OFFER_CUSTOMER LAST_DATE 90 D P
1 wedb_cn OFFER_CARDS UPDATE_DT 2 Y P
2 wedb_au ORDER_HEAD wedb_au ORDER_HEAD LAST_DATE 120 D A
2 wedb_us ORDER_CUSTOMER wedb_us ORDER_CUSTOMER LAST_DATE 150 D A
2 wedb_sa ORDER_HEAD wedb_sa ORDER_HEAD CREATION_DT 1 Y A
3 wedb_us DELIVERY_HEAD wedb_us DELIVERY_HEAD UPDATE_DT 50 D A
3 wedb_au DELIVERY_CARDS wedb_au DELIVERY_CARDS UPDATE_DT 200 D A
3 wedb_au DELIVERY_SERVICE wedb_au DELIVERY_SERVICE LAST_DT 100 D A
Here TYPE=A (archive) means insert into the target and then delete from the source.
TYPE=P (purge) means only delete.
wedb_au.OFFER_HEADER is Parent Table.
child tables for wedb_au.OFFER_HEADER are wedb_au.OFFER_SERVICE,wedb_au.OFFER_BODY,wedb_au.OFFER_EMAIL,OFFER_TAX.
wedb_au.OFFER_SERVICE is child table and parent for this table is wedb_au.OFFER_HEADER
wedb_sa.OFFER_CUSTOMER Stand alone table no relationship
wedb_us.OFFER_CUSTOMER Stand alone table no relationship
[code].......
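One hedged way to capture the parent/child dependency in the design is a separate relationship table keyed to the control row, so the purge procedure can delete the children (in order) before the parent; all names below are suggestions, not the actual control table:
CREATE TABLE archive_purge_child (
   cid          NUMBER,          -- matches CID in the control table
   sowner_name  VARCHAR2(30),    -- parent table owner
   stable_name  VARCHAR2(30),    -- parent table name
   child_owner  VARCHAR2(30),
   child_table  VARCHAR2(30),
   delete_order NUMBER           -- children deleted in ascending order, parent last
);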
I am trying to insert records into the target table if they do not already exist, and to update them if they do, pulling from three source tables. I had seen in posts that MERGE cannot be used with a cursor.
SQL> create or replace
2 PACKAGE sis_l_cpl_sis_reb_pgm_hist_pkg
3 IS
  4  /**************************************************************************************
  5  PACKAGE: sis_load_cpl_sis_reb_pgm_hist
  6  PURPOSE: Load CMPLY_SIS_REB_PGM_HIST with data from cmply_sis_ph_dtl, cmply_sis_sls_dtl,
  7           cmply_sis_excl_dtl (intial load)
  8  **************************************************************************************/
[code].......
Package created.
SQL> create or replace
  2  PACKAGE BODY sis_l_cpl_sis_reb_pgm_hist_pkg
  3  IS
  4  /**************************************************************************************
  5  PACKAGE: sis_l_cpl_sis_reb_pgm_hist_pkg
  6  PURPOSE: Load CMPLY_SIS_REB_PGM_HIST with data from cmply_sis_purh_dtl, cmply_sis_sls_dtl,
  7           cmply_sis_excl_dtl (intial load)
[code].......
Warning: Package Body created with compilation errors.
SQL> sho err
Errors for PACKAGE BODY SIS_L_CPL_SIS_REB_PGM_HIST_PKG:
LINE/COL ERROR
-------- -----------------------------------------------------------------
67/7 PL/SQL: SQL Statement ignored
75/19 PL/SQL: ORA-00926: missing VALUES keyword
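ORA-00926 aside, a set-based MERGE may remove the need for the cursor entirely. A minimal sketch only: the join key and column names below are placeholders (the real ones are inside the elided package code), and the first source table's name is wrapped in the paste above, so its spelling here is a guess:
MERGE INTO cmply_sis_reb_pgm_hist t
USING (SELECT key_col, col1 FROM cmply_sis_purch_dtl
       UNION ALL
       SELECT key_col, col1 FROM cmply_sis_sls_dtl
       UNION ALL
       SELECT key_col, col1 FROM cmply_sis_excl_dtl) s
ON (t.key_col = s.key_col)
WHEN MATCHED THEN
   UPDATE SET t.col1 = s.col1
WHEN NOT MATCHED THEN
   INSERT (key_col, col1) VALUES (s.key_col, s.col1);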
I am currently trying to transfer a partition of a table from a source to a target DB. For initial testing I use the SYS user on both sides to avoid privilege problems. I created the procedure below from code fragments found on the net. The partition CSS_201001 from table CTRL_SETTLED_SHIPMENTS shall be transferred (I tried both with the partition already existing and not existing on the target), but I always get the following error at DBMS_DATAPUMP.OPEN:
Exception breakpoint occurred at line -1 of DBMS_SYS_ERROR.pls.
$Oracle.EXCEPTION_ORA_39001:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3043
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4769
ORA-06512: at "SYS.TEST_DP", line 20
ORA-06512: at line 2
Listing:
create or replace
procedure test_dp is
-- Handle -- unique identifier for the datapump job
my_handle number;
ind NUMBER; -- Loop index
percent_done NUMBER; -- Percentage of job complete
[code].....
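ORA-39001 at OPEN usually points at the operation/job_mode/remote_link arguments themselves; also note that Data Pump does not export SYS-owned objects, so testing as SYS is itself a problem. For reference, a sketch of the OPEN and filter calls for pulling one partition of an application-owned table over a network link; the link, schema, and job names are placeholders:
DECLARE
   my_handle NUMBER;
   job_state VARCHAR2(30);
BEGIN
   my_handle := DBMS_DATAPUMP.OPEN(operation   => 'IMPORT',
                                   job_mode    => 'TABLE',
                                   remote_link => 'SRC_DB_LINK',
                                   job_name    => 'DP_CSS_201001');
   DBMS_DATAPUMP.METADATA_FILTER(my_handle, 'SCHEMA_EXPR', 'IN (''APP_OWNER'')');
   DBMS_DATAPUMP.METADATA_FILTER(my_handle, 'NAME_EXPR', '= ''CTRL_SETTLED_SHIPMENTS''');
   DBMS_DATAPUMP.DATA_FILTER(my_handle, 'PARTITION_EXPR', 'IN (''CSS_201001'')',
                             'CTRL_SETTLED_SHIPMENTS');
   DBMS_DATAPUMP.SET_PARAMETER(my_handle, 'TABLE_EXISTS_ACTION', 'APPEND');  -- if the table already exists
   DBMS_DATAPUMP.START_JOB(my_handle);
   DBMS_DATAPUMP.WAIT_FOR_JOB(my_handle, job_state);
END;
/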
create table src(id number,val number,data varchar2(100));
insert into src values (1,1,'SUN');
insert into src values (2,2,'WED');
insert into src values (3,3,'MON');
create table trg(id number,val number,data varchar2(100));
Required rows to be inserted in the target table:
insert into trg values (1,1,'SUNDAY');
insert into trg values (2,0,NULL);
insert into trg values (2,0,NULL);
insert into trg values (3,0,NULL);
insert into trg values (3,0,NULL);
insert into trg values (3,0,NULL);
Based on the value of column val in the source table src, I need to populate my target table trg. If the value of val is 1 then only one row is created in the target. If the value of val in src is 2 then the target is populated with 2 rows. The target columns are mapped as follows:
1) id - as it is.
2) val - if val in src is 1, map it as it is. If val is more than 1, create as many rows as the value of val; id stays as it is and val and data will be null.
3) data - if val in src is 1, expand the abbreviation; otherwise null.
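A sketch of a single insert that produces exactly those rows, using a row generator to expand each source row val times; the abbreviation expansion is hard-coded and the generator's upper bound (10) is an assumed maximum for val:
INSERT INTO trg (id, val, data)
SELECT s.id,
       CASE WHEN s.val = 1 THEN 1 ELSE 0 END,
       CASE WHEN s.val = 1 THEN
            CASE s.data WHEN 'SUN' THEN 'SUNDAY'
                        WHEN 'MON' THEN 'MONDAY'
                        WHEN 'WED' THEN 'WEDNESDAY'
            END
       END
FROM   src s,
       (SELECT LEVEL n FROM dual CONNECT BY LEVEL <= 10) g
WHERE  g.n <= s.val;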
I am trying to insert records into the target table from three source tables by using a function in a package, and I am getting the following error.
SQL> create or replace
  2  PACKAGE casadm.sis_load_cpl_sis_reb_pgm_hist
  3  IS
  4  /**************************************************************************************
[code]....
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00221: 'FN_LOAD1T_CPL_SIS_REB_PGM_HIST' is not a procedure or is undefined
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
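PLS-00221 typically just means the function is being invoked as if it were a procedure (for example with EXEC name;). A function has to be called in an expression; a sketch, with the return type and (empty) parameter list assumed:
DECLARE
   l_result NUMBER;   -- assumed return type
BEGIN
   l_result := casadm.sis_load_cpl_sis_reb_pgm_hist.fn_load1t_cpl_sis_reb_pgm_hist;
   DBMS_OUTPUT.put_line('Result: ' || l_result);
END;
/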
I have a few tables in Oracle 9i/10g, and they already have data in them. I am trying to migrate data coming from various source systems into these Oracle tables. There is a chance that after loading I might get some unwanted data into these tables.
How do I remove just the data which I have loaded recently, without disturbing the original data the tables already have?
I could back up those tables and reload the data if there is any problem, but I am looking for a different approach. I just don't want to change the existing system, as a lot of users use it.
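If the load is recent enough that undo is still available, one option that avoids a backup/reload (10g only, not 9i) is to flash the table back to just before the load; the table name and time window below are placeholders, and row movement must be enabled:
ALTER TABLE my_table ENABLE ROW MOVEMENT;
FLASHBACK TABLE my_table TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);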
I am facing a problem while inserting data into the core table from staging (both the core table and staging are in the same database but in different schemas).
I am using the command below:
INSERT /*+ APPEND PARALLEL(core_table, DEFAULT, DEFAULT) */ INTO core_schema.core_table
SELECT * FROM staging_schema.staging_table;
-- schema/table names are placeholders; ALTER SESSION ENABLE PARALLEL DML is needed for the insert itself to run in parallel
The core table is quite big, contains millions of records, is partitioned on date, and has old stats from 2007. Fetching data from staging is very fast, approx 1 million records in 2 mins. We are inserting one day's data daily into the core table from staging and it is taking 3 hrs.
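Given the 2007 statistics, refreshing them on the core table (at least for the partitions being loaded) is worth trying as part of the investigation; a sketch with placeholder owner/table names:
BEGIN
   DBMS_STATS.gather_table_stats(
      ownname     => 'CORE_SCHEMA',
      tabname     => 'CORE_TABLE',
      granularity => 'PARTITION',
      cascade     => TRUE,
      degree      => 4);
END;
/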
We have Development, Staging/UAT (installed on XX.XX.XX.10) and Production (installed on XX.XX.XX.20) environments respectively. I have questions regarding getting the data from the Production environment into the Staging environment. The overall PROD database size is around 250 GB.
STAGING DATABASE DETAILS
SID : STG_DB
Staging Schema Name :schema_UAT
Replication Schema Name :schema_PrdReplica ( This is the schema where the production data gets loaded daily)
PROD DATABASE DETAILS
SID : PROD_DB
Prod Schema Name : schema_PROD
What is happening now:
----------------------------
There is a script (stored proc) written on the staging (STG_DB.schema_PrdReplica) environment which executes daily at night and does the replication. Currently we use DBMS_DATAPUMP to get the ENTIRE data/metadata (everything) from Production to Staging. It is taking significantly more time: approx 8 hours to replicate everything from PROD_DB.schema_PROD to STG_DB.schema_PrdReplica.
What I am expecting :
-----------------------------
I want to reduce the replication time.
I have heard about Level 0 (Full BackUp ) and Level 1 ( Incremental Cumulative ) Backups in RMAN. I am planning to take PROD_DB.schema_PROD Full Backup (Level 0) on Sunday and will restore that on STG_DB.schema_PrdReplica immediately. And on weekdays ( Mon - Fri ) I will take Level 1 ( Incremental Cumulative ) and will restore that on STG_DB.schema_PrdReplica
I am assuming that by doing so, the overall replication time will be reduced. How can I implement this with a script, given that the two databases are on different machines?
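For reference, the RMAN commands for the two backup levels mentioned are the standard ones below; keep in mind that RMAN backups are physical and database-wide, so they restore the whole database rather than a single schema:
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;             -- Sunday full (level 0)
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  -- Mon-Fri cumulative incrementals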
I select data from a SQL Server column of type NVARCHAR and load it into a VARCHAR2 column, but once loaded the data looks very different and can't be read.
Say the name column value in SQL Server is 'Rajesh'; in Oracle it looks like a series of square shapes.
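This usually points at a character-set mismatch between the source data and the target column. A first diagnostic step is to check what the database and national character sets actually are (an NVARCHAR2 target column may be the better fit if the database character set cannot hold the data):
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');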
In the database we use for transfer articulation, there are numerous tables delivered with the product. The institution decided not to use certain fields, and all instances of those fields have no data. In other words, there might be a field in the table called INSTCD, but no record in the table has ever had any data inserted into that particular field. The tables hold thousands of records, and we don't necessarily know which of the fields have never been used (no list was retained and no one who was initially involved in the decisions is available to ask), and there are multiple such fields in each table. How can I write a query that pulls only the fields in the table that contain data? In the example below, the SHRTRIT table contains a field called ACTIVITY_DATE, but there is no data in any record in that particular field, so I don't want it to show up in the output. In this particular case I KNOW not to pull this field in a SELECT, but in a case where there might be 130,000 records and I DON'T know if a field has data in it, how could I do that?
Let me know if the question I'm asking doesn't make sense and I'll attempt to word it better.
create table SHRTRIT
(
SBGI_CODE VARCHAR2(6) NOT NULL,
SBGI_DESC VARCHAR2(10),
ACTIVITY_DATE VARCHAR2(10)
)
[Code]....
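A sketch of one approach: loop over user_tab_columns and count the non-null values per column with dynamic SQL, reporting only the columns that actually contain data (fresh optimizer statistics would also expose this via num_nulls, but the count is exact):
DECLARE
   l_cnt NUMBER;
BEGIN
   FOR c IN (SELECT column_name
             FROM   user_tab_columns
             WHERE  table_name = 'SHRTRIT')
   LOOP
      EXECUTE IMMEDIATE
         'SELECT COUNT(*) FROM shrtrit WHERE ' || c.column_name || ' IS NOT NULL'
         INTO l_cnt;
      IF l_cnt > 0 THEN
         DBMS_OUTPUT.put_line(c.column_name || ' has ' || l_cnt || ' non-null rows');
      END IF;
   END LOOP;
END;
/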
I am loading content of an XML file into a table using SQL loader.Below is my Control file script -
LOAD DATA
INFILE *
INTO TABLE xx_cc_response_xml_stg TRUNCATE
xmltype(XML_DATA)
FIELDS
( COLUMN_ID constant 1,
file_name filler char(4000),
XML_DATA LOBFILE(file_name) TERMINATED BY EOF)
BEGINDATA
B2B_MasterDataUpdate_20120906152137.xml
------------------------------------------------------------------------------------
The file B2B_MasterDataUpdate_20120906152137.xml is correct and the XML is well formed. When I query the XML_DATA (datatype XMLType) column in the table, I cannot see any content in the record, and it appears as (XMLTYPE). When I parse this XML using the below,
select value(d)
from xxnbn_cc_response_xml_stg a,
table(xmlsequence(extract(a.xml_data,'/InventorySearch'))) d;
------------------------------------------------------------------------------------
I get this error:
------------------------------------------------------------------------------------
ORA-00600: internal error code, arguments: [qmcxdsSelf4], [], [], [], [], [], [], [], [], [], [], []
00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
*Cause: This is the generic internal error number for Oracle program
exceptions. This indicates that a process has encountered an
exceptional condition.
*Action: Report as a bug - the first argument is the internal error number
------------------------------------------------------------------------------------
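An ORA-00600 really needs a service request, but as a workaround it may be worth rewriting the query with XMLTABLE, since xmlsequence/extract are deprecated; a sketch against the table loaded by the control file above:
SELECT x.column_value
FROM   xx_cc_response_xml_stg a,
       XMLTABLE('/InventorySearch' PASSING a.xml_data) x;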
I need to pull data into a cursor, then split this data into 3 different tables, each having the same number of rows and a selected subset of columns from the original. I can pull the data, but then I can only access it one row at a time via FETCH, and I can't load into the 3 new tables one row at a time.
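Depending on the requirement, the cursor may not be needed at all: a multi-table INSERT can split each source row across the three tables in one pass. A sketch with placeholder table and column names:
INSERT ALL
   INTO table_one   (id, col_a) VALUES (id, col_a)
   INTO table_two   (id, col_b) VALUES (id, col_b)
   INTO table_three (id, col_c) VALUES (id, col_c)
SELECT id, col_a, col_b, col_c
FROM   source_table;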
I am writing a query where I'd like to pull one year's worth of data. Ideally I want to prompt for the END DATE and have the query go back in time one year from that date.
Here is what I've got after doing some research online... but it's not quite working for me.
select *
from mrtcustomer.profile
where reg_type = 'B'
and contact_type = 0
and active_ind = 'Y'
[code].....
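A sketch of the date filter using ADD_MONTHS to step back one year from the prompted end date; the date column name (reg_date) and the date format are assumptions:
select *
from   mrtcustomer.profile
where  reg_type = 'B'
and    contact_type = 0
and    active_ind = 'Y'
and    reg_date >  add_months(to_date('&end_date', 'DD-MON-YYYY'), -12)
and    reg_date <= to_date('&end_date', 'DD-MON-YYYY');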
I am trying to update records in the target table based on the records coming in from the source. If an incoming record is already present in the target table I update it; otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records. The target table is partitioned on the calendar key. I implement this whole logic using Informatica. Looking at the Informatica session log I find that the Informatica code is fine, but it is the update part that takes a long time (more than 5 days to update one million records). Find the TARGET TABLE definition and the UPDATE query below.
TARGET TABLE:
CREATE TABLE OPERATIONS.DENIAL_REGRET_FACT
(
CALENDAR_KEY INTEGER NOT NULL,
DAY_TIME_KEY INTEGER NOT NULL,
SITE_KEY NUMBER NOT NULL,
RESERVATION_AGENT_KEY INTEGER NOT NULL,
LOSS_CODE VARCHAR2(30) NOT NULL,
PROP_ID VARCHAR2(5) NOT NULL,
[code].....
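One set-based alternative to row-by-row updates is a single MERGE pushed to the database, with the partitioning column in the join so partitions can be pruned. The staging table name and the choice of join keys below are assumptions; only columns from the DDL above are used:
MERGE INTO operations.denial_regret_fact t
USING stg_denial_regret s
ON (    t.calendar_key          = s.calendar_key
    AND t.day_time_key          = s.day_time_key
    AND t.site_key              = s.site_key
    AND t.reservation_agent_key = s.reservation_agent_key)
WHEN MATCHED THEN
   UPDATE SET t.loss_code = s.loss_code,
              t.prop_id   = s.prop_id
WHEN NOT MATCHED THEN
   INSERT (calendar_key, day_time_key, site_key, reservation_agent_key, loss_code, prop_id)
   VALUES (s.calendar_key, s.day_time_key, s.site_key, s.reservation_agent_key, s.loss_code, s.prop_id);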
I have a requirement to import text files which are generated by the 3D modelling software Xsteel, where it records all geometric information, and I want to import this information into an Oracle table.
CREATE TABLE dstv_head ( wo_no VARCHAR2(12),struct VARCHAR2(12),rev_no NUMBER,
mark VARCHAR2(12),pos VARCHAR2(12),grade VARCHAR2(12),qty NUMBER,PROFILE VARCHAR2(24),TYPE VARCHAR2(12),
len NUMBER,width_web NUMBER,width_bottom NUMBER,flange_thk NUMBER,web_thk NUMBER,radius NUMBER,kgm NUMBER,
kgm1 NUMBER,kgm2 NUMBER,bevel_plus NUMBER,bevel_minus NUMBER,holes_yn VARCHAR2(1),holes_v_yn VARCHAR2(1),
hole_x_dim NUMBER,hole_y_dim NUMBER,hole_dia NUMBER,no_of_holes NUMBER)
-- All the data has to go under a specific field; for example, ** 9005.nc1 will go into the wo_no field and 1239401A will go under struct.
ST
** 9005.nc1 --WO_NO
1239401A - STRUCT
1 -REV_NO
9005 -MARK
9005 --POS
S275JR --GRADE
2 --QTY
[code]....
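Since the format is one value per line rather than delimited records, one hedged approach is to read the raw lines through an external table and parse them in PL/SQL before inserting into dstv_head; the directory path and file name below are placeholders:
CREATE OR REPLACE DIRECTORY dstv_dir AS '/path/to/nc1/files';

CREATE TABLE dstv_raw (
   line_txt VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY dstv_dir
   ACCESS PARAMETERS (
      RECORDS DELIMITED BY NEWLINE
      NOBADFILE NOLOGFILE
      FIELDS ( line_txt POSITION(1:4000) CHAR(4000) )
   )
   LOCATION ('9005.nc1')
)
REJECT LIMIT UNLIMITED;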
There is a requirement to make a table's data in one database (eg: HR database) available in another database (eg: EMP database), instead of accessing it over a database link. In the EMP database (where the data needs to be cloned), the data will only be queried; no write operations will be done. The data in the remote database (eg: HR database) will occasionally be fully truncated and reinserted. The plan is to do a similar truncate and reinsert of the data (from the HR database) into the EMP database once a month using a DBMS Scheduler job. So basically the data in just one table needs to be cloned in another database.
Question: For this situation, is a regular table or a materialized view the right choice to clone the table in the EMP database, and why? The table in the HR database (remote database) is not very big.
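For comparison, the materialized-view option is roughly the sketch below; a complete refresh fits here because the remote table is truncated and reloaded rather than changed incrementally. The database link and object names are placeholders:
CREATE MATERIALIZED VIEW emp_copy_of_hr_table
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS SELECT * FROM hr_table@hr_db_link;

-- refreshed monthly, e.g. from a scheduler job:
-- EXEC DBMS_MVIEW.REFRESH('EMP_COPY_OF_HR_TABLE', 'C');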