SQL & PL/SQL :: Insertion Of Data From One To Other Schema Table In Same Database
Aug 9, 2011
I need to insert data from one schema's table into another schema's table in the same database. The columns are not equal, so when I try to use an insert statement it throws the error "not enough values". The situation is explained clearly below. The insert statement is run in the second schema, whose table is named b.
Table created.
SQL> ALTER TABLE B ADD
2 CONSTRAINT B_PK1
3 PRIMARY KEY
4 (ID);
Table altered.
SQL> create sequence b_seg start with 1;
Sequence created.
SQL> insert into b select b_seg.nextval,lexcom.a.* from lexcom.a,dual;
insert into b select b_seg.nextval,lexcom.a.* from lexcom.a,dual
*
ERROR at line 1:
ORA-00947: not enough values
So, for table b, a sequence value needs to be inserted into the ID column and the other columns need to be taken from table a. I can understand the error arises because the two tables do not have the same number of columns, so the insert statement fails.
I can write the statement manually by listing the columns of a and b, as follows, but this is a tedious process.
SQL> insert into b(ID,Name,rollno,address) select b_seg.nextval,lexcom.a.Name,lexcom.a.rollno,lexcom.a.address from lexcom.a,dual;
3 rows created.
But this is time-consuming, and I have tables with many more columns to insert. Is there any other way to solve this and write the insert statement?
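One workaround, sketched below on the assumption that the tables are as in the post (table a owned by LEXCOM; table b with a leading ID column fed by sequence b_seg): let the data dictionary generate the column list, or move the sequence into a trigger so only a's columns need listing. A sketch, not tested against the actual tables:

-- Generate the explicit column list from the data dictionary
-- (LISTAGG assumes 11gR2; on older releases concatenate in PL/SQL):
SELECT LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY column_id) AS cols
FROM   all_tab_columns
WHERE  owner = 'LEXCOM'
AND    table_name = 'A';

-- Or let a trigger supply the ID so the statement never mentions it:
CREATE OR REPLACE TRIGGER b_id_trg
BEFORE INSERT ON b
FOR EACH ROW
WHEN (NEW.id IS NULL)
BEGIN
  SELECT b_seg.NEXTVAL INTO :NEW.id FROM dual;
END;
/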
We are frequently getting the error below from the application while doing insertion/data loading into a table. The error concerns the primary key index.
Error: 'ORA-01502: index 'INDEX_NAME' or partition of such index is in unusable state'.
I set SKIP_UNUSABLE_INDEXES = TRUE using the command 'ALTER SYSTEM SET SKIP_UNUSABLE_INDEXES = TRUE' to avoid this, but we keep getting the same error, and every time I am rebuilding the index ('alter index INDEX_NAME rebuild') and redoing the DML operation.
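Note that SKIP_UNUSABLE_INDEXES does not suppress the error for a unique index, such as one enforcing a primary key, so the rebuild step cannot be skipped there. A hedged sketch for finding and scripting the rebuilds (the owner APP_OWNER is a placeholder):

SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD;'
FROM   dba_indexes
WHERE  status = 'UNUSABLE'
AND    owner  = 'APP_OWNER';

-- Partitioned indexes report their status per partition:
SELECT 'ALTER INDEX ' || index_owner || '.' || index_name ||
       ' REBUILD PARTITION ' || partition_name || ';'
FROM   dba_ind_partitions
WHERE  status = 'UNUSABLE';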
Would using a blob based schema load noticeably faster than a binary_double based schema?
Blob scenario: load 1 row of 5 columns (1 integer column, 4 BLOB columns) of size X
VERSUS
Double scenario: load 10,000 rows of 5 columns (1 integer column, 4 BINARY_DOUBLE columns) of size X
While the benefit of the rows approach is obviously the ability to query the values, I'd like a quick answer concerning the loading/insertion performance. Associative array binding is used for loading from a .NET client. Also, would the answer hold true for 200 columns instead of just 5?
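To make the comparison concrete, a hypothetical DDL sketch of the two shapes being compared (table and column names are made up):

CREATE TABLE readings_blob (
  id NUMBER(10),
  c1 BLOB, c2 BLOB, c3 BLOB, c4 BLOB
);

CREATE TABLE readings_dbl (
  id NUMBER(10),
  c1 BINARY_DOUBLE, c2 BINARY_DOUBLE,
  c3 BINARY_DOUBLE, c4 BINARY_DOUBLE
);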
I have a simple insert statement in an Oracle form which runs successfully in the Oracle database (SQL), but in the form it sits in a WHEN-BUTTON-PRESSED trigger, in this format:
Declare
  cnt number;
begin
  select count(*) into :control.cnt
  from ol_lcy_ndc
  where aan = :control.aan and event_id = 'ACL';
  if cnt = 0 then
    insert into ol_lcy_ndc (form_no, aan, regno, event_id, doev, status, edt, ludt, username)
    values (12345, 255257, 10030661, 'ACL', SYSDATE, 'DRAFT', SYSDATE, SYSDATE, ' ');
  else
    update ol_lcy_ndc
    set LUDT = to_date('09-09-2009','DD-MM-YYYY')
    where aan = :control.aan and event_id = 'ACL';
  end if;
end;
But after getting the count into cnt, it does nothing, neither insert nor update, from the Oracle form, although both statements execute correctly in the Oracle database. My problem may be linked with some property in the property palette; to my knowledge I checked: Insert Allowed --> Yes.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01": mvanmannekes/******** schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
The issue is slow insertion into one particular table (table A): insertion into all the other tables (B, C, D) in the same schema works properly, but inserting into table A alone takes a long time to complete. Daily insertion is 6,000 rows.
I have checked all the details: tablespace size, analysis of the table, analysis of its indexes, and so on. There is no error in the alert log file.
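A few more things worth checking from SQL, sketched on the assumption that the table is literally named A in the current schema: triggers or unusable indexes on just that table would slow its inserts without affecting B, C or D.

SELECT trigger_name, status FROM user_triggers WHERE table_name = 'A';
SELECT index_name, status, last_analyzed FROM user_indexes WHERE table_name = 'A';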
I am using Oracle Forms Builder 6i and Oracle Database 10g.
1: I have a table named 'info' with a column named 'InfoId' and some others. Another table named 'Handing' has the columns HtId, Value1 and Value2.
2: I made a form consisting of three data blocks: the first block takes criteria and the second block displays records against those criteria from the table 'info'.
3: I want checkboxes against those displayed records, so that when I select some checkboxes against 'InfoId', the selected 'InfoIds' are saved in the table 'Handing' in the column 'HtId'; in the same table, the data in columns Value1 and Value2 will be inserted through text boxes in the third data block of the same form.
I have two identical DB schemas (same structure, same data) and I need to update one of them when data in the other is updated. It is single-direction only (we change data in DB schema A and synchronize the data into DB schema B; there is no opposite direction). Only a small portion of the data (compared to the size of the schema) might be changed or added this way.
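One common fit for this pattern, sketched here with hypothetical names: a materialized view log on each source table in schema A plus a fast-refreshable materialized view in schema B, refreshed on whatever schedule suits the change volume.

CREATE MATERIALIZED VIEW LOG ON schema_a.orders WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW schema_b.orders_mv
REFRESH FAST ON DEMAND
AS SELECT * FROM schema_a.orders;

-- apply only the changes recorded since the last refresh:
BEGIN
  DBMS_MVIEW.REFRESH('SCHEMA_B.ORDERS_MV', 'F');
END;
/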
How can I avoid junk character insertion into an Oracle table? I have prepared scripts containing, say:
customer - info
After insertion, the data appears like this in production:
Customer ¿ info
We use the command prompt for script execution in the production environment. I am using PL/SQL Developer and SQL Developer for development. I cannot see the junk data in PL/SQL Developer or the latest SQL Developer, but it shows up in an old version of SQL Developer. In the application, too, I can see the junk data.
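The inverted question mark is typically what Oracle substitutes when the client NLS_LANG character set cannot represent, or mislabels, the bytes being sent, so comparing the two ends is the first diagnostic. A hedged sketch (the table and column names are placeholders):

-- What the database side expects:
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

-- What was actually stored, byte by byte:
SELECT info_text, DUMP(info_text, 1016) AS stored_bytes
FROM   customer_info;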
I'm trying to export a relatively large database, but it's a bit more complicated than that. For one schema I need a full export/import (data included).
For another 10 schemas I need them empty, with the exception of a table in some of them which needs to be exported/imported with all its data. Is it possible to do this with the Data Pump utilities (impdp, expdp)?
Afterwards I will be running some scripts to populate the DB instance with critical data / metadata.
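It can be done in passes; a hedged sketch with placeholder schema, table, directory and file names:

# Full export of the one schema that keeps its data:
expdp system/******** schemas=APP_FULL directory=DPUMP_DIR dumpfile=app_full.dmp

# Metadata-only export of the other ten schemas:
expdp system/******** schemas=S1,S2,S3 content=METADATA_ONLY directory=DPUMP_DIR dumpfile=meta_only.dmp

# Data for just the tables that must keep their rows:
expdp system/******** tables=S1.KEEP_ME,S2.KEEP_ME_TOO directory=DPUMP_DIR dumpfile=keep.dmp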
I am trying to access and modify data in a table of another schema which contains 80,000-90,000 records. My procedure takes about 30 minutes to complete the operation; I need faster access to and updating of the table data.
Details: I have two schemas, TEST and PROD. I am running the code below from the TEST schema.

/* CODE START HERE */
DECLARE
  exc_bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT (exc_bulk_errors, -24381);
  v_block_count NUMBER := 1000;
[Code]....
The above code takes about 30 minutes to process.
I have also tried another approach: creating a procedure in the PROD schema to update the COMPONENT_MASTER table, and calling that procedure from the above code, passing the component code.
/* PROCEDURE CALL FROM ABOVE CODE IN TEST SCHEMA */
PROD.PROCEDURE_TO_UPDATE(v_comp_code);
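Since the elided block registers the ORA-24381 handler, it presumably uses FORALL ... SAVE EXCEPTIONS; a hedged sketch of that pattern (the column names and staging source are assumptions):

DECLARE
  exc_bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT (exc_bulk_errors, -24381);
  TYPE t_codes IS TABLE OF prod.component_master.comp_code%TYPE;
  v_codes t_codes;
BEGIN
  SELECT comp_code BULK COLLECT INTO v_codes FROM test.component_stage;

  FORALL i IN 1 .. v_codes.COUNT SAVE EXCEPTIONS
    UPDATE prod.component_master
    SET    status = 'PROCESSED'
    WHERE  comp_code = v_codes(i);
EXCEPTION
  WHEN exc_bulk_errors THEN
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Row '    || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                           ' error ' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
    END LOOP;
END;
/

A single set-based UPDATE (or MERGE) joining the two tables would likely be faster still than either row-by-row variant.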
I need to move the tables, with their data, from the user SCOTT (in full) to another schema named TEST. In my case SCOTT is in the USERS tablespace, and for the TEST schema I have created a different tablespace named TEST_TBS.
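A hedged Data Pump sketch (the directory name is a placeholder, and the source tablespace is assumed to be USERS):

expdp system/******** schemas=SCOTT directory=DPUMP_DIR dumpfile=scott.dmp

impdp system/******** directory=DPUMP_DIR dumpfile=scott.dmp remap_schema=SCOTT:TEST remap_tablespace=USERS:TEST_TBS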
I would like to create a table in another schema (CBF) exactly as it already exists in my schema (TLC), without data, but the related indexes, synonyms and grants should be included.
How could I do this without using export/import? I am using TOAD 9.0.1.
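One export-free route, sketched with a placeholder table name: extract the DDL with DBMS_METADATA and run it against the CBF schema.

SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE', 'TLC') FROM dual;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('INDEX', 'MY_TABLE', 'TLC') FROM dual;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT', 'MY_TABLE', 'TLC') FROM dual;

-- The structure alone can also be copied directly (no rows, no indexes):
CREATE TABLE cbf.my_table AS SELECT * FROM tlc.my_table WHERE 1 = 0;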
There is a requirement to make a table's data in one database (e.g. the HR database) available in another database (e.g. the EMP database), instead of accessing it via a database link. In the EMP database (where the data needs to be cloned), the data will only be queried; no write operation will be done. Data in the remote database (the HR database) will occasionally be fully truncated and reinserted. The plan is to do a similar truncate-and-reinsert of the data (from the HR database) into the EMP database once a month using a DBMS Scheduler job. So basically the data in just one table needs to be cloned into another database.
Question: For this situation, is a regular table or Materialized view the right choice to clone the table in EMP database and why? The table in HR database (remote database) is not very big.
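If a materialized view is chosen, a hedged sketch (the db link and table names are placeholders); a complete refresh mirrors the truncate-and-reinsert behaviour described above:

CREATE MATERIALIZED VIEW hr_table_copy
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS SELECT * FROM hr_table@hr_db_link;

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_HR_COPY',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''HR_TABLE_COPY'', ''C''); END;',
    repeat_interval => 'FREQ=MONTHLY',
    enabled         => TRUE);
END;
/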
With no Oracle or database background, I have been tasked to figure out how to analyze reading values within XML files. I initially started by writing C# code to read through the XML files, pull out the values I need and summarize the results. Unfortunately, this took at least 5 hours to open all the XML files and summarize the results for a small portion of the EndPointChannelIds and XML files. I decided to store all the reading values from the XML files in a table so I can use SQL queries to do the analysis instead of having to process all the files each time.
Using the same C# code as before, I have used insert and update SQL statements to insert the reading values into a table. To my dismay, it takes about 3.5 hours to import one file, and there are 20 files a day. There are over 19,000 ChannelIds per file. I am currently using SQL Developer to view the data and have noticed XML Schemas and the XML DB Repository, but am not sure whether that would be the best approach to my problem. I need to increase the efficiency of the import process but am having difficulties. What is the most efficient way you have used to import XML files into a table structure?
Below is a portion of the xml file that I am working with.
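One set-based alternative worth sketching: stage each document as XMLTYPE and shred it with XMLTABLE in a single INSERT ... SELECT, rather than issuing one statement per reading. The element names, attributes and table names below are pure assumptions standing in for the real layout.

INSERT /*+ APPEND */ INTO readings (channel_id, read_time, read_value)
SELECT x.channel_id, x.read_time, x.read_value
FROM   xml_staging s,
       XMLTABLE('/Readings/Reading'
                PASSING s.xml_doc
                COLUMNS channel_id NUMBER        PATH '@ChannelId',
                        read_time  VARCHAR2(30)  PATH '@Time',
                        read_value BINARY_DOUBLE PATH '@Value') x;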
I have another problem with a CLOB column: when I try to insert data into it through the stored procedure, it shows the error 'ORA-01460: unimplemented or unreasonable conversion requested'. This error arises especially when the data to be inserted into the CLOB column has more than 4000 characters.
SQL> exec pi_test(1,'sysdate','text of clob column');
ORA-01460: unimplemented or unreasonable conversion requested
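The 4000-character limit applies to VARCHAR2 literals and binds in SQL, so one workaround sketch is to assemble a CLOB in PL/SQL first and bind it directly; the pi_test signature here is guessed from the call above.

DECLARE
  v_text CLOB;
BEGIN
  DBMS_LOB.CREATETEMPORARY(v_text, TRUE);
  -- build more than 4000 characters; as a PL/SQL LOB it is never a literal
  DBMS_LOB.WRITEAPPEND(v_text, 5000, RPAD('x', 5000, 'x'));
  pi_test(1, 'sysdate', v_text);   -- the CLOB is bound, not converted
  DBMS_LOB.FREETEMPORARY(v_text);
END;
/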
In procedure "update_emp", i am updating a row based on p_empno and if it is not present i.e. SQL%ROWCOUNT = 0, then I am inserting that row into emp table.
Whereas in procedure "update_emp1", I first check whether any row with that p_empno is present; if present, I update the row, else I raise an exception and insert the row.
In both procedures I am doing the same thing, but I am unable to understand which one is better and why.
create or replace procedure update_emp(p_empno int) is
begin
  update emp set ename = 'raj' where empno = p_empno;
  -- per the description above: insert when no row was updated
  if sql%rowcount = 0 then
    insert into emp (empno, ename) values (p_empno, 'raj');
  end if;
end;
/
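As a hedged aside, both variants can be collapsed into a single MERGE, which avoids the second statement entirely (the column list is assumed from the snippets above):

create or replace procedure upsert_emp(p_empno int) is
begin
  merge into emp e
  using (select p_empno as empno from dual) d
  on (e.empno = d.empno)
  when matched then update set e.ename = 'raj'
  when not matched then insert (empno, ename) values (d.empno, 'raj');
end;
/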