SQL & PL/SQL :: Automate Inserts From One Schema To Another
Dec 18, 2012
We have two databases: a local one (localdb, user rakdb) and a remote one (remotedb, user rakdb). We need to keep the data in one table, om_item, in sync; users insert data into it daily, and a user currently sends us an insert script every day to run on the local database to add the new records. I managed to create a file which records all the inserts into one text file in a directory. Can we have a scheduler pick this text file up from the specified folder and send it by mail using UTL_MAIL? (A sketch follows the snippets below.)
CREATE TABLE ITEM (IT_CODE VARCHAR2(12),IT_NAME VARCHAR2(20));
INSERT INTO ITEM VALUES ('A','AAA');
CREATE OR REPLACE DIRECTORY MY_DIR AS 'C:\TEMP';
CREATE OR REPLACE PROCEDURE it_status
[Code]..
Procedure created.
EXEC it_status
HOST TYPE c:\temp\aaaa.txt
INSERT INTO ITEM (IT_CODE, IT_NAME) VALUES ('A','AAA');
COMMIT;
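For the scheduling and mailing part, here is a minimal sketch, assuming UTL_MAIL is installed, SMTP_OUT_SERVER is set, and the spooled inserts land in MY_DIR as aaaa.txt; the addresses, job schedule and the 32K body cap are assumptions to adapt.

CREATE OR REPLACE PROCEDURE mail_item_inserts IS
  l_file UTL_FILE.FILE_TYPE;
  l_line VARCHAR2(4000);
  l_body VARCHAR2(32767);
BEGIN
  l_file := UTL_FILE.FOPEN('MY_DIR', 'aaaa.txt', 'r');
  BEGIN
    LOOP
      UTL_FILE.GET_LINE(l_file, l_line);
      l_body := l_body || l_line || CHR(10);
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN NULL;  -- end of file reached
  END;
  UTL_FILE.FCLOSE(l_file);
  UTL_MAIL.SEND(sender     => 'rakdb@local.example',
                recipients => 'dba@remote.example',
                subject    => 'Daily OM_ITEM inserts',
                message    => l_body);
END;
/

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'MAIL_ITEM_INSERTS_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'MAIL_ITEM_INSERTS',
    repeat_interval => 'FREQ=DAILY;BYHOUR=18',
    enabled         => TRUE);
END;
/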
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01": mvanmannekes/******** schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
How to automate getting data from Oracle into Excel? I have a table emp in the Oracle database, and I need columns of emp (e.g. first name, last name, id) from that table in Excel.
So I need a script which, when scheduled, creates an Excel-readable file in a particular location. I was told we have to create a directory from SQL and, using UTL_FILE, write a script and then schedule it.
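Here is a minimal sketch of the UTL_FILE approach, assuming a directory object MY_DIR already exists and that emp carries columns first_name, last_name and id (names approximated from the post); Excel opens the resulting .csv directly, and the procedure can be scheduled with DBMS_SCHEDULER like any other stored procedure.

CREATE OR REPLACE PROCEDURE emp_to_csv IS
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('MY_DIR', 'emp.csv', 'w');
  UTL_FILE.PUT_LINE(l_file, 'FIRST_NAME,LAST_NAME,ID');  -- header row
  FOR r IN (SELECT first_name, last_name, id FROM emp) LOOP
    UTL_FILE.PUT_LINE(l_file, r.first_name || ',' || r.last_name || ',' || r.id);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/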
I'm trying to create an install script that installs Discoverer 10g R2 with its needed patch and opatches applied, without any user interaction. I've already created the necessary response files and a batch file to sequence it. The installer should work when it is placed on a server with the main folder shared, and it does so flawlessly.
The user sees a DOS window which kindly states that he has to wait for the primary installer to finish before hitting Enter to start the patch installer. The problem I'm having is that, on slow networks, it takes a while for the primary Discoverer 10g installer to show a window, and of course the user isn't always patient enough to wait for it; he hits Enter before the primary installer has appeared, causing the patch installer to start before Discoverer is completely installed.
Is there a way to avoid this? Or am I wrong in using a batch file to sequence this install? A second problem is the interaction needed while applying opatches; can this be automated as well?
here are the contents of my batch file:

net use x: /delete
net use X: \\servername\Oracle_cd\disco10gr2 /persistent:no
@ECHO off
cls
:start
I have a proc which dynamically creates scripts to be executed; e.g., using DBMS_OUTPUT.PUT_LINE it creates the following scripts to be executed:
Now, what I am really looking for is to explore options where we can spool the results into a file and run another proc to execute all of these scripts through it.
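One option is to let SQL*Plus capture the DBMS_OUTPUT lines into a script file and run that file straight afterwards; a sketch, where script_generator_proc stands in for the poster's procedure and the spool path is an assumption:

SET SERVEROUTPUT ON SIZE UNLIMITED
SET FEEDBACK OFF TRIMSPOOL ON
SPOOL /tmp/generated.sql
EXEC script_generator_proc
SPOOL OFF
@/tmp/generated.sql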
I have migrated a database from PostgreSQL to Oracle. All sequences were migrated with their default values (START WITH 1). I already have 213 entries in a table and I want to begin using the sequence for the 214th entry (i.e., replace with START WITH 214).
How can I automate updating the START WITH value of a sequence to the maximum entry in its table every time I migrate data?
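A sketch of one way to script this on versions without ALTER SEQUENCE ... RESTART: temporarily widen the increment so a single NEXTVAL jumps the sequence to MAX(id), after which the next call returns MAX(id) + 1. Table, column and sequence names are placeholders.

DECLARE
  l_max NUMBER;
  l_cur NUMBER;
BEGIN
  SELECT NVL(MAX(id), 0) INTO l_max FROM my_table;
  SELECT my_seq.NEXTVAL INTO l_cur FROM dual;      -- current position
  IF l_max > l_cur THEN
    EXECUTE IMMEDIATE 'ALTER SEQUENCE my_seq INCREMENT BY ' || (l_max - l_cur);
    SELECT my_seq.NEXTVAL INTO l_cur FROM dual;    -- jump to MAX(id)
    EXECUTE IMMEDIATE 'ALTER SEQUENCE my_seq INCREMENT BY 1';
  END IF;
END;
/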
I have created a trigger that will automatically insert the next number from the sequence into the id column.
create trigger test_trigger
  before insert on test
  for each row
begin
  select test_seq.nextval into :new.id from dual;
end;
/
I have to automate the TDPOSYNC utility, an IBM tool for Oracle backup. I tried the expect utility in a UNIX shell script, but for some reason that utility is not available on the production server. Now I have been asked to use PL/SQL to automate it instead. I am facing some problems (a sketch follows the list):
1. How to call TDPOSYNC commands from PL/SQL?
2. How to pass runtime input parameters to TDPOSYNC, like user/password, date range, etc.?
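PL/SQL cannot drive an interactive OS program directly, but a DBMS_SCHEDULER external job can launch one. A sketch, with the executable path and argument as assumptions; since TDPOSYNC prompts for user/password and a date range, job_action would in practice point at a small wrapper shell script that supplies those answers (for example from a here-document).

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name            => 'TDPOSYNC_JOB',
    job_type            => 'EXECUTABLE',
    job_action          => '/opt/tivoli/tsm/client/oracle/bin64/tdposync',
    number_of_arguments => 1,
    auto_drop           => FALSE);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('TDPOSYNC_JOB', 1, 'syncdb');
  DBMS_SCHEDULER.ENABLE('TDPOSYNC_JOB');
END;
/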
I would like to know how I can automate the export from production to the test server. I need direction on creating a process to import data from production (server A) into the test server (server B).
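One common approach (an assumption, not something the poster specified) is a scheduled DBMS_DATAPUMP export on production, with the dump file then transferred to and imported on test. A minimal export sketch; the schema, file and directory names are placeholders.

DECLARE
  l_job NUMBER;
BEGIN
  l_job := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                              job_mode  => 'SCHEMA',
                              job_name  => 'NIGHTLY_EXPORT');
  DBMS_DATAPUMP.ADD_FILE(l_job, 'prod_schema.dmp', 'DP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(l_job, 'SCHEMA_EXPR', 'IN (''APP_OWNER'')');
  DBMS_DATAPUMP.START_JOB(l_job);
  DBMS_DATAPUMP.DETACH(l_job);
END;
/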
I have been plagued by people logging into my database and making changes while a clone is in process. Having said that, I am looking to lock accounts and unlock them when I am done.
I envision my code looking something like this:
sqlplus -s / <<END
SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF
SET HEADING OFF
spool /tmp/lockusers.sql
select 'alter user ' || username || ' account lock;'
  from dba_users
 where username not in (....)
   and account_status = 'OPEN';
spool off
END

sqlplus -s / <<END
@/tmp/lockusers.sql
END
When it comes time to unlock the accounts, I want to unlock only those accounts I previously locked, not all of them. Is there a query I can use to tell when the accounts were locked, or some other way of going about this, so I don't unlock accounts that were locked before my lock script ran?
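DBA_USERS does have a LOCK_DATE column, but a simpler route is to generate the matching unlock script at the same moment as the lock script, selecting only the accounts that are OPEN right now; a sketch:

spool /tmp/unlockusers.sql
select 'alter user ' || username || ' account unlock;'
  from dba_users
 where username not in (....)
   and account_status = 'OPEN';
spool off

Running lockusers.sql immediately afterwards guarantees that unlockusers.sql touches exactly the accounts this run locked and nothing that was locked beforehand.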
We are designing a three-tiered system (client, application/web server, database server) that will allow clients, through a web interface, to select a text file from the operating system and load it into an intermediate table (import database table). Many users will do this concurrently and the data will load into a single table. The text files come in monthly for about 100 firms. No user is able to insert or update another user's data (there is a check-out system). There are about 30 to 40 users using the system for various functions, but it is possible for 10 to 20 users to import data at one time. The files can have anywhere from 2,000 to 25,000 records at a record length of 398. I am concerned about having a good design strategy as well as decent performance.
Problems with each of the Oracle loaders.
1) External tables - cannot read text files on the application server (which is where they want the text files to go); secondly, you cannot create an instance of an external table, yet multiple users would be pointing the external table at different text files and loading at the same time.
2) SQL*Loader - is mainly an OS-level tool and I am not sure how I could programmatically point it at a different text file each time a user wants to load. The client would have to have the ability, through code, to point SQL*Loader to the correct file name.
I had a creative approach and was wondering if this would work. I would like to use external tables just like a connection pool. I would propose, first, a scheduled OS job to move files to the database server. I would create about 20 external tables with 20 different directory objects, plus a stored procedure for the user to call, passing in the file name and audit info as needed. I would use a "load lock pool" table (my invention) to hold the name, or a code, for each external table in use. The procedure loads this code into the load lock pool table when an external table is in use and deletes it when the load is completed. The procedure would check, through a series of IF statements, whether a particular external table was in use; if in use (it exists in the load lock pool table), it would check the next available external table until one not in use is encountered. Potentially 20 users at one time (though not likely) would be loading into the same table. My questions:
1) Could Oracle handle this strategy? What do I need to consider performance wise with the possibility of so many users loading into a single table at one time?
2) Do any of you maybe have another strategy to do this?
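On question 1: 10 to 20 sessions inserting into one heap table is well within Oracle's comfort zone; contention usually shows up on index hot spots or sequence-driven keys rather than on the table itself. On question 2, here is a sketch of the load-lock-pool idea in PL/SQL, using FOR UPDATE SKIP LOCKED so concurrent sessions each claim a different slot; the pool, stage and external table names are invented, and SKIP LOCKED behaviour should be verified on your version.

CREATE TABLE load_lock_pool (
  ext_table VARCHAR2(30) PRIMARY KEY,
  in_use    CHAR(1) DEFAULT 'N' NOT NULL
);

CREATE OR REPLACE PROCEDURE load_user_file(p_filename IN VARCHAR2) IS
  CURSOR c_free IS
    SELECT ext_table FROM load_lock_pool
     WHERE in_use = 'N'
     FOR UPDATE SKIP LOCKED;        -- skip slots other sessions hold
  l_ext load_lock_pool.ext_table%TYPE;
BEGIN
  OPEN c_free;
  FETCH c_free INTO l_ext;
  IF c_free%NOTFOUND THEN
    CLOSE c_free;
    RAISE_APPLICATION_ERROR(-20001, 'No free external table slot; retry later');
  END IF;
  UPDATE load_lock_pool SET in_use = 'Y' WHERE CURRENT OF c_free;
  CLOSE c_free;
  COMMIT;                           -- publish the claim
  -- repoint the claimed external table at this user's file (DDL auto-commits)
  EXECUTE IMMEDIATE 'ALTER TABLE ' || l_ext || ' LOCATION (''' || p_filename || ''')';
  EXECUTE IMMEDIATE 'INSERT INTO import_stage SELECT * FROM ' || l_ext;
  COMMIT;
  UPDATE load_lock_pool SET in_use = 'N' WHERE ext_table = l_ext;  -- release slot
  COMMIT;
END;
/

A production version would also release the slot in an exception handler so a failed load does not strand an external table as "in use".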
I have written the procedure below to dump table data to a .csv file. The problem is that I have 20 tables holding data for 75 studies; every table holds data for all 75 studies. I need to export the data from the 20 tables for each study, but this procedure requires me to run it 75 (studies) * 20 (tables) times. Is there any technique where, instead of my manually giving the table name and study name, it will take them from a text file in which the 75 studies are defined, or is there any better way?
create or replace procedure dump_table_to_csv1(p_tname    in varchar2,
                                               p_dir      in varchar2,
                                               p_filename in varchar2)
is
  l_output    utl_file.file_type;
  l_theCursor integer default dbms_sql.open_cursor;
[code]........
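One way to avoid 75 x 20 manual runs is a driver block that loops over the study and table names, which could live in ordinary tables (or an external table over your text file). A sketch with invented driver tables; note the procedure would still need an extra parameter (or a filtered view per study) to restrict each extract to one study's rows.

BEGIN
  FOR s IN (SELECT study_name FROM study_list) LOOP
    FOR t IN (SELECT table_name FROM table_list) LOOP
      dump_table_to_csv1(
        p_tname    => t.table_name,
        p_dir      => 'MY_DIR',
        p_filename => s.study_name || '_' || t.table_name || '.csv');
    END LOOP;
  END LOOP;
END;
/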
I have multiple inserts to make in a table in an Oracle database. I have already tried several ways to do it, but it always gives errors. How can I make multiple inserts at the same time?
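If the rows are known up front, a single multi-table INSERT ALL does them in one statement; a sketch against the ITEM table used earlier on this page, with illustrative values:

INSERT ALL
  INTO item (it_code, it_name) VALUES ('A', 'AAA')
  INTO item (it_code, it_name) VALUES ('B', 'BBB')
  INTO item (it_code, it_name) VALUES ('C', 'CCC')
SELECT * FROM dual;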
I'm on an 11.2 DB and need to create an audit table that will be populated by DB triggers on other tables (after insert, update and delete). The triggers will only ever insert data into the audit table. I have read that for insert-only tables you should define PCTFREE as 0. Is this correct? Do I need to set any other parameters (like PCTUSED) for tables that are only ever inserted into?
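PCTFREE 0 is indeed the usual choice for insert-only audit tables, since no row will ever grow through an update; and in an ASSM tablespace PCTUSED is ignored, so nothing else is needed. An illustrative sketch with invented columns:

CREATE TABLE audit_trail (
  audit_id   NUMBER,
  table_name VARCHAR2(30),
  action     VARCHAR2(6),
  changed_by VARCHAR2(30),
  changed_at TIMESTAMP DEFAULT SYSTIMESTAMP
) PCTFREE 0;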
My databases are 11.1.0.7 and 11.2.0.3 with TDE tablespace encryption and ASM storage. The wallet needs to be open for MRP to work in a physical standby database. I already have a solution for the primary instances to automate wallet open (e.g., using a startup trigger for 11.1.0.7). However, I cannot find a solution to automate the wallet open operation on standby instances (to issue ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "...").
Manually opening it every time a standby instance is started is not feasible.
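One option worth checking (an assumption, not from the post) is converting the encryption wallet to an auto-login wallet, so any instance that can read the generated cwallet.sso opens it without a password; the cwallet.sso file is then copied next to ewallet.p12 in the standby's ENCRYPTION_WALLET_LOCATION. The wallet path below is a placeholder.

orapki wallet create -wallet /etc/oracle/wallet -auto_login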
I want to load 10 million records from a staging table to a master table. One piece of logic must apply during the load: if a row is already present in the master table, we need to update the corresponding master row; otherwise we insert the row into the target table.
I have been using the bulk collect and FORALL method to load the data; it shows better performance than row-by-row cursor processing. But as per the Oracle documentation we cannot use SELECT statements inside a FORALL, so we could not put this lookup logic inside the FORALL.
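Since the requirement is exactly "update if present, otherwise insert", a single MERGE usually replaces the whole bulk-collect construction and lets Oracle process the 10 million rows set-wise; a sketch with invented key and column names:

MERGE INTO master_tab m
USING staging_tab s
   ON (m.item_id = s.item_id)
WHEN MATCHED THEN
  UPDATE SET m.item_name = s.item_name,
             m.item_qty  = s.item_qty
WHEN NOT MATCHED THEN
  INSERT (item_id, item_name, item_qty)
  VALUES (s.item_id, s.item_name, s.item_qty);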
I have data in one table with 6 columns, where the user may update values in all 6 columns or may enter only 3 or 4 of them, and inserts should take place based on that. This is similar to my previous thread: I am using an IF condition to check each column for NULL, and if it is not NULL I make an insert. But is there any easier way to do this?
insert into ot_po values ('ss-po',1,ph_sys.nextval);
insert into ot_inspect_head values (inh_sys.nextval,'ss-ins',1,'ss-po',1);
commit;
select * from ot_inspect_item;

--Now if the inspection user issues the update statement, it will delete this row
--from ot_inspect_item and reinsert the values based on
--ii_flex_01, ii_flex_02, ii_flex_03
[code]...
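For the six-column case, one alternative to a chain of IF-NOT-NULL tests is a single INSERT that lists all six columns, letting NULL parameters simply store NULL (or substituting defaults with NVL); a sketch with invented table and parameter names:

INSERT INTO my_table (c1, c2, c3, c4, c5, c6)
VALUES (p_c1, p_c2, NVL(p_c3, 'N/A'), p_c4, p_c5, p_c6);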
I have prepared shell scripts to do parallel inserts on my DB table (LEGACY_SYSTEM).
There is a trigger (AFTER INSERT, FOR EACH ROW) on the above table. I am calling a packaged function inside the trigger to do the required operation; finally it inserts records into my target table (PRICE_CHANGE).
Expectation: if I insert 10 rows into the LEGACY_SYSTEM table, it should do a few updates and finally insert 10 rows into the PRICE_CHANGE table.
Result: 10 rows got inserted into LEGACY_SYSTEM. All the updates are successful, but I can see only 4 rows in the PRICE_CHANGE table. If I run it a second or third time, the results are all perfect.
If, instead of using these shell scripts, I insert rows one by one manually into the LEGACY_SYSTEM table, I get all the expected results and the results are consistent. If you look at my scripts below, you will understand the problem better.
I am calling test_global.sh through the UNIX session; all the records get inserted into the LEGACY_SYSTEM table, but a few rows are missing from the PRICE_CHANGE table.
If I remove the '&' symbol and execute, the results are perfect. But the requirement is not to remove the '&' symbol. I have been facing this problem for the past month.
I have a PL/SQL proc which accepts a few parameters and loops through a cursor, running a bunch of insert statements with quite a few IF conditions.
Each insert statement has a value which I want to increment by (+1) every time an insert statement is executed in the same loop. This is for a student housing database, for room preferences, so 1 is the first preference, 2 is the second, etc.
Please take a look at the code below: in the INSERT VALUES () I have put a ? where I want the number to increment from.
There are a lot more inserts which I haven't included below. I hope I have made myself clear, as this has been quite difficult to explain. So, for example, if the next two inserts run, the first one should insert 1 and the second 2.
BEGIN
  FOR rec IN c1 LOOP
    IF c1%FOUND THEN
      INSERT INTO table (PK_A, fk_rms_id, application_type, application_person_type)
      VALUES (NULL, rec.pk_rms_id, app_type, app_person_type)
      RETURNING PK_APPLICATION_NO INTO x;
[Code] ........
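A sketch of the counter idea: declare a local variable, bump it in the same branch as each executed insert, and bind it where the ? sits; the cursor, table and column names below are stand-ins for the poster's code.

DECLARE
  CURSOR c1 IS SELECT pk_rms_id FROM rms_applicants;
  l_pref PLS_INTEGER := 0;           -- running preference number
BEGIN
  FOR rec IN c1 LOOP
    l_pref := l_pref + 1;            -- bump once per executed insert
    INSERT INTO application (fk_rms_id, preference_no)
    VALUES (rec.pk_rms_id, l_pref);
  END LOOP;
  COMMIT;
END;
/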
I am trying to create a procedure that inserts parameters into a table and then returns the number of rows inserted to the calling block. The procedure compiles fine but is not returning the number of rows inserted. My code is as follows:
STORED PROCEDURE

CREATE OR REPLACE PROCEDURE CarMasterInsert_sp (
  registration       IN  VARCHAR2,
  model_name         IN  VARCHAR2,
  car_group_name     IN  VARCHAR2,
  date_bought        IN  DATE,
  cost               IN  NUMBER,
  miles_to_date      IN  NUMBER,
  miles_last_service IN  NUMBER,
  status             IN  CHAR,
  rowsInserted       OUT NUMBER)
[code]....
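The usual fix is to read SQL%ROWCOUNT into the OUT parameter immediately after the INSERT, and to make sure the caller actually reads that parameter; a trimmed sketch (most columns omitted):

CREATE OR REPLACE PROCEDURE CarMasterInsert_sp (
  p_registration IN  VARCHAR2,
  rowsInserted   OUT NUMBER) AS
BEGIN
  INSERT INTO car_master (registration) VALUES (p_registration);
  rowsInserted := SQL%ROWCOUNT;   -- rows affected by the statement just executed
END;
/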
I'm running 11.2.0. I am looking at tuning a SQL statement, and the question was brought up as to the maximum number of inserts per transaction in 11g, and whether it exceeds 1000. I haven't found a solid answer yet, but I thought 10g was higher than 1000.
My first thought was to implement a commit loop on every 1000 rows, as that is how things were handled in the past. But I found an article about redo logs and performance saying the commit loop is a horrible practice.
What I haven't found is the better methodology. My scenario could encounter inserts of as many as 20,000 rows at a time.
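There is no 1000-row cap on inserts per transaction; a transaction is bounded by undo space, not by a row count. For 20,000 rows a common pattern is array processing with one commit at the end; a sketch with invented source and target tables:

DECLARE
  CURSOR c IS SELECT * FROM staging_rows;
  TYPE t_tab IS TABLE OF c%ROWTYPE;
  l_tab t_tab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_tab LIMIT 5000;   -- array fetch
    EXIT WHEN l_tab.COUNT = 0;
    FORALL i IN 1 .. l_tab.COUNT
      INSERT INTO target_rows VALUES l_tab(i);
  END LOOP;
  CLOSE c;
  COMMIT;   -- single commit, sized by undo rather than an arbitrary 1000 rows
END;
/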
Find an appropriate script to automate the Oracle DBs on one server? This DB server has 6 instances. We have always done the starting up and shutting down manually, although we have a reference script that does this, written for Oracle v7.3.4. We also want to include automatic start/stop of dbconsole so the instances can be accessed via OEM.
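The standard building blocks are dbstart/dbshut, which read /etc/oratab (set each instance's restart flag to Y), wrapped in an init script, plus emctl for dbconsole. A sketch; the paths and the ORACLE_UNQNAME value are assumptions, and the emctl lines would be repeated per instance.

case "$1" in
  start)
    su - oracle -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
    su - oracle -c "ORACLE_UNQNAME=ORCL $ORACLE_HOME/bin/emctl start dbconsole"
    ;;
  stop)
    su - oracle -c "ORACLE_UNQNAME=ORCL $ORACLE_HOME/bin/emctl stop dbconsole"
    su - oracle -c "$ORACLE_HOME/bin/dbshut $ORACLE_HOME"
    ;;
esac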
I am running a custom script that creates about 100,000 rows of demo data.
The table I am loading into is fairly wide (100 columns) and only has about 10,000 rows at the moment.
The script goes really fast for the first 10K rows (100 inserts per second) and then gets incrementally slower; by 20,000 rows it is doing about 1 row per second. At this rate, it will never finish!
Each insert is a separate statement, using bind variables and wrapped in a single transaction. I've tried dropping the indexes first, but it didn't make a difference.
OEM shows a 100% CPU bottleneck, with no other information I can glean.
During a batch process a record is entered in a detail table as well as a summary table.
The process first checks if a record exists in the summary table for the same group_no; if yes, it updates that record with the newly added amount (sums it), else it inserts a new record. In the detail table it inserts the record directly.
Now, if the batch process runs in parallel, two different sessions (out of many) can insert the same group_no. This is because while the second session inserts a record, the first session inserting the same group_no has not yet committed; so the second session, not knowing the same group_no (101) has already been inserted, inserts another record with the same group_no rather than summing it.
Can it be solved without using a temp table or SELECT FOR UPDATE?
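One way that needs neither a temp table nor SELECT FOR UPDATE is to rely on a unique constraint on summary.group_no: the second session's INSERT waits on the first session's uncommitted row and raises DUP_VAL_ON_INDEX once that session commits, at which point you sum instead. A sketch with invented names, assuming the unique constraint exists:

CREATE OR REPLACE PROCEDURE add_to_summary(p_group_no NUMBER, p_amount NUMBER) IS
BEGIN
  INSERT INTO summary (group_no, amount) VALUES (p_group_no, p_amount);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN          -- row already there: sum instead
    UPDATE summary
       SET amount = amount + p_amount
     WHERE group_no = p_group_no;
END;
/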
I have a table named student_details with columns NAME, ADDRESS and COURSE, with several rows of data already inserted. I have to add one more column, ID, which increments automatically.
I tried to do this using a SEQUENCE, but no ID values got inserted for the already existing rows. How do I write a script that automatically increments and inserts values for the already existing rows as well?
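A sketch of the usual two-step script: backfill the existing rows from the sequence, then let a trigger fill the column for future inserts; the sequence name is a placeholder.

ALTER TABLE student_details ADD (id NUMBER);

UPDATE student_details SET id = student_seq.NEXTVAL;  -- backfill existing rows
COMMIT;

CREATE OR REPLACE TRIGGER student_details_bi
BEFORE INSERT ON student_details
FOR EACH ROW
BEGIN
  SELECT student_seq.NEXTVAL INTO :NEW.id FROM dual;  -- future rows
END;
/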
I was given a task by my manager to keep track of changes on a given table, including the OS user who made them. Should I create a trigger on it (on any update, insert, delete, etc.), or is there a better way of doing it? I think there could be some info already in data dictionary views or something like them.
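Standard auditing may already cover this, since DBA_AUDIT_TRAIL stores the OS username; note it records who did what and when, not the changed values (for values you would still need a trigger capturing SYS_CONTEXT('USERENV', 'OS_USER')). A sketch with a placeholder table, assuming AUDIT_TRAIL=DB is set:

AUDIT INSERT, UPDATE, DELETE ON scott.emp BY ACCESS;

SELECT os_username, username, action_name, timestamp
  FROM dba_audit_trail
 WHERE obj_name = 'EMP';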
I have a Bash script that counts the rows of a CSV file, extracts the fields and builds inserts in a SQL file. Then it logs into SQL*Plus and calls the insert file. The SQL file looks like this:
I rely on WHENEVER SQLERROR EXIT for things to go the right path. However, sometimes, because of the contents of the CSV files (which I can't control), some rows don't get inserted, but SQL*Plus doesn't see that as an error, doesn't exit, and I end up with the wrong number of rows being reported in the second insert. Is there some kind of if-then-else construct in SQL? After all the inserts are made, do a SELECT COUNT(*) and compare that number to the one reported by the script; if they match, make the final insert and commit, else exit.
A single master schema which many developers access; all share the same password.
Now I would like to trace all the changes made by each user, so I create individual users for them all and grant permission to access that schema. Is there a possibility of auditing the changes made by each user in that particular schema?
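Yes; and besides object auditing, proxy authentication fits this exactly: each developer keeps an individual login but lands in the shared schema, so audit records identify the real person. A sketch with invented names:

CREATE USER jsmith IDENTIFIED BY own_password;
GRANT CREATE SESSION TO jsmith;
ALTER USER master_schema GRANT CONNECT THROUGH jsmith;

-- connect as: sqlplus jsmith[master_schema]/own_password
-- SYS_CONTEXT('USERENV', 'PROXY_USER') then reports JSMITH inside the session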