I am trying to insert a huge volume of data into another huge table, and it is taking around 2-3 hours. See my query below:
INSERT /*+ APPEND */ /*+ NOLOGGING */ INTO DB1.Table1 SELECT * FROM DB2.Table2;
COMMIT;
Both Table1 and Table2 have the same structure. Table1 is the master table holding 100 billion records and Table2 holds 30 million records. This is a direct-path insert, and the operation is carried out every day.
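One thing worth noting: Oracle reads hints only from the first comment after the INSERT keyword, so the /*+ NOLOGGING */ comment above is ignored; NOLOGGING is not a hint at all but a segment attribute. A minimal sketch of the usual direct-path pattern, assuming a degree of parallelism of 4 is acceptable and the database is not in FORCE LOGGING mode:

-- NOLOGGING is a table attribute, not a hint; set it on the target
ALTER TABLE DB1.Table1 NOLOGGING;

-- all hints must sit in a single comment right after the verb;
-- parallel DML also has to be enabled at session level to take effect
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t, 4) */ INTO DB1.Table1 t
SELECT * FROM DB2.Table2;
COMMIT;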
I'm extracting/retrieving data from the Oracle database using a Java application, and it's a bit slow. However, when I retrieve from SQL Server it's faster than Oracle.
If a table with a primary key is empty (after a truncate), DML against it (insert, update) is very quick, but if the table has many rows, about 10,000,000, the DML is very slow. Why?
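The usual culprit is index maintenance: every insert must also update the primary-key index, whose size and depth grow with the row count. For bulk loads, one common mitigation (table and constraint names here are placeholders) is to disable the constraint, load, then re-enable it:

-- per-row index maintenance is skipped while the constraint is disabled
ALTER TABLE big_table DISABLE CONSTRAINT big_table_pk;
INSERT /*+ APPEND */ INTO big_table SELECT * FROM staging_table;
COMMIT;
-- rebuilds the index once, in bulk; fails if duplicates were loaded
ALTER TABLE big_table ENABLE CONSTRAINT big_table_pk;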
I am facing a problem while inserting data into a core table from staging (both the core table and staging are in the same database but in different schemas).
I am using the command below:
INSERT /*+ APPEND PARALLEL(core_table, DEFAULT, DEFAULT) */ INTO core_table SELECT * FROM staging;
The core table is quite big: it contains millions of records, is partitioned by date, and has stale statistics from 2007. Fetching data from staging is very fast, approximately 1 million records in 2 minutes. We insert one day of data into the core table from staging daily, and it is taking 3 hours.
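Two things stand out: 2007-era statistics can badly mislead the optimizer on a partitioned table, and a PARALLEL hint on an insert does nothing for the DML side unless parallel DML is enabled in the session. A sketch, with the schema name assumed:

-- regather the stale statistics
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'CORE_SCHEMA',  -- owner assumed
                                tabname => 'CORE_TABLE',
                                cascade => TRUE);
END;
/
-- without this, only the SELECT side of the insert runs in parallel
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(core_table, DEFAULT, DEFAULT) */ INTO core_table
SELECT * FROM staging;
COMMIT;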
CREATE OR REPLACE PROCEDURE fast_proc (p_rows OUT NUMBER) IS
  TYPE object_id_tab IS TABLE OF all_objects.object_name%TYPE INDEX BY BINARY_INTEGER
  lt_object_id object_id_tab;
  CURSOR c IS
[Code]....
Warning: Procedure created with compilation errors.
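The declaration above is missing a semicolon after the INDEX BY clause, which alone is enough to produce that warning. Since the rest of the body is elided, here is a minimal compiling sketch of the same BULK COLLECT pattern (the cursor body is an assumption):

CREATE OR REPLACE PROCEDURE fast_proc (p_rows OUT NUMBER) IS
  TYPE object_id_tab IS TABLE OF all_objects.object_name%TYPE
    INDEX BY BINARY_INTEGER;               -- the semicolon that was missing
  lt_object_id object_id_tab;
  CURSOR c IS SELECT object_name FROM all_objects;
BEGIN
  OPEN c;
  FETCH c BULK COLLECT INTO lt_object_id;
  CLOSE c;
  p_rows := lt_object_id.COUNT;
END fast_proc;
/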
I am trying to access and modify data in a table in another schema which contains 80,000-90,000 records. My procedure is taking about 30 minutes to complete the operation. I need faster access to, and updating of, the table data.
Details: I have two schemas, TEST and PROD, and I am running the code below from the TEST schema.

/* CODE START HERE */
DECLARE
  exc_bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT (exc_bulk_errors, -24381);
  v_block_count NUMBER := 1000;
[Code]....
The above code takes about 30 minutes to process.
I have also tried another approach: creating a procedure in the PROD schema to update the COMPONENT_MASTER table, and calling that procedure from the above code, passing in the component code.
/* PROCEDURE CALL FROM ABOVE CODE IN TEST SCHEMA */
PROD.PROCEDURE_TO_UPDATE(v_comp_code);
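The ORA-24381 handler in the declarations suggests a FORALL ... SAVE EXCEPTIONS update, which is normally the fastest PL/SQL approach at this row count; calling a procedure once per component code reintroduces row-by-row overhead. A sketch of the bulk pattern, with the staging source and the COMPONENT_MASTER columns assumed:

DECLARE
  exc_bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT (exc_bulk_errors, -24381);
  TYPE code_tab IS TABLE OF prod.component_master.comp_code%TYPE;  -- column assumed
  v_codes code_tab;
BEGIN
  SELECT comp_code BULK COLLECT INTO v_codes
  FROM   test.component_staging;                                   -- source assumed
  FORALL i IN 1 .. v_codes.COUNT SAVE EXCEPTIONS
    UPDATE prod.component_master
       SET status = 'PROCESSED'                                    -- update assumed
     WHERE comp_code = v_codes(i);
  COMMIT;
EXCEPTION
  WHEN exc_bulk_errors THEN
    FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Failed at index ' ||
        SQL%BULK_EXCEPTIONS(i).ERROR_INDEX || ': ' ||
        SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
    END LOOP;
END;
/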
I used DECODE and a pivot insert (INSERT ALL) for this, but both attempts failed.
SQL>INSERT INTO test22 (no,name) SELECT DECODE(col1,'n',col2),DECODE(col1,'name',col2) FROM test22p;
SQL> sno      sname
     -------- --------
     1        null
     null     arun
AND
SQL> INSERT ALL
  2    INTO test22 VALUES(no)
  3    INTO test22 VALUES(name)
  4  SELECT DECODE(col1,'n',col2), DECODE(col1,'name',col2) FROM test22p;
  INTO test22 VALUES(name)
                    *
ERROR at line 3:
ORA-00904: "NAME": invalid identifier
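The ORA-00904 arises because the INSERT ALL clauses refer to columns named no and name, but the SELECT never gives its DECODE expressions those aliases. Aliasing them and collapsing the name/value pairs with MAX produces the single pivoted row; a sketch, assuming all rows in test22p belong to one record (with several records you would group by a record key):

INSERT INTO test22 (no, name)
SELECT MAX(DECODE(col1, 'n',    col2)) AS no,
       MAX(DECODE(col1, 'name', col2)) AS name
FROM   test22p;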
I need to insert data into Table A from Table B. Most of the fields are identical, but Table A may have some additional fields.
ex: Table A: a, b, c, d, e, f. Table B: a, b, c, g, h.
How can I build this insert using user_tab_columns in a cursor, taking the table names as input? It needs to be configurable and reusable, rather than me mentioning all the fields in my logic.
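A sketch of one way to do it, assuming both tables live in the current schema and that 11g's LISTAGG is available: derive the intersection of column names from user_tab_columns, then run one dynamic insert.

CREATE OR REPLACE PROCEDURE copy_common_cols (p_src IN VARCHAR2,
                                              p_tgt IN VARCHAR2) IS
  v_cols VARCHAR2(4000);
BEGIN
  -- columns present in both source and target
  SELECT LISTAGG(s.column_name, ', ') WITHIN GROUP (ORDER BY s.column_id)
    INTO v_cols
    FROM user_tab_columns s
    JOIN user_tab_columns t ON t.column_name = s.column_name
   WHERE s.table_name = UPPER(p_src)
     AND t.table_name = UPPER(p_tgt);

  EXECUTE IMMEDIATE
    'INSERT INTO ' || p_tgt || ' (' || v_cols || ') ' ||
    'SELECT ' || v_cols || ' FROM ' || p_src;
END copy_common_cols;
/
-- usage: EXEC copy_common_cols('TABLE_B', 'TABLE_A');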
I am trying to capture a column value into a variable from a trigger.
Here is the code that I have:
CREATE OR REPLACE TRIGGER buyer_after_update
AFTER UPDATE ON buyer
FOR EACH ROW
DECLARE
  v_key VARCHAR2(10);
BEGIN
  SELECT id INTO v_key FROM buyer;
  INSERT INTO message_log_table (table_name, message_comments)
  VALUES ('Buyer', 'Buyer ' || v_key || ' has been updated');
END;
/
When I run the above I get the following compiler error:
Since ID is defined in my BUYER table I do not understand what the error means.
Here is my create table statement:
CREATE TABLE BUYER (
  ID       VARCHAR(50) NOT NULL PRIMARY KEY,
  FNAME    VARCHAR(50) NOT NULL,
  LNAME    VARCHAR(50) NOT NULL,
  ADDRESS  VARCHAR(50) NOT NULL,
  CITY     VARCHAR(50) NOT NULL,
  STATE    VARCHAR(2)  NOT NULL,
  ZIP_CODE NUMBER(5)   NOT NULL
);
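Whatever the exact compiler message was, two problems stand out: a row-level trigger on BUYER cannot SELECT from BUYER (that is the mutating-table error, ORA-04091, at runtime), and v_key is VARCHAR2(10) while ID is VARCHAR(50). Inside a row trigger the updated row is read through the :NEW or :OLD correlation names; a sketch:

CREATE OR REPLACE TRIGGER buyer_after_update
AFTER UPDATE ON buyer
FOR EACH ROW
BEGIN
  -- :NEW.id is the updated row's key; no query against BUYER needed
  INSERT INTO message_log_table (table_name, message_comments)
  VALUES ('Buyer', 'Buyer ' || :NEW.id || ' has been updated');
END;
/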
Now I have to insert this XML into the DB. The table consists of the following columns: (row number, property name, value). The expected output is (1, student name, Raymond), (1, studentid, 1), (1, studentAge, 11), (1, Studentmark, 0). The challenges here are:
1. How do I get the tag names and populate the property name column?
2. The number of properties for a student can vary. How do I deal with that?
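One sketch using 11g XMLTABLE (the element names and document shape are assumptions, since the XML itself is not shown): iterating the wildcard path ./* yields every child element, so a variable property list is handled naturally, and local-name(.) recovers the tag name for the property column.

INSERT INTO student_props (row_num, prop_name, prop_value)  -- table assumed
SELECT 1 AS row_num,        -- per-student key; derive it when there are many students
       x.prop_name,
       x.prop_value
FROM   XMLTABLE('/student/*'
         PASSING XMLTYPE('<student>
                            <StudentName>Raymond</StudentName>
                            <StudentId>1</StudentId>
                            <StudentAge>11</StudentAge>
                            <StudentMark>0</StudentMark>
                          </student>')
         COLUMNS prop_name  VARCHAR2(30) PATH 'local-name(.)',
                 prop_value VARCHAR2(50) PATH 'text()') x;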
I am exploring the differences between OBJECT and RECORD. As I am still learning, I found that both are structures which group elements (or columns) of different datatypes; one is used in SQL and the other in PL/SQL. Below I am trying to insert data into a table of an object type, but I am unsuccessful.
CREATE OR REPLACE type sam as OBJECT ( v1 NUMBER, v2 VARCHAR2(20 CHAR) );
---Nested Table---
create or replace type t_sam as table of sam;
--Inserting data----
insert into table(t_sam) values(sam(10,'Dsouza'));
Error Message: [code]........
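The insert fails because t_sam is only a type; TABLE(...) expects a nested-table expression, not a type name. You insert into a table created from the type. Two sketches:

-- Option 1: an object table of SAM
CREATE TABLE sam_tab OF sam;
INSERT INTO sam_tab VALUES (sam(10, 'Dsouza'));

-- Option 2: a table with a nested-table column of type t_sam
CREATE TABLE person (
  id   NUMBER,
  sams t_sam
) NESTED TABLE sams STORE AS sams_nt;

INSERT INTO person VALUES (1, t_sam(sam(10, 'Dsouza')));

-- rows inside the nested collection are then addressed with TABLE()
INSERT INTO TABLE(SELECT p.sams FROM person p WHERE p.id = 1)
VALUES (sam(20, 'Smith'));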
I have a form that has one text box in it, and I want this value to be inserted into a table called StaffName, which has only one column, but I'm unsure how to get my SQL statement to pick up this value.
I tried:
BEGIN
  INSERT INTO StaffName VALUES ('addstaffmember');
END;
'addstaffmember' is the name of the text box. This statement compiled, but when I ran the form and clicked Save, it did not work.
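As written, the statement inserts the literal string 'addstaffmember' rather than the item's contents. In Forms, an item value is referenced as :block.item_name, without quotes; a sketch, with the block name assumed:

BEGIN
  -- :staff_block.addstaffmember reads the text box value (block name assumed)
  INSERT INTO StaffName VALUES (:staff_block.addstaffmember);
  COMMIT;  -- or rely on the form's own save (COMMIT_FORM)
END;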
How can I read Excel data and insert it into tables without using SQL*Loader? I tried using the OLE2 package, but I get a non-Oracle exception. I even tried CSV format, but I couldn't make it work.
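Since a CSV export was already attempted, one server-side option that avoids running the sqlldr utility is an external table: the file is queried like a table and inserted from directly. A sketch, with the directory path, file name, and columns all assumed:

CREATE OR REPLACE DIRECTORY data_dir AS '/u01/app/loads';  -- path assumed

CREATE TABLE staff_ext (
  id   NUMBER,
  name VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  )
  LOCATION ('staff.csv')                                   -- file name assumed
);

INSERT INTO staff SELECT * FROM staff_ext;                 -- target assumed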
Our application uses two instances: one for the live, active data and the other for reports data. We have a process which moves the data from the live instance to the reports instance every night. In a single-DB environment the process works without any issues. However, when we move to the RAC environment, the reports DB's large table gets locked during the insert and we are unable to insert data into the reports DB.
What we are performing is:
Insert into my_table_rpt select * from my_table_live@db_link_to_livedb;
Issues:
my_table_rpt gets locked
We found a workaround: disable table locking on the destination and re-enable it after the insert.
ALTER TABLE my_table_rpt DISABLE TABLE LOCK;
Insert the data into the reports database table
Then
ALTER TABLE my_table_rpt ENABLE TABLE LOCK;
Question:
Why does the large destination table (my_table_rpt) get locked in the RAC environment?
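To answer that, it helps to see who actually holds the TM lock; in RAC the GV$ views aggregate every instance. A diagnostic sketch, with the owner name assumed:

SELECT l.inst_id, s.sid, s.serial#, s.username, l.type, l.lmode, l.request
FROM   gv$lock l
JOIN   gv$session s ON s.sid = l.sid AND s.inst_id = l.inst_id
WHERE  l.type = 'TM'
AND    l.id1 = (SELECT object_id
                FROM   dba_objects
                WHERE  owner = 'RPT_SCHEMA'              -- owner assumed
                AND    object_name = 'MY_TABLE_RPT');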
We are busy migrating one database from a Windows 2003 platform with Oracle 10g to Linux with Oracle 11gR2.
We exported/imported the data and it looks OK, and the explain plans look the same. But our heavy batches are twice as slow as on the Windows box. The two top wait events are disk related, sequential and scattered reads, and account for 90% of the batch job's time. I read some white papers and found that using ASM can be bad in some cases, and the same goes for Linux with this particular kind of scattered read. I was just wondering whether changing the SGA to 10GB instead of 4GB, to get more cache, would speed things up.
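Raising the buffer cache is easy to test before drawing conclusions about ASM. A sketch, assuming an spfile is in use, automatic SGA management, and that 10GB fits comfortably in physical RAM:

ALTER SYSTEM SET sga_max_size = 10G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target   = 10G SCOPE = SPFILE;
-- sga_max_size is not dynamic; a restart is required
SHUTDOWN IMMEDIATE
STARTUP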
I am running one simple DELETE statement on one table with ROWNUM < 10000, but it is taking nearly 10 to 15 minutes. The table has no child-table rows and no triggers.
We have an MV which fetches data from around 27 tables with 26 joins, of which 25 are outer joins. Some tables in the query are referred to multiple times under different aliases, so the actual number of physical tables used is 18. This MV takes about 50 minutes to refresh via the complete refresh mechanism. We decided to make it fast refresh and made these configurations:
- Created MV logs based on rowid for each of the base tables.
- Recreated the MV using FAST refresh, with the primary key option enabled.
- Pulled the rowid for all these tables into the select column list.
Even after making all the recommendations suggested by Oracle for fast refresh MVs, we are still getting a refresh time of around 65 minutes (the refresh time increased!). We already have indexes built on all the join columns of the base tables. What else do we need to do to make this a "fast" refresh MV?
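Before tuning further, it is worth asking Oracle which refresh capabilities the MV really has: DBMS_MVIEW.EXPLAIN_MVIEW writes its findings, including why a capability is not possible, to MV_CAPABILITIES_TABLE (created by $ORACLE_HOME/rdbms/admin/utlxmv.sql). A sketch, with the MV name assumed:

BEGIN
  DBMS_MVIEW.EXPLAIN_MVIEW('RPT_SCHEMA.MY_BIG_MV');  -- MV name assumed
END;
/
SELECT capability_name, possible, msgtxt
FROM   mv_capabilities_table
WHERE  mvname = 'MY_BIG_MV';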
My ERP application responds quickly when running reports or saving entries if Oracle 10g Express Edition (XE) is installed. But on Oracle 10g Enterprise Edition or Standard Edition the same application runs very slowly.
We are using Oracle 9i on an AIX server. While customers were accessing the database, the power was accidentally shut down. We restarted the server and the Oracle database, and everything resumed successfully.
However, when doing "Payments by the customer" it now takes a long time to insert even a single payment record into the database. The database is live and our customers are very frustrated.
I am using 11gR2 on a Windows server. The query below runs many times a day and badly affects database performance. I don't know much about this query.
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT',
               'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp,
       COUNT(username) AS failed_count
FROM   sys.dba_audit_session
WHERE  returncode != 0
AND    TO_CHAR(timestamp, 'YYYY-MM-DD HH24:MI:SS') >=
       TO_CHAR(current_timestamp - TO_DSINTERVAL('0 0:30:00'),
               'YYYY-MM-DD HH24:MI:SS');
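This looks like the Enterprise Manager failed-login metric, which reads SYS.AUD$ through DBA_AUDIT_SESSION; on an audit trail that has never been purged it gets slow. On 11gR2 the supported purge is DBMS_AUDIT_MGMT; a sketch, with a 90-day retention assumed:

BEGIN
  -- assumes DBMS_AUDIT_MGMT.INIT_CLEANUP has been run once for this trail
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);  -- retention assumed
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => TRUE);
END;
/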
The product I work on requires a query to tell us what tables are dependent on certain types.
SELECT dba_tab_cols.owner,
       dba_tab_cols.table_name,
       dba_tab_cols.data_type_owner,
       dba_tab_cols.data_type
FROM   dba_tab_cols
JOIN   dba_types
  ON   dba_types.owner = dba_tab_cols.data_type_owner
 AND   dba_types.type_name = dba_tab_cols.data_type
WHERE  (dba_types.owner IN ('SCHEMA1', 'SCHEMA2'......))
I find this query pretty slow. I think it is because data_type_owner in dba_tab_cols is not indexed. Adding an index is not an option, because users expect our product to be read-only.
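One alternative worth comparing: dba_dependencies records a table's dependency on a type directly, which may avoid the expensive join against dba_tab_cols; a sketch:

SELECT owner,
       name            AS table_name,
       referenced_owner,
       referenced_name AS data_type
FROM   dba_dependencies
WHERE  type            = 'TABLE'
AND    referenced_type = 'TYPE'
AND    referenced_owner IN ('SCHEMA1', 'SCHEMA2');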
I am just trying to import some information from Excel into Oracle using OLE2 in Oracle Forms 6i, but it is very slow when importing around 10k lines. Is there anything to optimize this? The code used follows...
A few days ago my database server lost access to its storage box; I rebooted it and afterwards it worked fine. But now the DB import process is too slow. Previously a 100GB DB import completed within 10 hours when the server was running normally. Now it has been working for 2 days and has not completed.
How do I investigate this issue? Maybe I missed increasing some parameters on the server or in Oracle?
Here is brief info about my server:
RAM is 16GB, swap size is 16GB, and the CPU has 12 cores.
SQL> show sga;
Total System Global Area 4294967296 bytes
Fixed Size                  1984144 bytes
Variable Size             369105264 bytes
Database Buffers         3909091328 bytes
Redo Buffers               14786560 bytes
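A first diagnostic step is to see what the import session is actually waiting on; a sketch (the import session's SID is assumed to be known, e.g. from v$session):

SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = :import_sid        -- SID of the import session, assumed known
ORDER  BY time_waited DESC;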
I have an Oracle database (9.2.0.7) installed on an HP-UX server. When accessing this database from another HP-UX or Linux server, the connection is fine. But when connecting from a Windows-based client, it is very slow (almost 1 minute to return the result of a 'select count(*)'-style query, which is immediate from the Linux client).
Here are some facts I can add:
- Clients and servers are on the same network segment (it is not a network matter)
- No matter which client version I use, there is no difference
- I tried to see what happens on the Oracle server when running my sample query, using the tusc command: the server performs exactly the same actions whether the query is sent from a Linux client or a Windows client
- The only relevant difference seems to be the client OS
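Given that the server-side work is identical, client-side SQL*Net tracing usually shows where the minute goes (name resolution is a frequent offender). A sketch of the sqlnet.ora entries on the Windows client; the trace directory is an assumption:

# sqlnet.ora on the Windows client
TRACE_LEVEL_CLIENT = 16
TRACE_DIRECTORY_CLIENT = C:\temp\sqlnet_trace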
I have a query which takes 5 minutes when run through the Java app, which uses Hibernate. I've cut and pasted the SQL directly from the Hibernate trace file and run it in SQL*Plus/SQL Developer, where it runs instantly (0.01 seconds), uses the index correctly, and the explain plan looks good (see below). I don't know how to get the explain plan when it runs through the app, or why it should be any different anyway, as the query is identical.
My query is as follows:
SELECT /*+ INDEX (SPD SPD_SEQ_CODE) */ SPD.*
FROM   SEQ_ADDR_DATA SPD, SEQ_ADDR_LEVELS SPL
WHERE  SPD.SPVR_ID = '10'
AND    SPL.SPLE_ID = SPD.SPLE_ID
AND    SPL.SPLE_LEVEL <= '2'
AND    SPDA_ID NOT IN [code]....
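The plan the application's cursor actually used can be pulled from the shared pool rather than re-explained; differences usually come from bind variables (or their datatypes) versus the literals in the pasted text. A sketch:

-- find the statement as executed by the app
SELECT sql_id, child_number, plan_hash_value
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ INDEX (SPD SPD_SEQ_CODE) */%';

-- show the real runtime plan for that cursor
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));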