Online backup is done through RMAN. Suppose I am taking an online backup of the full database. During the backup, users are inserting/deleting/modifying data, and these changes are getting stored in the archived redo logs. Once the database backup is finished, how are these archives applied to the database to bring it up to date?
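For reference, the archived logs are not applied right after the backup finishes; they are applied during media recovery, when the backup is restored. A minimal RMAN sketch of that sequence, assuming the database is mounted from a suitable controlfile:

RESTORE DATABASE;
RECOVER DATABASE;    -- RMAN restores the needed archived logs and applies their changes
ALTER DATABASE OPEN;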
I am new to DBA activities. I am working on Oracle version 11.2.0.1.0.
My database keeps growing, so as a security measure I am asked to take a backup of the database periodically, without shutting the database down. Since this task has to be done periodically, I have to schedule a job for it. I know how to take an export of the database, but I am not sure about the periodic backups, and not sure how to proceed with this activity.
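A minimal sketch of how such a periodic hot backup is often scheduled: a shell wrapper around RMAN, driven by cron. The script name, paths and ORACLE_SID below are assumptions, and DELETE OBSOLETE presumes a retention policy has been configured:

#!/bin/sh
# nightly_backup.sh : hot backup of database plus archived logs
export ORACLE_SID=orcl
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
rman target / <<EOF
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
EXIT
EOF

A crontab entry to run it nightly at 2 a.m. might look like:

0 2 * * * /home/oracle/nightly_backup.sh >> /home/oracle/nightly_backup.log 2>&1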
I took a backup of the datafiles, archived log files and controlfile through RMAN. Now, I want 3 backup files to be created as the output of the RMAN operation, namely:
1 backup file for all the datafiles
1 backup file for all the archived log files
1 backup file for the controlfile
This has to hold regardless of the database size. Is it possible to do so? How can I write an RMAN script to accomplish this task?
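A sketch of a script along these lines. The format strings and paths are placeholders; with a single disk channel and a high FILESPERSET, each BACKUP command below produces one backup set regardless of database size, provided no MAXPIECESIZE limit splits the pieces:

RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  BACKUP AS BACKUPSET DATABASE       FILESPERSET 1024 FORMAT '/backup/df_%d_%T.bkp';
  BACKUP AS BACKUPSET ARCHIVELOG ALL FILESPERSET 1024 FORMAT '/backup/arc_%d_%T.bkp';
  BACKUP CURRENT CONTROLFILE FORMAT '/backup/ctl_%d_%T.bkp';
  RELEASE CHANNEL c1;
}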
We are firing a normal DROP command on our database; the database version is 10.2.0.4 and it is running on AIX v5. The command is taking more time than usual.
When I monitor the session I can see that a call is being made to the procedure "aw_drop_proc". Could this be what is taking more time than usual?
We do not have any partitions on the nested tables. We have a pack of tables, and we drop this pack through a procedure. The pack comprises nested tables and normal tables. Dropping a nested table takes around 6 seconds (for a table with no rows), while a normal table (with no rows) takes 17 milliseconds. We do have a partition on the normal table.
The same operation on Windows takes much less time compared to AIX.
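A quick way to confirm what the session is actually executing and waiting on while the drop runs; a sketch against the standard v$ views, where :sid is the session in question:

SELECT s.sid, s.event, s.seconds_in_wait, q.sql_text
FROM   v$session s
JOIN   v$sql q ON q.sql_id = s.sql_id
              AND q.child_number = s.sql_child_number
WHERE  s.sid = :sid;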
update tab1
set    col1 = (select col2 from tab2 where tab1.id = tab2.id);

Table 1 has around 10,000 rows; table 2 has around 1,700,000 rows and has a primary key on column id. This query takes around 20 seconds to execute. I checked the explain plan, and most of the time goes to table access by index rowid. I checked the stats on tab2; they are just three days old.
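One commonly suggested rewrite is a MERGE, which drives a single join rather than one correlated index lookup per row. Note the semantics differ slightly from the original UPDATE: rows in tab1 with no match in tab2 keep their old col1 here, whereas the correlated subquery would set them to NULL:

MERGE INTO tab1 t1
USING (SELECT id, col2 FROM tab2) t2
ON    (t1.id = t2.id)
WHEN MATCHED THEN
  UPDATE SET t1.col1 = t2.col2;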
I want to use a function in a join clause, so I went for a pipelined function (using a FOR loop to get the records and one more loop to fetch them into a table type variable). I achieved what I required, but the problem is that it takes too long to fetch the data. Is there any other approach that returns table records without a pipelined function?
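A sketch of the non-pipelined alternative: a function that BULK COLLECTs into a SQL-level collection type and returns it, which can then be joined via TABLE(). The type, function and table names here are hypothetical, and whether this is faster depends on data volume, since the whole collection is materialized in memory first:

CREATE OR REPLACE TYPE t_num_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION get_ids RETURN t_num_tab IS
  v_ids t_num_tab;
BEGIN
  -- fill the collection in one round trip instead of row-by-row piping
  SELECT id BULK COLLECT INTO v_ids FROM source_tab;
  RETURN v_ids;
END;
/
SELECT t.*
FROM   TABLE(get_ids()) ids
JOIN   target_tab t ON t.id = ids.column_value;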
We have data archive scripts; these scripts move data for a date range to a different table. So the script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion takes too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is going for a full table scan, but the predicate itself is the primary key. More info below:
Plan hash value: 2798378986

-------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT       |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   1 |  DELETE                | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 | 1239K |    83   (0)| 00:00:02 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
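For reference, the two-part archive pattern described above in sketch form; the table, column and bind names are hypothetical stand-ins for the real script:

-- part 1: copy the date range into the archive table
INSERT INTO mon_txns_arch
  SELECT * FROM mon_txns
  WHERE  txn_date >= :start_date AND txn_date < :end_date;

-- part 2: delete the copied rows from the original table
DELETE FROM mon_txns
WHERE  txn_date >= :start_date AND txn_date < :end_date;

COMMIT;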
I am not able to take a backup of a table which is partitioned. Whenever I start the backup it hangs with the wait event "asynch descriptor resize", so I tried to create another table from the original table (like CREATE TABLE BASE_TABLE AS SELECT * FROM BASE_TABLE), but it also hangs with the same wait event.
I have a table called transaction_dw, and I need to select all records that have had an account balance below 0 in the past 6 months. The initial query I tried was:
select account_balance, timestamp from transaction_dw where account_balance < 0 and timestamp between sysdate and sysdate - 6;
but this only takes 6 days off sysdate rather than 6 months. How can I get it to take off 6 months?
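A minimal fix is ADD_MONTHS, remembering that BETWEEN expects the lower bound first (the original query has the bounds reversed, so it would match nothing):

select account_balance, timestamp
from   transaction_dw
where  account_balance < 0
and    timestamp between add_months(sysdate, -6) and sysdate;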
I have 2 separate RMAN backups. One backup is of only the datafiles, spfile and controlfile, which I am able to restore and recover without any problem; call it bkp1, taken at 10 am.
The other backup set is of only the archived log files from the same day, but from a later time than the datafile backup; call it bkp2, taken at 8 pm. If I restore and recover bkp1 and then try to restore bkp2, it gives an error that the datafile exists (possibly due to the fact that both backups contain a controlfile).
If I just restore bkp1 (without recovering) and then try to restore bkp2, so that I can do one recover at a time, it gives a datafile permission error (again possibly because both backups contain a controlfile).
I want to restore the database up to the 8 pm point in time. How can I use both bkp1 and bkp2 to do it (restore the datafiles and apply all the archived logs on top)?
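A sketch of the usual approach: catalog both backup locations so RMAN knows about all the pieces, then run a single restore/recover up to the target time. The paths and the timestamp below are placeholders, and the database is assumed to be mounted:

CATALOG START WITH '/backup/bkp1/';
CATALOG START WITH '/backup/bkp2/';
RUN {
  SET UNTIL TIME "TO_DATE('27-02-2013 20:00:00','DD-MM-YYYY HH24:MI:SS')";
  RESTORE DATABASE;   -- datafiles come from bkp1
  RECOVER DATABASE;   -- archived logs from bkp2 are restored and applied
}
ALTER DATABASE OPEN RESETLOGS;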
In one of my applications the Oracle Net connection takes 30 to 40 ms (tnsping servicename), but the actual connection always takes more than 45 ms, and my application requires under 10 to 20 ms.
Can I use any other connection method, like hostname or ezconnect, etc.?
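For reference, an EZCONNECT string bypasses tnsnames.ora name resolution entirely; the credentials, host, port and service name below are placeholders:

sqlplus scott/tiger@//dbhost01:1521/orcl

That said, the bulk of connect latency is usually session setup rather than name resolution, so if the application genuinely needs 10-20 ms acquisition, a connection pool that reuses sessions is the usual answer rather than a faster connect string.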
I am working with a new client and am still waiting on access to systems, so I'm a bit hampered in looking at details of the database and environment. The database is OLTP, and typical response time for returning result sets is under 2 seconds, often less than 1 second.
A simple SELECT on a few columns of v$session_longops (no subqueries, group by, having, etc.) ran for more than 5 minutes.
An external reporting application (SQL Server Reporting Services) sends in some comma-separated parameter values, which have to be queried against a table with 6-10 million records.
The comma-separated value can be 5000-6000 characters long, based on user input in the front-end application. The application sends the value comma-separated, i.e. like
'AA1-11101,AA2-34346,AA4-534399,.....'. At a time the application can send up to 500 values, each of length 10, i.e. a maximum length of up to 5000 characters. I used a CLOB to handle this, since the length can be 5000 and a VARCHAR2 literal can only handle 4000 characters.
But the time taken to verify the CLOB against the table using INSTR is much more than with VARCHAR2.
I found that VARCHAR2 doesn't take much time. Is it a good idea to use VARCHAR2 as the parameter in the PL/SQL procedure instead of CLOB, since a PL/SQL VARCHAR2 can handle values up to 32,767 bytes long?
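A common alternative to running INSTR against the whole string is to split the CSV into rows once and join, so the 6-10 million row table is probed by value. A sketch assuming an 11g database (REGEXP_COUNT is 11g+); :p_csv, big_tab and code are hypothetical names:

SELECT t.*
FROM   big_tab t
JOIN  (SELECT REGEXP_SUBSTR(:p_csv, '[^,]+', 1, LEVEL) AS code
       FROM   dual
       CONNECT BY LEVEL <= REGEXP_COUNT(:p_csv, '[^,]+')) v
  ON   t.code = v.code;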
I want to know how I can find which query is taking more time. For example, some queries are run from Unix, Java, TOAD and SQL*Plus, and one query is taking much more time to execute; how can I identify that query and get all its details?
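One starting point is v$sql, which covers cached statements from every client (Unix scripts, Java, TOAD, SQL*Plus alike); a sketch ranking them by total elapsed time:

SELECT *
FROM  (SELECT sql_id, executions,
              ROUND(elapsed_time / 1e6, 1) AS elapsed_secs,  -- elapsed_time is in microseconds
              sql_text
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE rownum <= 10;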
I have a query which executes fast in the dev environment but takes a very long time in the QA environment. What criteria apply when this behaviour occurs? QA has more data than dev, but it is still taking a long time even for 1 row, when I run the query with rownum <= 1. So what should I check here?
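A concrete first check is to compare the actual execution plans in the two environments, for example with DBMS_XPLAN against the cursor cache (substitute the statement's real sql_id for the &sql_id placeholder):

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));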
I have a session on a system here that has been stuck on a DELETE statement for a very long time, and the session is pegging the CPU. According to TOAD, here is the current statement:
DELETE FROM WORKFLOW WHERE ID = :B1
ID is the primary key of the table. Here are some relevant stats, also from TOAD's session browser:
Elapsed time = 35507986900
CPU time     = 35531815481
Buffer gets  = 972040769
Disk reads   = 951289273
Executions   = 71462
I'm not sure I understand "executions", because from the information I have from the people who initiated this, this particular delete should only be occurring 30 times; maybe that stat means something other than what I think it does. I also ran a trace for 30 seconds using:
TKPROF: Release 11.2.0.1.0 - Development on Wed Feb 27 04:04:50 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
The resulting file is attached. The offending query in the tkprof output, based on my interpretation, is the select from the CMTE00 table, which contains 2,344,482 rows and has no index on the workflow_id column. The relationship between the CMTE00 and WORKFLOW tables is 1 to 1. There is a foreign key on CMTE00 pointing to the primary key of WORKFLOW, which is what I assume initiated this query; I assume this is Oracle checking referential integrity, since our code is not executing that statement. Also of interest: prior to this delete statement, the corresponding entry in CMTE00 was deleted in the same transaction. Google searching "db file scattered read" led me to one of your articles (Don, if you read this), which appears to indicate that individual blocks are being fetched off the disk, and this is what is taking up all the time.
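If the missing index on the foreign-key column is indeed what forces Oracle to scan CMTE00 for every parent-row delete, the standard remedy is to index it. A minimal sketch, using the workflow_id column named above (the index name is a placeholder):

CREATE INDEX cmte00_workflow_id_ix ON cmte00 (workflow_id);

With that index in place, each referential-integrity check against CMTE00 becomes an index probe instead of a scan of 2,344,482 rows.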