I am interested in the fastest way to access all the data in a physical block. What is the quickest way to fetch a block's rows using the rowid? I found this script, but I would like to know if there is a faster way:
select * from table_name t
WHERE ROWID between 'AAAUaOAAEAAHkJiAAA' and 'AAAUaOAAEAAHkJiAA8';
where 'AAAUaOAAEAAHkJiAAA' is the first rowid in the block and 'AAAUaOAAEAAHkJiAA8' is the last one.
My question is: can I retrieve all the data in one block more quickly than with this query?
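If you only have one rowid from the block, one way (a sketch, not necessarily faster than the BETWEEN above) to cover the whole block without knowing its last rowid is to build the two boundary rowids with DBMS_ROWID, using row slot 0 and a slot number larger than any block can hold. A rowid range over a single block is already about as cheap a table access as Oracle can do, so this mainly saves you working out the boundary rowids rather than speeding up the read:

select *
  from table_name t
 where rowid between
          dbms_rowid.rowid_create(1,
             dbms_rowid.rowid_object(chartorowid('AAAUaOAAEAAHkJiAAA')),
             dbms_rowid.rowid_relative_fno(chartorowid('AAAUaOAAEAAHkJiAAA')),
             dbms_rowid.rowid_block_number(chartorowid('AAAUaOAAEAAHkJiAAA')),
             0)                                    -- first possible slot in the block
      and dbms_rowid.rowid_create(1,
             dbms_rowid.rowid_object(chartorowid('AAAUaOAAEAAHkJiAAA')),
             dbms_rowid.rowid_relative_fno(chartorowid('AAAUaOAAEAAHkJiAAA')),
             dbms_rowid.rowid_block_number(chartorowid('AAAUaOAAEAAHkJiAAA')),
             32767);                               -- beyond the last possible slot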
While increasing the tablespace I am getting the error below. How do I handle this?
SQL> set lin 300
SQL> col TABLESPACE_NAME for a25
SQL> col FILE_NAME for a65
SQL> select TABLESPACE_NAME, FILE_ID, FILE_NAME, AUTOEXTENSIBLE, sum(BYTES/1024/1024) MB
  2  from dba_data_files where TABLESPACE_NAME='SYSAUX' group by TABLESPACE_NAME, FILE_ID, FILE_NAME, AUTOEXTENSIBLE order by sum(BYTES/1024/1024) DESC, file_name;

TABLESPACE_NAME              FILE_ID FILE_NAME                                AUT         MB
------------------------- ---------- ---------------------------------------- --- ----------
SYSAUX                             3 /ora2/oradata/dbname/sysaux_01.dbf       NO         300

SQL> Alter database datafile 3 RESIZE 60000M;
Alter database datafile 3 RESIZE 60000M
*
ERROR at line 1:
ORA-01144: File size (7680000 blocks) exceeds maximum of 4194303 blocks
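ORA-01144 means the file would exceed the smallfile limit of 4194303 blocks (about 32 GB with an 8 KB block size), so a single datafile cannot be resized to 60000M. The usual way around it is to add another datafile to the tablespace instead; a sketch, where the path and sizes are placeholders:

ALTER TABLESPACE SYSAUX
  ADD DATAFILE '/ora2/oradata/dbname/sysaux_02.dbf'
  SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 30G;

If you only need a little more room in the existing file, it can still be resized up to the limit itself (for example RESIZE 32767M with an 8 KB block size), but anything beyond that has to come from additional datafiles.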
We have been suffering from very bad application response for the last few days. When I tried to check and drill down into where the actual contention is, I found that there may be contention on data blocks, which may be the prime reason for the degraded performance. Below I'm pasting the actual stats gathered from v$waitstat. I went through some AskTom docs and found that there may be a problem with freelists or segment space management. My data tablespace has segment space management = MANUAL.
My main questions are:
1) Do I need to increase the FREELISTS value (right now it is 1)?
2) Or should I move to segment space management = AUTO?
SQL> select * from v$waitstat;
CLASS                   COUNT       TIME
------------------ ---------- ----------
data block               2022       4052
sort block                  0          0
save undo block             0          0
segment header              1          1
save undo header            0          0
free list                   0          0
extent map                  0          0
1st level bmb               0          0
2nd level bmb               0          0
3rd level bmb               0          0
bitmap block                0          0
bitmap index block          0          0
file header block           0          0
unused                      0          0
system undo header          0          0
system undo block           0          0
undo header                 6          0
undo block                  0          0

18 rows selected.
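If the hot segments do turn out to be freelist-bound, the two fixes you list look roughly like the sketch below; the table, index and tablespace names are placeholders. Option 1 adds process freelists to the existing MSSM segment, option 2 moves the segment into an ASSM tablespace (indexes go UNUSABLE after a move and must be rebuilt):

-- Option 1: more freelists on the existing (MSSM) segment
ALTER TABLE my_hot_table STORAGE (FREELISTS 4);

-- Option 2: move the segment to an ASSM tablespace
CREATE TABLESPACE data_assm
  DATAFILE '/ora2/oradata/dbname/data_assm_01.dbf' SIZE 10G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

ALTER TABLE my_hot_table MOVE TABLESPACE data_assm;
ALTER INDEX my_hot_table_pk REBUILD;

Note that "data block" waits in v$waitstat can also come from blocks that are simply hot (many sessions hitting the same blocks, e.g. right-hand index growth), not only from freelist pressure, so it is worth confirming which segments are involved before changing anything.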
We have two data blocks, Earnings and Deductions. We need to export them to Excel, side by side in a single sheet [imagine your payslip format].
If we use normal TEXT_IO we are not able to get the result we want, so we tried a package called export2excel and achieved what we wanted. The form works perfectly in client/server mode, but when we move the same form to our Unix application server, it does not work.
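In web-deployed Forms, TEXT_IO (and anything built on top of it, which export2excel may be) writes the file on the application server, not on the user's PC, which is the usual reason a client/server form stops working after the move. One common workaround, if WebUtil is configured, is to switch the file calls to CLIENT_TEXT_IO so the file is created on the client machine. A minimal sketch, assuming WebUtil is attached to the form and the path is a placeholder:

DECLARE
   out_file CLIENT_TEXT_IO.FILE_TYPE;
BEGIN
   out_file := CLIENT_TEXT_IO.FOPEN('C:\temp\payslip.csv', 'w');
   CLIENT_TEXT_IO.PUT_LINE(out_file, 'Earnings,Amount,Deductions,Amount');
   -- loop over the Earnings and Deductions blocks here and write
   -- one CSV line per pair of records
   CLIENT_TEXT_IO.FCLOSE(out_file);
END;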
I have a table with around 650,000,000 rows, and we need to delete about 60,000,000 rows at the end of every month; the same number of rows accumulates again throughout the month. The deletion usually takes all night. We are using 10gR2 on IBM AIX. The procedure we are using to delete is:
declare
   ln_count number := 0;
begin
   for i in (select rowid from table1 where some_id < 2012090000)
   loop
      delete from table1
[code]...
When this procedure is running I mostly see that the session is busy in user I/O waits on "db file sequential read". Will using a cursor instead give better results?
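Row-by-row deletes driven by a cursor of rowids are usually the slowest option; a set-based DELETE normally does far less work. A sketch of a batched version using the same predicate as your loop (the batch size is only an example, sized to keep undo and commit frequency reasonable):

begin
   loop
      delete from table1
       where some_id < 2012090000
         and rownum <= 100000;      -- delete in batches
      exit when sql%rowcount = 0;
      commit;
   end loop;
   commit;
end;
/

If the table could be range-partitioned by month, the monthly purge would become an ALTER TABLE table1 DROP PARTITION, which is close to instantaneous compared with deleting 60 million rows.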
I have created two non-base-table text items (from_date, to_date). If values are not entered from the front end (i.e. they are null), I hardcode default values in the PRE-QUERY trigger. But even when I enter values from the front end and press the fetch button, it still takes the default values, i.e. it is not accepting the values I entered.
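This usually happens when the PRE-QUERY trigger assigns the defaults unconditionally; guarding each assignment with an IS NULL check keeps whatever the user typed. A sketch, where the block and item names and the default dates are assumptions:

-- PRE-QUERY trigger
IF :CTRL.FROM_DATE IS NULL THEN
   :CTRL.FROM_DATE := TRUNC(SYSDATE, 'MM');   -- default only when nothing was entered
END IF;
IF :CTRL.TO_DATE IS NULL THEN
   :CTRL.TO_DATE := TRUNC(SYSDATE);
END IF;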
I have 4 blocks in my form, which is basically used for travel booking for the employees of a company within India:
1) Header block: contains info about the person who is booking the tickets for a number of employees. Here I have made booking number the primary key.
2) Employee Detail: here the basic info of an employee is entered. I have taken booking number as a foreign key and made emp_cd plus booking number a composite primary key.
3) Travel Detail: here the travel details of an individual employee are entered, and a unique trv_no is generated for every single travel. Again I have taken bkng_no from the 1st block and emp_cd from the 2nd block as a foreign key, and a composite primary key comprising bkng_no, emp_cd and trv_no. This maintains uniqueness for a single travel.
4) Vehicle Hotel Details: this block is placed on a different canvas of the same form. It is meant for other details, where bookings for hotels, vehicles etc. after reaching the destination are entered. This block has no primary key of its own, but I have taken the composite primary key of the 3rd block as a foreign key, since there will be multiple entries for one entire travel.
At every level there will be multiple entries for each corresponding record. I am able to enter one single record properly, i.e. for one employee I can enter multiple travel details and his other requirements, but as soon as I try to enter more than one employee's info, travel details and other requirements, I get an error stating that a foreign key constraint is violated: parent key not found, for the 3rd-level block.
How can I get the desired output, where all the multiple records for every single subsequent record are stored correctly, taking all the constraints into consideration? (The key structure I described is sketched below.)
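For reference, a minimal sketch of the table-level key structure described above; the table and column names are assumptions, only the key shapes follow the description:

CREATE TABLE booking_header (
   bkng_no   NUMBER PRIMARY KEY,
   booked_by VARCHAR2(100)
);

CREATE TABLE employee_detail (
   bkng_no  NUMBER REFERENCES booking_header (bkng_no),
   emp_cd   VARCHAR2(20),
   emp_name VARCHAR2(100),
   CONSTRAINT emp_pk PRIMARY KEY (bkng_no, emp_cd)
);

CREATE TABLE travel_detail (
   bkng_no NUMBER,
   emp_cd  VARCHAR2(20),
   trv_no  NUMBER,
   CONSTRAINT trv_pk PRIMARY KEY (bkng_no, emp_cd, trv_no),
   CONSTRAINT trv_emp_fk FOREIGN KEY (bkng_no, emp_cd)
      REFERENCES employee_detail (bkng_no, emp_cd)
);

CREATE TABLE vehicle_hotel_detail (
   bkng_no NUMBER,
   emp_cd  VARCHAR2(20),
   trv_no  NUMBER,
   detail  VARCHAR2(400),
   CONSTRAINT vhd_trv_fk FOREIGN KEY (bkng_no, emp_cd, trv_no)
      REFERENCES travel_detail (bkng_no, emp_cd, trv_no)
);

With this shape, "parent key not found" for the 3rd-level block typically means the child rows are being posted with a bkng_no/emp_cd combination whose parent row has not been posted yet; checking the Forms block relations and making sure the foreign-key items in each child block are copied from the current parent record (copy value from item) for every new parent usually resolves it.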
If the data blocks in the buffer cache continuously get updated such that they never reach the least recently used end of the LRU list, when will they be written to disk?
I created a view on the production server which takes almost 10 to 12 minutes to show data. This view joins 3 or 4 tables, and all of their primary and unique key columns have indexes. Which index will be better for fast retrieval of the data?
I am inserting data for the years 2012 and 2013 using a procedure into partitioned tables; a partition holds crores of rows, and the load is taking a very long time (months). Is there any other way in which I can insert the data from our query faster?
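Without seeing the procedure this is only a guess, but row-by-row PL/SQL inserts are usually the bottleneck for loads of this size; a single direct-path INSERT ... SELECT, optionally parallel, is normally far faster into partitioned tables. A sketch, with placeholder table and column names:

alter session enable parallel dml;

insert /*+ append parallel(t, 4) */ into target_part_table t
select /*+ parallel(s, 4) */ *
  from source_table s
 where load_year in (2012, 2013);

commit;   -- rows loaded with direct path are visible only after commit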
What configuration does the DBA have to do to enable the "fast-start failover" concept, so that in the event of loss of the primary database the failover is invoked without requiring any manual steps? Are the settings made in the parameter file, and if yes, what are they?
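Fast-start failover is configured through the Data Guard broker (DGMGRL) rather than through the parameter file, and it needs flashback database enabled on both primary and standby plus an observer process running on a third host. A sketch, assuming a broker configuration already exists:

-- in SQL*Plus on primary and standby (the database may need to be
-- mounted rather than open, depending on version)
ALTER DATABASE FLASHBACK ON;

-- in DGMGRL
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> START OBSERVER;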
My Data Guard environment: Primary: 11.1.0.7 RAC, 2 nodes. Standby: 11.1.0.7 single instance.
I want to implement Fast-Start Failover in my environment. Reading the requirements, I found that Oracle Flashback is needed (I don't currently use Flashback on my databases).
Best practice recommends 60 min (1 hr) for DB_FLASHBACK_RETENTION_TARGET if I just want to use Flashback for fast-start failover.
My questions:
What is better: setting up the flash recovery area on ASM or on an O.S. filesystem? And what is the recommended size for the flash recovery area?
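Either location works, but on a RAC primary the flash recovery area has to be on shared storage anyway, so ASM is the usual choice there. Sizing depends on what you keep in it; with a 60-minute retention target it mainly has to hold roughly an hour of flashback logs on top of your archived redo. A sketch of the parameters involved (the disk group name and size are placeholders; the size must be set before the destination):

ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;
ALTER SYSTEM SET db_flashback_retention_target = 60 SCOPE=BOTH;  -- minutes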
I am writing SQL inside an application to retrieve data from a table either for certain IDs or for all employees, based on the user's input. I create a variable :SELECTION for the WHERE clause. When I run the SQL, I receive a "missing right parenthesis" error. There is no error if I hard-code the IDs.
If the input is 'All', the variable :SELECTION is blank, so the SQL:
select name, address, ID from employee :SELECTION
will be
select name, address, ID from employee
If the user's input is IDs such as '1234','9999', then :SELECTION = ' where ID in (:VARIABLEID)' and :VARIABLEID = '1234','9999',
so the SQL: select name, address, ID from employee :SELECTION
should be
select name, address, ID from employee where ID in ('1234','9999')
However, I get the "missing right parenthesis" error. I suspect it is something to do with the single quotes, so I tried different ways, but it is still not working.
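A bind variable is always treated as one single value, so :VARIABLEID arrives as the literal string '1234','9999' rather than as a list of IDs; that is where the error comes from, and no amount of quoting changes it. One common workaround (a sketch) is to pass the IDs as a single comma-separated string such as 1234,9999 and split it inside the query:

select name, address, id
  from employee
 where id in (select regexp_substr(:VARIABLEID, '[^,]+', 1, level)
                from dual
             connect by regexp_substr(:VARIABLEID, '[^,]+', 1, level) is not null);

The same splitting approach applies whether :SELECTION is appended as text or bound; the key point is that one bind cannot expand into an IN-list by itself.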
I want to insert the first two rows into the Hierarchy table, then I would like to update Supervisor_2 to Supervisor_4. I don't want to use CONNECT BY PRIOR here, as it takes more time to execute (there are millions of records). I need SQL code for the same.
1. For each Filename, if originating_site is not the same as receiving_site, it means the file has been sent to the receiving_site but has not been acknowledged as received yet.
2. For each Filename, if originating_site is the same as receiving_site, it means the file has been sent and received by the receiving_site.
My task is to list, per receiving_site, all the Filenames that have been sent but not yet received by the receiving_site. For example, from the sample data above I expect to see that for siteA, fileB has not been received yet.
How can I do that? I have tried MINUS and NOT EXISTS, but I am just not able to get the result I want.
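Without the sample data this is only a guess at your layout, but assuming a single table such as file_log(filename, originating_site, receiving_site), a NOT EXISTS along these lines gives "sent rows with no matching received row for the same receiving_site":

select distinct t.receiving_site, t.filename
  from file_log t
 where t.originating_site <> t.receiving_site            -- a "sent" row
   and not exists (select 1
                     from file_log r
                    where r.filename = t.filename
                      and r.receiving_site = t.receiving_site
                      and r.originating_site = r.receiving_site);   -- no "received" row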
CREATE TABLE MAT (matrl varchar2(100), date_man date, weight number(10) );
INSERT INTO MAT(Matrl,DATE_MAN,WEIGHT) VALUES ('mat1','12-DEC-10',100);
INSERT INTO MAT(Matrl,DATE_MAN,WEIGHT) VALUES ('mat2','13-DEC-10',200);
INSERT INTO MAT(Matrl,DATE_MAN,WEIGHT) VALUES ('mat3','21-DEC-10',300);
INSERT INTO MAT(Matrl,DATE_MAN,WEIGHT) VALUES ('mat4','26-DEC-10',400);
INSERT INTO MAT(Matrl,DATE_MAN,WEIGHT) VALUES ('mat5','22-DEC-10',500);
INSERT INTO MAT(Matrl,DATE_MAN,WEIGHT) VALUES ('mat6','02-DEC-10',600);
INSERT INTO MAT(Matrl,DATE_MAN,WEIGHT) VALUES ('mat7','23-DEC-10',700);
INSERT INTO MAT(Matrl,DATE_MAN,WEIGHT) VALUES ('mat8','07-DEC-10',800);
I am using an LOV on a field to retrieve data, but sometimes it does not retrieve anything because the data is unavailable. That is fine, but I want to put 'N/A' in that field when such a condition occurs.
select * from testing;

ID         NAME                 REDATA
---------- -------------------- --------------------
1          ABC,DEF,GHI,LKJ      000001
2          MNC,GHI,CTF          000002
2 rows selected
select * from testing where name like ('GHI,TCF');
Now my output should be as above, because GHI is common in both names. If any of the strings in the WHERE condition matches part of the NAME field, I need that record to be retrieved.
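LIKE compares the whole string 'GHI,TCF' literally, so it never matches. One way to get "any token of the search string matches any token of NAME" is to split the search string and compare token by token; a sketch (the search string is hard-coded here but could be a bind variable):

select *
  from testing t
 where exists (
         select 1
           from (select regexp_substr('GHI,TCF', '[^,]+', 1, level) as token
                   from dual
                connect by regexp_substr('GHI,TCF', '[^,]+', 1, level) is not null) s
          where ',' || t.name || ',' like '%,' || s.token || ',%');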
I have a block called travel_detail from which I have retrieved a few rows based on a condition using an LOV, and modification is allowed on those rows. The rows do get modified, but the block also lets me move to the next blank row and insert data there. How can I retrieve only the data that matches the given condition, and stop further blank rows from being displayed?
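If the intent is "query and update only, no new rows", the usual Forms approach is to turn off inserts on the block after the query. A one-line sketch (the block name is an assumption), typically run after EXECUTE_QUERY or in a WHEN-NEW-BLOCK-INSTANCE trigger:

-- stop new records being created while still allowing updates to the fetched ones
SET_BLOCK_PROPERTY('TRAVEL_DETAIL', INSERT_ALLOWED, PROPERTY_FALSE);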
I'm trying to retrieve data with multiple SELECT statements. The query works fine for one account (segment6) but fails for more than one account. Below is the query.
I have a detail block with a user ID column. The user name is not available in the block, therefore I have created a non-database column to retrieve the user name into.
Here is the code I've used.
PROCEDURE get_details IS
iLoop number := 1;
CURSOR c is
SELECT FULL_NAME FROM GRP_EMPLOYEE WHERE EMPLOYEE_NUMBER = :USER_ID;
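The fragment stops before the procedure body, so this is only a guess at the rest. A common way to finish it (block and item names such as DETAIL_BLK, USER_ID and USER_NAME are assumptions) is to walk the detail block and look the name up for each fetched record:

PROCEDURE get_details IS
   lv_name  GRP_EMPLOYEE.FULL_NAME%TYPE;
BEGIN
   GO_BLOCK('DETAIL_BLK');
   FIRST_RECORD;
   LOOP
      BEGIN
         SELECT full_name
           INTO lv_name
           FROM grp_employee
          WHERE employee_number = :DETAIL_BLK.USER_ID;
      EXCEPTION
         WHEN NO_DATA_FOUND THEN
            lv_name := NULL;            -- leave the name blank if no match
      END;
      :DETAIL_BLK.USER_NAME := lv_name;
      EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
      NEXT_RECORD;
   END LOOP;
END;

A simpler alternative in Forms is usually a POST-QUERY trigger on the block that does the same single-row lookup for each record as it is fetched, which avoids the explicit record navigation entirely.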
I have a table called angdata77 with attributes like asigno. I want to retrieve data from angdata77 using both GROUP BY and ORDER BY clauses. For the total count I am using the query:
select asigno,count(*) from angdata77 group by asigno order by asigno;
Is there any other query for retrieving this data from angdata77?
I have a "select ... into" in my PL/SQL and it doesn't retreive any data(I suppose just certain columns because if it doesn't retreive a row then the PL/SQL should throw an error) when there are many users accessing this table. It shouldn't be some table locking right? Because if it is, it should wait until the table is available then it will retreive data. Currently it just proceeds with the PL/SQL and selecting nothing in that query. what details should I look at to trace the cause of this problem.