I am trying to delete one month's data from a table that contains end-customer sales data. The total row count in the table is 30 crore (300 million), and one month's data is nearly 10 crore (100 million). I am using Oracle 10g, and the date column used in the DELETE condition is indexed.
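For this volume (deleting roughly a third of the table), a single DELETE generates enormous undo and redo, so a common alternative is to recreate the table with only the rows to keep. A minimal sketch, assuming a hypothetical table SALES with date column SALE_DATE and June 2012 as the month to remove:

CREATE TABLE sales_keep NOLOGGING AS
  SELECT *
  FROM sales
  WHERE sale_date <  TO_DATE('01-06-2012', 'DD-MM-YYYY')
     OR sale_date >= TO_DATE('01-07-2012', 'DD-MM-YYYY');

-- Then drop the original table, rename sales_keep back to sales,
-- and recreate the indexes, constraints, and grants.

If the table must stay online, the other common approach is to delete in batches of a few thousand rows with a commit per batch, so no single transaction holds 10 crore rows of undo.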
I have a problem viewing tens of millions of rows in Oracle 11g. I have a table Book_Issue that holds tens of millions of rows, but when I execute a query to see all the data:

select * from Book_Issue

it only shows 5,000 rows. What query should I execute to view millions of rows in Oracle 11g?
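The 5,000-row cutoff is almost certainly the client tool's fetch limit (SQL Developer, for example, only fetches a limited number of rows at a time), not the query truncating the result. To page through the table explicitly, a minimal ROWNUM sketch that works on 11g:

SELECT *
FROM (SELECT t.*, ROWNUM rn
      FROM Book_Issue t
      WHERE ROWNUM <= 200000)
WHERE rn > 100000;   -- returns rows 100,001 to 200,000

Fetching tens of millions of rows into a GUI grid is rarely useful in any case; spooling to a file from SQL*Plus handles full extracts better.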
I have been using MS Access to do some updating and deleting of data. I have two tables: one with data, and one with the criteria to change, with two columns (FINDWORD and REPLACEWORD).

The main table has lots of data, and the SQL query I'm using in Access looks like this:
UPDATE TABLE1, tblFindReplaceWords
SET CleanList.COLUMN-NAME = Replace([TABLE1].[COLUMN-NAME],[tblFindReplaceWords].[FINDWORD],[tblFindReplaceWords].[REPLACEWORD]);
How would I go about doing the same thing in Oracle?
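Oracle has no direct equivalent of Access's multi-table UPDATE, but it does have a REPLACE function. A minimal sketch, assuming the main table is TABLE1 with a column COLUMN_NAME (Oracle identifiers cannot contain a hyphen), looping over the find/replace pairs in PL/SQL:

BEGIN
  FOR r IN (SELECT findword, replaceword FROM tblFindReplaceWords) LOOP
    -- Apply one find/replace pair per pass; the LIKE filter limits
    -- the update to rows that actually contain the search word.
    UPDATE table1
    SET column_name = REPLACE(column_name, r.findword, r.replaceword)
    WHERE column_name LIKE '%' || r.findword || '%';
  END LOOP;
  COMMIT;
END;
/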
I have a primary database and a physical standby. Logs are shipping perfectly from primary to standby. The logs that are applied on the standby are being stored on a mount point (Linux, /opt2), which is consuming a lot of space (160 GB). Would it be right for me to go ahead and delete them?
My RMAN configuration on the primary is:
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 4;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
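Rather than removing the files at the OS level (which leaves the control file thinking they still exist), the usual approach is to let RMAN delete logs that have already been applied on the standby. A minimal sketch, assuming the logs are tracked by RMAN:

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 7';

The second command removes the physical files and the control-file records together, so the space under /opt2 is actually reclaimed.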
I have a tablespace with a 20 GB datafile. By mistake I deleted the datafile, and then tried to drop the tablespace from EM, but I got an error which says that the datafile is not present to delete.

Initially, after deleting the file physically, I checked space with df -ah at the OS level, and the space had not been reclaimed. Then I tried to drop the tablespace from EM, which gave me the above error, presumably because the tablespace still exists in the data dictionary. So how can I reclaim the space?
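Since the file is already gone from disk, the dictionary has to be told to stop expecting it before the tablespace can be dropped. A minimal sketch, assuming a hypothetical tablespace USERS_TS and datafile path:

ALTER DATABASE DATAFILE '/opt/oradata/users_ts01.dbf' OFFLINE FOR DROP;
DROP TABLESPACE users_ts INCLUDING CONTENTS;

Note that df may keep reporting the space as used until every process that had the deleted file open (including Oracle background processes) releases its handle; bouncing the instance releases such handles.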
I have a table which has plenty of rows. In production, I would estimate it at 30 to 300 million rows. I need to update one column (flag) in all the rows created before a certain date. Now, just saying:

UPDATE MyTable
SET flag = 3
WHERE created < TO_DATE('2010-10-08 23:59:59', 'YYYY-MM-DD HH24:MI:SS');
COMMIT;

does not seem like a good idea: the undo for a single transaction that size would become too big. I will write a PL/SQL script for this. The question is whether I should:

a) update each row separately and commit after every 10,000 rows (WHERE RowId = [rowId]), or
b) update 10,000 rows at a time using ranges (WHERE rowId > [some_row_id] AND RowId < [some_row_id_2]).

In the latter case the some_row_ids would naturally be fetched first. The rowIds come from a sequence. So which one would be more effective? I am not too familiar with PL/SQL, or with Oracle for that matter.
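A minimal sketch of a batched variant that needs neither per-row commits nor precomputed rowid ranges, assuming the MyTable/flag/created names from above: update a ROWNUM-limited chunk, commit, and repeat until nothing is left.

DECLARE
  l_cutoff CONSTANT DATE :=
    TO_DATE('2010-10-08 23:59:59', 'YYYY-MM-DD HH24:MI:SS');
BEGIN
  LOOP
    UPDATE MyTable
    SET flag = 3
    WHERE created < l_cutoff
      AND flag != 3              -- skip rows already done
      AND ROWNUM <= 10000;       -- one batch per iteration
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/

(If flag can be NULL, the flag != 3 predicate needs an NVL.) Between the two options listed, (b) is generally more effective: per-row updates and commits, as in (a), maximize round trips and redo per row.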
There is a table in the database with millions of records, and a query --- Select rowid, ANI, DNIS, message from tbl_sms_talkies where rownum <= :"SYS_B_0" --- is using high CPU; this query also has a high number of executions.
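The :"SYS_B_0" bind name indicates that CURSOR_SHARING is rewriting a literal ROWNUM value, so many application calls collapse into this one cursor. A minimal sketch for locating the statement's statistics in v$sql:

SELECT sql_id, executions, cpu_time, buffer_gets
FROM v$sql
WHERE sql_text LIKE 'Select rowid, ANI, DNIS, message from tbl_sms_talkies%';

From there, DBMS_XPLAN.DISPLAY_CURSOR with the sql_id shows whether the high CPU comes from the plan itself or simply from the execution count.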
Which is the fastest way of inserting 60 million records from a view into a table?
method 1:
create table t_temp_table as select * from v_dump_data;
method 2:
through bulk collect
---Bulk_Collect With FORALL----------
DECLARE
  TYPE srvc_tab IS TABLE OF t_temp_table%ROWTYPE;
  l_srvc_tab    srvc_tab := srvc_tab();
  l_start_time  NUMBER;
  l_end_time    NUMBER;
  l_error_count NUMBER;
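For a straight view-to-table copy of 60 million rows, a third option usually beats both: a direct-path insert, which bypasses the buffer cache and (with NOLOGGING) most redo. A minimal sketch:

ALTER TABLE t_temp_table NOLOGGING;

INSERT /*+ APPEND */ INTO t_temp_table
SELECT * FROM v_dump_data;

COMMIT;  -- required before the table can be queried again after APPEND

Method 1 (CTAS) gets the same direct-path behaviour when the table does not exist yet; BULK COLLECT/FORALL only pays off when rows need per-row processing on the way through.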
After creating all the tables and the constraints, and loading data into the tables, I want to delete everything. I tried using DROP TABLE, but it doesn't get rid of the constraints.
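A minimal sketch, with a hypothetical table name: CASCADE CONSTRAINTS drops the foreign key constraints in other tables that reference the dropped table, and PURGE bypasses the recycle bin (10g and later), so nothing is left behind.

DROP TABLE mytable CASCADE CONSTRAINTS PURGE;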
Looking at my Oracle DB schema, I see a considerable number of tables that appear to be backups(!!) of other tables; don't ask me why someone would create backup/temp tables in production.

All the tables together occupy almost 7 GB of space, and I would like to get rid of them. Before I do, I would like to see when each table was last accessed for any purpose, including any DDL/DML statements as well as any SELECT statements performed on the temp/backup tables. Is this kind of information readily available in any SYS tables, or should I write a trigger to capture the details going forward?
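DDL activity is recorded in the dictionary, but SELECT access is not tracked unless auditing is enabled. A minimal sketch of both halves, with a hypothetical table name in the AUDIT statement:

-- Last DDL against each table (dictionary, available immediately):
SELECT object_name, created, last_ddl_time
FROM user_objects
WHERE object_type = 'TABLE'
ORDER BY last_ddl_time;

-- Capture reads and writes going forward (requires AUDIT_TRAIL enabled):
AUDIT SELECT, INSERT, UPDATE, DELETE ON scott.backup_table_1 BY ACCESS;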
For one particular employee, there are multiple records with the same data in the table:
EMPNO  ENAME  JOB  SAL  DOB
-----  -----  ---  ---  ----
1      A      X    100  1956
2      B      Y    200  1974
1      A      X    100  1956
3      C      Z    300  1920
[Code]....
Like this, I have the duplicates multiple times.

I have written the query below to delete the duplicate records, but it deletes only one record (if we have 5 duplicates, it deletes only 1). What I am looking for is: if there are 5 duplicates, delete 4 of them and keep 1 record in the table.

The query I am using to delete the duplicates is:

DELETE FROM Table1 a
WHERE ROWID IN (SELECT MAX(ROWID) FROM Table2 b WHERE a.ID = b.ID);

It deletes only one row, but I want to delete 4 records out of 5 and keep one.
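The posted query deletes only one row per key because MAX(ROWID) selects exactly one rowid per ID. The usual fix inverts the logic: keep one rowid per group and delete everything that is not it. A minimal sketch against a single table, assuming EMPNO identifies a duplicate group:

DELETE FROM table1
WHERE ROWID NOT IN (SELECT MIN(ROWID)
                    FROM table1
                    GROUP BY empno);

With 5 duplicates this removes 4 and keeps the row with the lowest ROWID.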
create or replace Procedure ReadingsPurge As
  v_sql   varchar2(500);
  v_date  date;
  p_count NUMBER;
[Code]...
-- Code below drops partitions that are older than the NoOfDays parameter
OPEN c1;
LOOP
  FETCH c1 INTO v_partition_name, v_high_value;
  EXIT WHEN c1%NOTFOUND;
[Code]....
The above code compiles successfully.

After I added the lines marked in red font, when I tried to execute the stored procedure, I got an error:

Error starting at line 1 in command:
execute ReadingsPurge
Error report:
ORA-00933: SQL command not properly ended
ORA-06512: at "CDC_USER.READINGSPURGE", line 30
ORA-06512: at line 1
00933. 00000 - "SQL command not properly ended"
*Cause:
*Action:
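ORA-00933 raised from inside a procedure almost always comes from a dynamic SQL string, and the most common cause is a trailing semicolon inside the string handed to EXECUTE IMMEDIATE (the semicolon is a client terminator, not part of the SQL). A minimal sketch of the likely fix, with hypothetical table and partition names:

DECLARE
  v_sql            VARCHAR2(500);
  v_partition_name VARCHAR2(30) := 'P_20120101';  -- hypothetical
BEGIN
  -- No ';' inside the string: 'ALTER TABLE ... DROP PARTITION x;' raises ORA-00933.
  v_sql := 'ALTER TABLE readings DROP PARTITION ' || v_partition_name;
  EXECUTE IMMEDIATE v_sql;
END;
/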
Consider tables A, B, C, D, E, and F, all having 100,000+ records. Tables B, C, and D are dependent on table A (with foreign key constraints). When I delete records from all the tables, tables B, C, and D take at most 30-40 seconds each, while table A takes 30-40 minutes. All the tables have indexes.
Method I have used:
1. Created a temp table.
2. Then deleted all records from B, C, D, E, and F for all records in the temp table, in batches of 500.
delete from B where exists (select 1 from temp where b.col1=temp.col1);
3. Why is it taking so much time to delete records from table A?
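The classic cause of this pattern is an unindexed foreign key: when a row is deleted from parent table A, Oracle checks each child table for referencing rows, and without an index on the child's FK column each parent-row delete full-scans the child. A minimal (simplified, single-column) sketch for finding FK columns with no matching index:

SELECT c.table_name, cc.column_name, c.constraint_name
FROM user_constraints c
JOIN user_cons_columns cc
  ON cc.constraint_name = c.constraint_name
WHERE c.constraint_type = 'R'
  AND NOT EXISTS (SELECT 1
                  FROM user_ind_columns ic
                  WHERE ic.table_name      = cc.table_name
                    AND ic.column_name     = cc.column_name
                    AND ic.column_position = cc.position);

Indexing the FK columns on B, C, and D would normally bring the delete on A down from minutes to seconds.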
I'm testing a procedure which loads data into my database, and after each test I want to empty some of the tables and reset the sequences. I have this script to do that...
DELETE FROM COM_MERGE;
COMMIT;
DELETE FROM COM_TITLE;
COMMIT;
DELETE FROM COM_ISSUE;
COMMIT;
DELETE FROM COM_PAGE_ELEMENT;
COMMIT;
DELETE FROM COM_ELEMENT;
COMMIT;
DELETE FROM COM_STORY_TITLE;
COMMIT;
BEGIN
  COM_RESET_SEQUENCES;
END;
Today I added the call to the sequence-reset procedure to my script, but I had been using the script to delete from the tables for a number of days without problems. However, today I am finding that when I run the script it works fine the first couple of times, but when I try running it a third time, it hangs after the second delete (in other words, it stops when it gets to the DELETE FROM COM_ISSUE). After this happened the first couple of times, I stopped the database and restarted it; the script then ran fine twice, but again it hangs. There is no error message, but the script fails to complete.

I didn't know if it was because originally I had one commit at the end of the script, so I added commits after each delete, but that didn't solve it. I am using SQL Developer, but I have found the same problem when running the script from SQL*Plus. This is the definition of the COM_ISSUE table (just in case the table is the source of the problem). There is only one row in COM_ISSUE.
CREATE TABLE "BILL"."COM_ISSUE" (
  "CI_ID"        NUMBER NOT NULL ENABLE,
  "CI_TITLE"     NUMBER NOT NULL ENABLE,
  "CI_DATE"      NUMBER NOT NULL ENABLE,
  "CI_PRICE"     NUMBER NOT NULL ENABLE,
  "CI_PUBLISHER" NUMBER NOT NULL ENABLE,
  CONSTRAINT "COM_ISSUE_PK" PRIMARY KEY ("CI_ID") USING INDEX
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 65536
[code]....
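A DELETE that hangs with no error is almost always waiting on a row lock held by an uncommitted transaction in another session (a second SQL Developer or SQL*Plus connection that touched COM_ISSUE and never committed would do it). A minimal sketch to check while the script is hanging:

SELECT sid, serial#, username, blocking_session, event
FROM v$session
WHERE blocking_session IS NOT NULL;

If a blocking session shows up, committing or killing it should let the script proceed; that would also explain why restarting the database "fixes" it temporarily.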
I want to delete master/detail data through a cursor, for dates from 01-02-2010 till 10-02-2010. The problem is that the detail table has no date column, but I have to delete both the master and detail records for the desired date range. I have made a cursor, but it deletes only the detail records; I want to delete the master records too.
Master Table:

M_NO      CHAR (12) NOT NULL,
REMARKS   VARCHAR2 (200),
CANCEL_YN CHAR (1) NOT NULL,
M_DATE    DATE NOT NULL,
PRIMARY KEY ( M_NO ) );
Detail Structure:

M_SNO        NUMBER NOT NULL,
ACCOUNT_CODE CHAR (19) NOT NULL,
CANCEL_YN    CHAR (1) NOT NULL,
M_DESC       VARCHAR2 (200),
DB_AMT       NUMBER,
CR_AMT       NUMBER,
M_NO         CHAR (12) NOT NULL,
PRIMARY KEY ( M_SNO, M_NO ) );
create or replace procedure test as
  cursor md_cur is
    select m_no
    from master
    where m_date between '01-02-2010' and '10-02-2010';
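Since the detail table carries M_NO, the cursor is not strictly needed: the detail rows can be deleted through the master's date range, then the master rows. A minimal sketch, assuming the detail table is named DETAIL and using explicit TO_DATE instead of comparing a DATE column to strings:

DECLARE
  d_from DATE := TO_DATE('01-02-2010', 'DD-MM-YYYY');
  d_to   DATE := TO_DATE('10-02-2010', 'DD-MM-YYYY');
BEGIN
  -- Children first, so the FK on DETAIL.M_NO does not block the master delete.
  DELETE FROM detail
  WHERE m_no IN (SELECT m_no FROM master
                 WHERE m_date BETWEEN d_from AND d_to);

  DELETE FROM master
  WHERE m_date BETWEEN d_from AND d_to;

  COMMIT;
END;
/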
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
"CORE 11.2.0.3.0 Production"
TNS for 32-bit Windows: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
We are in the process of setting up our backup policy. After the archived logs have been backed up, we need to delete them after 7 days, including the actual files on disk.
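RMAN can do both at once: remove the physical files from disk and update the control-file records. A minimal sketch, assuming the log backups go to disk:

RMAN> DELETE ARCHIVELOG ALL
        BACKED UP 1 TIMES TO DEVICE TYPE DISK
        COMPLETED BEFORE 'SYSDATE - 7';

Run from a scheduled job, this keeps only the last 7 days of already-backed-up logs on disk.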
I have a table, a superclass and several derived classes.
The table definition looks like this:
/*-------------------------------------------------------------------------*/
/* table to store ObjectData for the current session/transaction            */
/*-------------------------------------------------------------------------*/
create table SessionObjectRegistry(
  oid        number,
  transactid varchar2(200),
  sessionid  number,
  hash       number,
[Code]....
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkbnftn2], [], [], [], [], [], [], [], [], [], [], []

The DB is Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production, with the Partitioning, OLAP, Data Mining and Real Application Testing options. The error occurs on 11.2.0.3.0 - 64bit as well.
As a workaround, I'm not deleting the rows but setting the stored objects to NULL. At the end of the routine I can truncate the table with the objects, since truncating works. I'm not the administrator of the database, so I may not change any settings.

Unfortunately, I wanted to be able to share objects between sessions, which is possible with this workaround (simply don't truncate at the end), but it leaves a lot of crap lying around.
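For reference, a minimal sketch of that workaround, assuming the object column is named obj (the real column name is in the elided part of the DDL above): NULL out the stored objects instead of deleting the rows, which sidesteps the ORA-00600, then truncate once the routine is done.

-- Instead of DELETE (which hits the ORA-00600):
UPDATE SessionObjectRegistry
SET obj = NULL
WHERE sessionid = SYS_CONTEXT('USERENV', 'SID');

-- At the end of the routine (TRUNCATE works where DELETE does not):
TRUNCATE TABLE SessionObjectRegistry;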