The SYSAUX tablespace is running low. We have set the AWR retention to 60 days, and we are not interested in extending the SYSAUX tablespace. We usually take an AWR report once a week, and sometimes we also run ADDM and ASH reports.
SQL> select TABLESPACE_NAME, FILE_NAME, BYTES/(1024*1024), AUTOEXTENSIBLE, MAXBYTES/(1024*1024) from dba_data_files where tablespace_name = 'SYSAUX';
1. What is the best solution? 2. Can I shrink the SYSAUX tablespace? 3. I think the total size of all occupants in the SYSAUX tablespace is less than 200 MB, so how do I find the actual contents of the SYSAUX tablespace? 4. What could be the reason for the growth, and is there any way to free space from the SYSAUX tablespace?
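For question 3, the actual consumers of SYSAUX can be listed from V$SYSAUX_OCCUPANTS; the MOVE_PROCEDURE column also shows whether an occupant can be relocated out of SYSAUX. A minimal sketch:

-- List SYSAUX occupants by space used, largest first
select occupant_name,
       occupant_desc,
       schema_name,
       move_procedure,
       space_usage_kbytes/1024 as space_mb
from   v$sysaux_occupants
order  by space_usage_kbytes desc;

If SM/AWR turns out to be the largest occupant, reducing the retention with DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS is the usual way to let that space be reused, though it is only reclaimed gradually as old snapshots are purged.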
I got the error ORA-01653: unable to extend table <OWNER.TABLE_NAME> by <BYTES> in tablespace <TABLESPACE> in my production database, even though I could see 4 GB of free space in the tablespace. The error was resolved after increasing the tablespace by 1 more GB. So I want to know how I can reclaim that 4 GB of space.
While searching the internet I found a few hints: there may be fragmentation in the tablespace, meaning the free space is not one contiguous block. The usual advice is to reorganize the tables with the ALTER TABLE <TABLE_NAME> MOVE <TABLESPACE>; command.
But my tablespace contains a huge number of tables and I cannot reorganize all of them. So I need to know how to identify which tables are the most fragmented, so that I can reorganize only those.
My database version is 9.2.0.7 and the tablespaces are locally managed.
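To shortlist candidates without reorganizing everything, one rough approach (a sketch that relies on reasonably fresh optimizer statistics in DBA_TABLES) is to compare the space each table actually occupies with an estimate derived from NUM_ROWS and AVG_ROW_LEN, and reorganize only the tables with the largest gap; the tablespace name below is a placeholder:

-- Rough estimate of wasted space per table; relies on up-to-date statistics
select t.owner,
       t.table_name,
       s.bytes/1024/1024                              as allocated_mb,
       round(t.num_rows * t.avg_row_len/1024/1024)    as estimated_data_mb,
       round(s.bytes/1024/1024
             - t.num_rows * t.avg_row_len/1024/1024)  as approx_wasted_mb
from   dba_tables   t
join   dba_segments s
       on s.owner = t.owner and s.segment_name = t.table_name
where  t.tablespace_name = 'YOUR_TABLESPACE'   -- assumption: replace with the real name
and    s.segment_type = 'TABLE'
order  by approx_wasted_mb desc;

For the top candidates, DBMS_SPACE.UNUSED_SPACE can then confirm how much space really sits above the data before deciding to move them.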
I have a tablespace of 512 GB. Based on the tables and indexes created in it, the space consumed should be about 319 GB, but when I query the free space I get only 124 GB. That means around 70 GB of space appears to be missing.
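A sketch that may help narrow down where the space went: compare what the segments actually occupy, what is currently free, and the total datafile size for that tablespace (replace YOUR_TS with the real name):

-- Space allocated to segments
select sum(bytes)/1024/1024/1024 as used_gb
from   dba_segments
where  tablespace_name = 'YOUR_TS';

-- Free space currently available for new extents
select sum(bytes)/1024/1024/1024 as free_gb
from   dba_free_space
where  tablespace_name = 'YOUR_TS';

-- Total size of the datafiles
select sum(bytes)/1024/1024/1024 as total_gb
from   dba_data_files
where  tablespace_name = 'YOUR_TS';

If used plus free roughly equals the total, nothing is actually lost: a common explanation for the gap against the 319 GB estimate is space already allocated inside segments (below their high-water marks), which counts as used even though the estimate did not expect it.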
We are using a GTT (global temporary table) to store summarized data and display it on the same screen (10g, 10.2.0.5). We are now hitting temporary tablespace space issues very frequently, because our client does not enable autoextend on the temp tablespace. We analyzed the AWR and found there are about 900,000 inserts per hour on average. The client DBA claims there are one or two sessions that keep inserting into the temporary table continuously for 2-3 days.
According to him, one session had been running since 28th Aug and the problem appeared on 2nd Sep; after killing the problematic session the application works fine again. Generally this problem occurs at the weekend. I have discussed it with our dev team and, according to them, there is no session leakage issue.
The following is the insert statement:

INSERT INTO DT_CA_STNDALN_DETAILS_TMP (ORG_ID, BA_PRODTYPE_ID, MAX_TENOR, GROSSLIMIT, GROSS_UTILISATION, HAS_MDR, HAS_CLUSTER, SIGN_IND, GROSS_AVAILABILITY, COLLATERAL, NET_UTILISATION, DT_CA_STNDALN_DETAILS_TMP_VER, DM_LSTUPDDT, NET_AVAILABILITY) VALUES (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,sysdate,:13);
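To see which sessions are actually holding temp space at any moment, a query along these lines may help (V$TEMPSEG_USAGE is the 10g name; it exposes the same data as the older V$SORT_USAGE):

-- Temporary tablespace usage per session
select s.sid,
       s.serial#,
       s.username,
       s.module,
       u.tablespace,
       u.segtype,
       round(u.blocks * p.value / 1024 / 1024) as temp_mb
from   v$tempseg_usage u
join   v$session       s on s.saddr = u.session_addr
cross join (select value from v$parameter where name = 'db_block_size') p
order  by temp_mb desc;

Running this periodically over the weekend should show whether one or two sessions really do keep accumulating temp space for days, as the client DBA claims.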
I need to check the exact amount of space (in bytes or MB) used by a table that has a BLOB column. I tried the following query, but it does not give the correct usage.
select segment_name, sum(bytes)
from dba_extents
where segment_type = 'TABLE'
and segment_name in ('TEST_CLOB','TEST_BLOB','TEST_CLOB_ADV','TEST_BLOB_ADV')
group by segment_name;

I even tried the following stored procedure:

create or replace procedure sp_get_table_size (p_table_name varchar2)
as
  l_segment_name        varchar2(30);
  l_segment_size_blocks number;
  l_segment_size_bytes  number;
  l_used_blocks         number;
  l_used_bytes          number;
  l_expired_blocks      number;
  l_expired_bytes       number;
  l_unexpired_blocks    number;
  l_unexpired_bytes     number;
begin
  select [code].......
But it gives the following error:
Error starting at line 298 in command:
exec sp_get_table_size ('TEST_CLOB_ADV')
Error report:
ORA-03213: Invalid Lob Segment Name for DBMS_SPACE package
ORA-06512: at "SYS.DBMS_SPACE", line 210
ORA-06512: at "SYS.SP_GET_TABLE_SIZE", line 20
ORA-06512: at line 1
03213. 00000 - "Invalid Lob Segment Name for DBMS_SPACE package"
*Cause: The Lob Segment specified in the DBMS_SPACE operation does not exist.
*Action: Fix the Segment Specification

I get this even though the LOB storage is specified in the CREATE TABLE syntax.
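The ORA-03213 usually means DBMS_SPACE was handed the table or column name rather than the LOB segment name; the name it expects is the one in USER_LOBS.SEGMENT_NAME. A sketch of an alternative that avoids DBMS_SPACE altogether and simply sums the table segment, the LOB segment and the LOB index from USER_SEGMENTS (the table name is one of those from the post, adjust as needed):

-- Total space for a table including its LOB segments and LOB indexes
select sum(bytes)/1024/1024 as total_mb
from   user_segments
where  segment_name = 'TEST_BLOB'
   or  segment_name in (select segment_name from user_lobs where table_name = 'TEST_BLOB')
   or  segment_name in (select index_name   from user_lobs where table_name = 'TEST_BLOB');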
I've heard that this statement causes a table lock, but I can't find any information on this. If so, is it a write lock only, or also a read lock on the table?
I am using Oracle 10.2.0.3. Since yesterday I have been seeing a session with SID 1160 using the undo tablespace, but I am not able to find out how much it is using. I need to know which sessions are using undo, from which module, and how much undo each of those sessions consumes. I have tried searching, but the queries I found give different results each time.
I also need the same information for the redo being generated.
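A sketch for both questions, using views that exist on 10.2.0.3: the undo currently held by each active transaction comes from V$TRANSACTION (joined to V$SESSION via TADDR), and cumulative redo per session comes from the 'redo size' statistic in V$SESSTAT:

-- Undo currently used by each active transaction, with the owning session
select s.sid, s.serial#, s.username, s.module,
       t.used_ublk as undo_blocks,
       t.used_urec as undo_records
from   v$transaction t
join   v$session     s on s.taddr = t.addr
order  by t.used_ublk desc;

-- Redo generated so far by each session (cumulative since logon)
select s.sid, s.username, s.module,
       st.value/1024/1024 as redo_mb
from   v$sesstat  st
join   v$statname n on n.statistic# = st.statistic#
join   v$session  s on s.sid = st.sid
where  n.name = 'redo size'
order  by st.value desc;

Note that V$TRANSACTION only shows undo for transactions that are still open, which is why ad-hoc queries can return different numbers from one run to the next.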
While trying to refresh a materialized view, Oracle throws a "cannot extend temp tablespace" error. Before the refresh starts the temp tablespace is empty, but once the refresh begins the temp tablespace keeps growing and eventually throws the "cannot extend temp tablespace" error. The temp tablespace is 200 GB. When I monitor the session, it is doing a sort on a table (ammt_pol_ag_comm); only about 4% of this sort completes before it fails, having filled the entire 200 GB tablespace. The MView script is below.
CREATE MATERIALIZED VIEW ammv_agent_pol_persis_emas
  NOLOGGING
  PARALLEL 10
  BUILD IMMEDIATE
  REFRESH ON DEMAND
  WITH PRIMARY KEY
AS
How do I monitor the growth in tablespace usage caused by a particular table? For example, if I am loading data into a table, first 10,000 records and then another 50,000 records, the space used in the tablespace grows gradually with each load. For this scenario, how can I monitor the increment in space?
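A simple sketch: the space allocated to the table's segment can be read from DBA_SEGMENTS; capturing it before and after each load gives the increment (owner and table name below are placeholders):

-- Allocated space for one table; run before and after each load and compare
select segment_name,
       tablespace_name,
       bytes/1024/1024 as size_mb,
       extents
from   dba_segments
where  owner        = 'YOUR_SCHEMA'    -- assumption: placeholder owner
and    segment_name = 'YOUR_TABLE';    -- assumption: placeholder table

On 10g and later, DBA_HIST_SEG_STAT in AWR also records space allocation history for active segments, which can give the growth over time without taking manual snapshots.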
It is not possible to run SHRINK SPACE against a table with a function-based index. This is documented, and I've tested it on the current release. I've reverse-engineered it a bit, and the issue is in fact that you cannot SHRINK SPACE if there is an index on a virtual column:

SQL> create table t1(c1 number, c2 as (c1*2)) segment creation immediate;
Table created.
SQL> alter table t1 enable row movement;
Table altered.
SQL> alter table t1 shrink space;
Table altered.
SQL> create index i1 on t1(c2);
Index created.
SQL> alter table t1 shrink space;
alter table t1 shrink space
ERROR at line 1:
ORA-10631: SHRINK clause should not be specified for this object.
The primary key constraint on transaction_dtl_bk is blocking the insertion of the subsequent correct rows.
CREATE OR REPLACE PROCEDURE NP_DB.san_po_nt_wnpg_1 ( dt DATE )
IS
   v_sql_error VARCHAR2 (100);   -- added by sanjiv
   v_sqlcode   VARCHAR2 (100);   -- added by sanjiv
Oracle 11g. I have a large table of 125 million records, t3_universe. This table never gets updated or altered once loaded; it holds data that we receive from a lead company. I need to select records from this large table that fit certain demographic criteria and insert them into a smaller table, T3_Leads, which will then be updated with when the lead was mailed and other relevant information. So the task is to select records from this 125-million-record table and insert them into the smaller table.
I have tried a variety of things - views, materialized views, direct insert into the smaller table - and I think I am probably missing other approaches. My current attempt is to create a view using the query that selects the records, as shown below, and then a second query that inserts into T3_Leads from this view V_Market. This is very slow. Can I just use an INSERT INTO T3_Leads with this query? It did not seem to work with the WITH clause. My index on the large table is t3_universe_composite and includes zip_code, address_key and household_key.
CREATE VIEW V_Market AS
WITH got_pairs AS (
    SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */
           l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state,
           l.household_key, l.hh_type AS l_hh_type, l.address_key, l.narrowband_income,
           l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type AS filler_data,
           l.p1_seq_no, l.p2_seq_no,
           ROW_NUMBER() OVER (PARTITION BY l.address_key
                              ORDER BY l.hh_verification_date DESC) AS r_num
    FROM   t3_universe e
    JOIN   t3_universe l
           ON  l.address_key = e.address_key
           AND l.zip_code    = e.zip_code
           AND l.p1_gender  != e.p1_gender
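On the question of inserting directly: in Oracle the WITH clause can be part of the subquery of an INSERT ... SELECT, so the intermediate view may not be needed at all. A sketch with the column list abbreviated and an assumed filter of r_num = 1 to keep only the most recently verified row per address:

INSERT /*+ APPEND */ INTO T3_Leads (zip_code, address_key, household_key /* ...other columns... */)
WITH got_pairs AS (
    SELECT l.zip_code, l.address_key, l.household_key,
           ROW_NUMBER() OVER (PARTITION BY l.address_key
                              ORDER BY l.hh_verification_date DESC) AS r_num
    FROM   t3_universe e
    JOIN   t3_universe l
           ON  l.address_key = e.address_key
           AND l.zip_code    = e.zip_code
           AND l.p1_gender  != e.p1_gender
)
SELECT zip_code, address_key, household_key
FROM   got_pairs
WHERE  r_num = 1;   -- assumption: keep the most recently verified row per address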
My scenario is that I need to insert a record into a history table when it is updated through a tabular form: insert the updated record along with the additional columns Action_by, Action_type (e.g. Update or Delete) and Action_date into the history table. In other words, the history table contains all the columns of the main table shown in the tabular form, plus these additional columns Action_by, Action_type and Action_date.
Now, I don't want to create a before/after update trigger on the base table; instead I would like to create a generic procedure that inserts the updated record into the history table, taking the page alias and page ID as parameters (by generic I mean a procedure that applies to all the tabular forms/tables contained in the application).
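A minimal sketch of such a generic procedure, assuming the history table is named <base table>_HIST and has the base table's columns followed by ACTION_BY, ACTION_TYPE and ACTION_DATE; translating the page alias/page ID into the base table name would need a small mapping lookup, which is omitted here, and all names are placeholders:

CREATE OR REPLACE PROCEDURE log_history (
    p_table_name  IN VARCHAR2,   -- base table behind the tabular form
    p_pk_column   IN VARCHAR2,   -- its primary-key column
    p_pk_value    IN VARCHAR2,   -- key of the row being updated or deleted
    p_action_by   IN VARCHAR2,
    p_action_type IN VARCHAR2    -- 'UPDATE' or 'DELETE'
) IS
BEGIN
   -- Copy the current row into <table>_HIST and stamp the audit columns.
   -- Assumes the history table's columns are the base columns in the same
   -- order, followed by ACTION_BY, ACTION_TYPE, ACTION_DATE.
   EXECUTE IMMEDIATE
      'INSERT INTO ' || p_table_name || '_HIST '
      || 'SELECT t.*, :by, :typ, SYSDATE FROM ' || p_table_name
      || ' t WHERE ' || p_pk_column || ' = :pk'
      USING p_action_by, p_action_type, p_pk_value;
END log_history;
/

Because the table and column names are concatenated into dynamic SQL, they should come from a trusted mapping table (or be validated with DBMS_ASSERT) rather than directly from user input.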
I need to take a snapshot of a table before an insert or update happens to that table, in Oracle 10g. I am reading the MV docs from Oracle and the link below.
[URL].......
How should the MV be written for this, and how do I schedule it with DBMS_JOB for automatic refresh?
Assuming that t1 is the table where the DML operations are going to happen, the snapshot has to be taken before any insert or update; I am assuming that to do this it would look something like this:
create materialized view my_view refresh fast as select * from t1;
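A sketch of the supporting pieces, assuming t1 has a primary key: REFRESH FAST needs a materialized view log on t1, and the refresh itself can be scheduled through DBMS_JOB (DBMS_SCHEDULER would work equally well on 10g):

-- Needed for REFRESH FAST
CREATE MATERIALIZED VIEW LOG ON t1 WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW my_view
  REFRESH FAST ON DEMAND
  AS SELECT * FROM t1;

-- Schedule an automatic refresh, e.g. every hour
DECLARE
   l_job BINARY_INTEGER;
BEGIN
   DBMS_JOB.SUBMIT(
      job       => l_job,
      what      => 'DBMS_MVIEW.REFRESH(''MY_VIEW'', ''F'');',
      next_date => SYSDATE,
      interval  => 'SYSDATE + 1/24');
   COMMIT;
END;
/

One caveat: an on-demand or scheduled refresh captures the table as of refresh time; it does not automatically snapshot the table immediately before each insert or update, so the refresh has to be run (or scheduled) ahead of the DML you want to capture the prior state of.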
Step 1. Truncate the export table, then insert all records from the external table into it.
Step 2. Read in a record from the export map table.
Step 3. Search through the export table records looking for the key words BRANCH =, and compare the branch code with the branch code from the map table.
Step 4. If a match is found, mark all records in the export table for that worksheet with the global ID from the export map table, as follows: the first line of a worksheet is marked by the word WKSHTS, and the last line of the worksheet is marked by the words COMPANY CONFIDENTIAL. We need to capture the line break, so also mark the next line after the COMPANY CONFIDENTIAL line.
Step 5. Continue with steps 2 - 4 until all records have been processed from the export map table.
First I have to create a procedure to insert data from the external table into the export table; the global ID will be blank at that point. It will then be updated with the mapping table's global ID when the EB column's data (i.e. 8P, 2B, etc.) matches the BRANCH = NA, 2B, etc. in the datasheet loaded from the external table. The following is a sample datasheet:
WKSHTS AAAAA BBBBBBBBBBB ELECTRONICS INC. TIME REPORT-DATE PAGE
SORT - BR, SLSREP AEC FIELD SALES REPRESENTATIVE 16:14 09/21/12 1
BRANCH = 2B
EMPLOYEE NAME SALVAAG, GREGG
Days in the Month 28
[code]....
There are 2 pages, and I have to split this long report stored in the WKSHT_LINE column of the export table into 2 records; likewise, if there are 500 pages there should be 500 records. Then find BRANCH = and the value that follows it (NA, 2B, etc.); if it matches the mapping table's EB column data, the mapping table's global ID should be written into the export table's global ID, which is blank.
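Once the branch code has been parsed out of the BRANCH = line and stored on the worksheet rows, the update itself could be a simple correlated UPDATE against the mapping table. A rough sketch, with the table and column names guessed from the description (export_table.branch_code, export_map.eb, global_id):

-- Assumption: branch_code has already been extracted from the 'BRANCH =' line
-- and stored on each worksheet row of the export table.
UPDATE export_table e
SET    e.global_id = (SELECT m.global_id
                      FROM   export_map m
                      WHERE  m.eb = e.branch_code)
WHERE  e.global_id IS NULL
AND    EXISTS (SELECT 1
               FROM   export_map m
               WHERE  m.eb = e.branch_code);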
Trying to automatically insert the newest records from one table into another table. I have a vendor-provided table that is part of my database (running Oracle 9i), so I can't change its underlying structure or their process breaks. However, I can add a trigger to it. What I want to do is this:
When the vendor's software inserts a new row (through their own automated process) I want to insert data from that same new record into another table of my own. (where of course I can re-format it, etc., and make the data my own)
The original vendor table does not have an insertion timestamp field to work from. What is the best way to trigger an insert off the latest inserted record? Replacing all the records in the entire vendor table works, but I only want to insert one record at a time.
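A minimal sketch of such a trigger: because it fires once for each inserted row, no timestamp column is needed; :NEW already holds exactly the row the vendor process just inserted (table and column names below are placeholders):

CREATE OR REPLACE TRIGGER trg_copy_vendor_row
AFTER INSERT ON vendor_table      -- the third-party table; name is a placeholder
FOR EACH ROW
BEGIN
   -- Reformat or derive values as needed while copying into your own table
   INSERT INTO my_table (vendor_id, vendor_value, loaded_at)
   VALUES (:NEW.id, :NEW.value, SYSDATE);
END;
/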
I have tables t1 and t2. I want a procedure that will insert all records from t1 into table t2 and, after a successful insert, truncate table t1.
If there is any problem with the insert into table t2, the truncate must not run.
The truncate should run only after the insert has completed successfully.
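One caveat to design around: TRUNCATE is DDL, so it commits implicitly and has to be issued via dynamic SQL from PL/SQL. A sketch under that assumption, where any failure in the insert raises an error before the truncate is ever reached:

CREATE OR REPLACE PROCEDURE copy_t1_to_t2 IS
BEGIN
   INSERT INTO t2 SELECT * FROM t1;   -- assumes t1 and t2 have matching columns
   COMMIT;
   -- Only reached if the insert (and commit) succeeded
   EXECUTE IMMEDIATE 'TRUNCATE TABLE t1';
EXCEPTION
   WHEN OTHERS THEN
      ROLLBACK;   -- if the insert failed, undo any partial work; t1 is never truncated on error
      RAISE;
END copy_t1_to_t2;
/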
How do I adjust a total (counter) after a record is inserted into a table?
The dilemma I am facing is that we use third-party software for our fundraising operations, so I have no control over what gets done in the background as users process their daily batches into the system. Below is the scenario:
During batch posting, records are inserted into the paytable. On some pledge donations, donors send overpayments when fulfilling a PLEDGE (as is the case with donor 16084), so the system splits the payment during processing and assigns a trantype of 'PP' to the exact pledge amount and 'PPO' (pledge payment overage) to the balance. Additionally, as records are inserted into the paytable, a counter of those paytable records is maintained in the appealtable for that particular appeal code. So in the case above, when batch 20120808 is completed, appealtable.total# will show 103 and total$ will show $2532 ($10, $12, $10, ...; I did not include payment$ since that is not the focus of this issue and will not change).
Management wants the counter in the appealtable to be 2 instead of 3 records, since the two records that were split (same split_transnum) should be recorded as one response, not two.
I have tried writing an after-insert trigger (and hit the dreaded mutating table error) and can't seem to figure out how to update the counter in the appealtable after records are inserted into the paytable. Below is some code I've been working with, but it isn't working.
CREATE OR REPLACE TRIGGER PPO_Payment AFTER INSERT ON paytable FOR EACH ROW
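A common workaround for the mutating-table error is to collect the affected appeal codes in a package variable from the row-level trigger and do the actual counting in a separate statement-level AFTER INSERT trigger, which is allowed to query paytable. A sketch, with the counting rule only illustrative (column names appealcode, split_transnum and total# are taken from the description; rows without a split_transnum would need their own handling):

CREATE OR REPLACE PACKAGE pay_trg_pkg IS
   TYPE t_codes IS TABLE OF paytable.appealcode%TYPE INDEX BY PLS_INTEGER;
   g_codes t_codes;
END pay_trg_pkg;
/

CREATE OR REPLACE TRIGGER ppo_payment_row
AFTER INSERT ON paytable
FOR EACH ROW
BEGIN
   -- Just remember which appeal codes were touched; no query on paytable here
   pay_trg_pkg.g_codes(pay_trg_pkg.g_codes.COUNT + 1) := :NEW.appealcode;
END;
/

CREATE OR REPLACE TRIGGER ppo_payment_stmt
AFTER INSERT ON paytable
BEGIN
   FOR i IN 1 .. pay_trg_pkg.g_codes.COUNT LOOP
      -- Count split payments (same split_transnum) as a single response
      UPDATE appealtable a
      SET    a.total# = (SELECT COUNT(DISTINCT p.split_transnum)
                         FROM   paytable p
                         WHERE  p.appealcode = pay_trg_pkg.g_codes(i))
      WHERE  a.appealcode = pay_trg_pkg.g_codes(i);
   END LOOP;
   pay_trg_pkg.g_codes.DELETE;
END;
/

The COUNT(DISTINCT split_transnum) here is only a placeholder for whatever rule management actually wants; the key point is that the recount happens in the statement-level trigger, where querying paytable no longer raises the mutating table error.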