Server Administration :: High Water Mark Down
Nov 11, 2011: I have deleted a lot of records in a table. Would Oracle be able to insert into the empty blocks freed by those deletes without bringing the high water mark down?
I have one tablespace called U01. This tablespace contains 31 data files. Because of the high water mark I was unable to resize most of the datafiles. Since my database runs an on-air application, they will not give me downtime to move the tables. Is there any way to bring the high water mark down without a downtime window? Almost 700+ GB of space is unused, and I need to reuse it as soon as possible because I am running out of space in the ASM diskgroup.
SQL> SELECT A.TABLESPACE_NAME,
            round(SUM(A.TOTS)/1024/1024) "Tot size MB",
            round(SUM(A.SUMB)/1024/1024) "Tot Free MB",
            round(SUM(A.SUMB)*100/SUM(A.TOTS)) "%FREE",
            100-round(SUM(A.SUMB)*100/SUM(A.TOTS)) "%USED",
            round(SUM(A.LARGEST)/1024/1024) MAX_FREE,
            SUM(A.CHUNKS) CHUNKS_FREE
       FROM (
            SELECT TABLESPACE_NAME, 0 TOTS, SUM(BYTES) SUMB,
[code]...
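For the U01 case, here is a rough sketch, not a tested procedure, of how the unused space could be handed back to the ASM diskgroup without an outage. The block size (8192), the owner, table and file names are assumptions; the real candidates come from the free-space query above and from DBA_EXTENTS.

-- Find how far the extents reach in each U01 datafile (the datafile high water mark):
SELECT file_id,
       CEIL(MAX(block_id + blocks) * 8192 / 1024 / 1024) AS hwm_mb
FROM   dba_extents
WHERE  tablespace_name = 'U01'
GROUP  BY file_id;

-- Compact the segments that sit near the end of a file (online, row-level locks),
-- then shrink the file back toward its high water mark:
ALTER TABLE app_owner.big_table ENABLE ROW MOVEMENT;
ALTER TABLE app_owner.big_table SHRINK SPACE;
ALTER DATABASE DATAFILE '+U01_DG/mydb/datafile/u01.263.123456789' RESIZE 10G;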
I am trying to discern the difference between Shrink and Move and their impact on the High Water Mark of a table.
My understanding is that MOVE in effect rewrites every row of a table (hence why it can deal with row chaining), whereas SHRINK basically moves existing rows 'down' the table into any available free space. This is why MOVE takes a table lock whereas SHRINK takes row locks. What I am trying to discern is: do MOVE and SHRINK affect the high water mark, and do both deallocate space and give it back to the tablespace as free space? I believe MOVE does reduce the HWM and return the freed space to the tablespace. I am not so sure about SHRINK.
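For what it is worth, a hedged summary with illustrative commands (the table and index names t and t_pk are placeholders): both operations can lower the HWM and return space to the tablespace, but they get there differently.

-- MOVE rebuilds the segment in fresh extents while holding a table lock; dependent
-- indexes are left UNUSABLE and must be rebuilt, and the HWM of the new segment
-- reflects only the rows actually moved:
ALTER TABLE t MOVE;
ALTER INDEX t_pk REBUILD;

-- SHRINK (ASSM tablespaces only) relocates rows below the HWM under row locks and
-- then lowers the HWM; the two phases can be split so the brief exclusive lock
-- needed to adjust the HWM is deferred to a quiet moment:
ALTER TABLE t ENABLE ROW MOVEMENT;
ALTER TABLE t SHRINK SPACE COMPACT;   -- step 1: compact rows only
ALTER TABLE t SHRINK SPACE;           -- step 2: lower the HWM and release the space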
How can I add a watermark in Oracle Reports 2.1?
When I was analyzing a high CPU utilization issue, I saw that most of the top PIDs belonged to sessions that were INACTIVE in the database, yet each was using more than 4% CPU. How can they use CPU without doing any work in the database?
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
24013 oracle 2167M 1581M sleep 53 0 0:01:53 20% oracle/1
13260 oracle 2167M 1580M sleep 30 0 0:01:16 14% oracle/1
24399 oracle 2167M 1582M sleep 59 0 0:03:01 4.2% oracle/1
11509 oracle 2178M 1593M sleep 59 0 0:02:10 2.8% oracle/11
[code].....
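One way to see what those processes were last doing is to map the OS PIDs back to database sessions; a quick sketch, with the PID list taken from the prstat output above:

SELECT p.spid, s.sid, s.serial#, s.status, s.sql_id, s.event, s.last_call_et
FROM   v$process p
JOIN   v$session s ON s.paddr = p.addr
WHERE  p.spid IN ('24013', '13260', '24399', '11509');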
Redo generation is very high. How do I find out the reason? The database is a 2-node cluster. I checked the alert log and the log writer trace files; the content is pasted below:
--alert log trace from node1 (node2 also has the same type of messages). The archive destination disk group TXCOM_BACKUP_01 has enough space (80 GB).
Mon Jan 7 00:49:10 2013
Thread 1 advanced to log sequence 448546 (LGWR switch)
Current log# 1 seq# 448546 mem# 0: +TXCOM_DATA_01/txcom/onlinelog/group_1.274.785770579
Current log# 1 seq# 448546 mem# 1: +TXCOM_DATA_01/txcom/onlinelog/group_1.302.802265189
Mon Jan 7 00:49:10 2013
[code]...
In the alert log, I can see the archive destination disk group (TXCOM_BACKUP_01) being DISMOUNTED and then MOUNTED again every time an archive file is generated.
Mon Jan 7 00:49:20 2013
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
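To answer the opening question (why so much redo is being generated), a hedged starting point is to rank current sessions by redo volume; note this only covers work done since each session logged on, and for cluster-wide figures the gv$ views would be needed instead:

SELECT s.sid, s.username, s.program, st.value AS redo_bytes
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'redo size'
ORDER  BY st.value DESC;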
The archive destination parameter is not configured on either node. It should point to the diskgroup name (+TXCOM_BACKUP_01) with a corresponding size limit. Should I configure this?
SQL> show parameter db_recovery
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string
db_recovery_file_dest_size big integer 0
[code]...
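If the intent is to archive into that diskgroup, a hedged example of what the settings could look like (the 80 GB limit simply matches the figure quoted above; adjust to your own standards):

ALTER SYSTEM SET db_recovery_file_dest_size = 80G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest = '+TXCOM_BACKUP_01' SCOPE=BOTH SID='*';
-- or, alternatively, an explicit archive destination:
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+TXCOM_BACKUP_01' SCOPE=BOTH SID='*';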
Should I bring the database to the mount stage and set log_archive_max_processes to a higher count? The current value is 2 (the default).
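As far as I know, log_archive_max_processes is a dynamic parameter, so no mount stage (and no downtime) should be needed to raise it, for example:

ALTER SYSTEM SET log_archive_max_processes = 4 SCOPE=BOTH SID='*';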
This morning my DB server felt very slow to respond. When I checked, I saw the server had a high load but a low process count.
My problem is that only on some Saturdays my Oracle server's load average goes high (more than 300).
-- oracle version 11.2.0.1.0
-- runs on Sun solaris 10
-- Sun fire V 440
-- Sun storEdge 3315 connected to the server.
The same setup works fine with higher volumes without any problem, but on some specific Saturdays (not every Saturday) the load average goes high.
At that time nothing is being processed from the application side, yet CPU utilisation climbs to 95%.
I am attaching the necessary information below.
vmstat 6 10
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 s3 s4 in sy cs us sy id
3 0 0 28660024 6513408 285 212 2044 2 2 0 0 0 17 0 42 830 3469 1380 22 5 74
125 0 0 29027480 6156256 2 6 34 0 0 0 0 0 5 0 10 791 53770 30278 83 17 0
125 0 0 29027680 6157496 29 140 21 1 1 0 0 0 5 0 9 786 52756 30309 83 17 0
116 0 0 29031600 6159896 24 125 0 0 0 0 0 7 3 0 5 819 54081 31069 83 17 0
[code]....
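If the AWR/ASH history covers the problem window (and you are licensed for the Diagnostics Pack), a hedged way to see what actually ran during one of those Saturday spikes is to count ASH samples per SQL statement; the time range below is only a placeholder:

SELECT sql_id, COUNT(*) AS ash_samples
FROM   dba_hist_active_sess_history
WHERE  sample_time BETWEEN TIMESTAMP '2013-01-05 09:00:00'
                       AND TIMESTAMP '2013-01-05 12:00:00'
GROUP  BY sql_id
ORDER  BY ash_samples DESC;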
I have 15 million records in a CSV file that I want to load through SQL*Loader. Is SQL*Loader the right option for loading this volume of data? I loaded 2.5 lakh (250,000) records and it took 4 minutes.
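SQL*Loader generally copes well with this volume if direct-path load is used; a hedged sketch (the file, table and column names are made up):

-- load.ctl
OPTIONS (DIRECT=TRUE, ROWS=100000)
LOAD DATA
INFILE 'records.csv'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(col1, col2, col3)

-- invocation
sqlldr userid=app_user/secret control=load.ctl log=load.log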
Why is CPU utilization always high (99%) on one server of a 2-node RAC, while on the other server it is very low?
I have a RAC with 48 cores on Solaris. I check DB Console when application performance is very slow and everyone complains, and I see that the main wait is CPU, also on the AWR report. However, when I check server CPU I see about 80% idle. So how can I make Oracle use more CPU power instead of waiting for it? I don't think parallelism is an option here because I can't change the application code.
I want to group the following records and mark them with a sequence number.
drop table test;
CREATE TABLE TEST
(
key0 NUMBER,
key1 VARCHAR2(1),
key2 NUMBER)
;
INSERT INTO TEST VALUES (4,'A',1);
INSERT INTO TEST VALUES (4,'A',2);
[code]......
key0 key1 key2
   4    A    1
   4    A    2
   4    A    3
   4    A    4
   3    A    1
   3    A    2
   3    A    3
   1    A    1
   1    A    1
Now I want the records in the following form, where group_no increments each time key2 starts again at 1.
key0 key1 key2 group_no
   4    A    1        1
   4    A    2        1
   4    A    3        1
   4    A    4        1
   3    A    1        2
   3    A    2        2
   3    A    3        2
   1    A    1        3
   1    A    1        4
Can I achieve this using SQL only, and not PL/SQL?
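A hedged sketch of one SQL-only approach: a running count of the rows where key2 restarts at 1. It assumes there is some column (here called seq, purely hypothetical) that preserves the insertion order shown above, since a heap table has no guaranteed row order without one.

SELECT key0, key1, key2,
       SUM(CASE WHEN key2 = 1 THEN 1 ELSE 0 END)
           OVER (ORDER BY seq) AS group_no
FROM   test
ORDER  BY seq;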
I have a database of approximately 150 GB and I want to create a standby server (high availability).
1) If our primary database crashes, the standby server should start instantly and automatically, i.e. no downtime.
2) There should be no data loss.
3) Are Data Guard and Replication similar?
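For requirements 1 and 2 the usual answer is Data Guard with a physical standby and fast-start failover, not replication (which is aimed at distributing data rather than failover). A hedged broker sketch: the names dg_ha, prim and stby are placeholders, both databases need flashback enabled for fast-start failover, and the observer has to run on a third machine:

DGMGRL> CREATE CONFIGURATION dg_ha AS PRIMARY DATABASE IS prim CONNECT IDENTIFIER IS prim;
DGMGRL> ADD DATABASE stby AS CONNECT IDENTIFIER IS stby MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> EDIT DATABASE prim SET PROPERTY LogXptMode='SYNC';
DGMGRL> EDIT DATABASE stby SET PROPERTY LogXptMode='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> START OBSERVER;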
I have a result set with the following structure
Column 1 Column 2 Column 3 Column 4
--------- --------- --------- ---------
A1 B1 10 50
A1 B1 20 50
A1 B2 30 40
But I want to restructure this result like below:
Column 1 Column 2 Column 3 Column 4
--------- --------- --------- ---------
A1 B1 10 50
20
B2 30 40
I am just trying to blank out the values that repeat within a given combination.
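A hedged sketch of doing this in the query itself (table and column names are placeholders; in SQL*Plus alone, BREAK ON col1 ON col2 achieves the same display effect): only the first row of each combination keeps its repeating values.

SELECT CASE WHEN rn1 = 1 THEN c1 END AS col1,
       CASE WHEN rn2 = 1 THEN c2 END AS col2,
       c3 AS col3,
       CASE WHEN rn2 = 1 THEN c4 END AS col4
FROM  (SELECT col1 AS c1, col2 AS c2, col3 AS c3, col4 AS c4,
              ROW_NUMBER() OVER (PARTITION BY col1       ORDER BY col2, col3) AS rn1,
              ROW_NUMBER() OVER (PARTITION BY col1, col2 ORDER BY col3)       AS rn2
       FROM   result_set)
ORDER  BY c1, c2, c3;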
I have a table with ZIP codes and their plus-four values. For example, ZIP code 10000 has the plus-four values 001, 002, 003 and 008, 009, 010. The issue is exactly that: a ZIP code can have a run of sequential plus-four values, then skip several potential values, and then start again. I would like to assign a low and a high plus-four value to each ZIP code, keeping in mind that the plus-four values are not always sequential. So it would look similar to this:
zip plus4 low plus4 high
10000 001 003
10000 008 010
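A hedged sketch of the classic gaps-and-islands approach, assuming a table ZIP_PLUS4(zip, plus4) with plus4 stored as a zero-padded string: consecutive plus-four values end up with the same grouping key, so each island collapses to one low/high row.

SELECT zip,
       MIN(plus4) AS plus4_low,
       MAX(plus4) AS plus4_high
FROM  (SELECT zip, plus4,
              TO_NUMBER(plus4)
              - ROW_NUMBER() OVER (PARTITION BY zip
                                   ORDER BY TO_NUMBER(plus4)) AS grp
       FROM   zip_plus4)
GROUP  BY zip, grp
ORDER  BY zip, plus4_low;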
My requirement is that, while sending a data file from Oracle to the mainframe, the first 3 bytes of the header row should contain low-values and the trailer should contain high-values.
How do I produce mainframe low-values and high-values from Oracle?
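On the mainframe, LOW-VALUES and HIGH-VALUES are the bytes X'00' and X'FF'. A hedged sketch, assuming the file is transferred in binary mode so no ASCII-to-EBCDIC translation rewrites those bytes; the prefixes would then be concatenated onto the header and trailer records when the file is written (for example with UTL_FILE):

-- three low-value bytes and three high-value bytes
SELECT CHR(0)   || CHR(0)   || CHR(0)   AS low_values_prefix,
       CHR(255) || CHR(255) || CHR(255) AS high_values_prefix
FROM   dual;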
I have a priority column (possible values 1 or 0) in a table.
I need to fetch 70% high-priority (1) and 30% low-priority (0) rows, and the maximum I can fetch in one select is 50 records.
Eg 1: If I have 60 in total, of which 20 are high and 40 are low, then 70% of 20 = 14 and the remainder should be taken from low, i.e. 36 from low, so the total will be 50 transactions.
Eg 2: If I have 60 in total, of which 40 are high and 20 are low, then 70% of 40 = 28 and the remainder should be taken from low, i.e. 22 from low.
Eg 3: If there are no high rows at all, the total should be picked from low, and vice versa.
I have the query below, but it has a problem when there are no low-priority rows.
SELECT ID,PRI FROM temp tbl WHERE pri = '1' AND ROWNUM < ((70/100)*50)+1
UNION ALL SELECT * FROM temp WHERE pri = '0'
AND ROWNUM < 50-(SELECT COUNT(*) FROM temp WHERE pri = '1' AND ROWNUM < ((70/100)*50)+1)
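A hedged alternative that reads the 70% as 70% of the 50-row cap (35 high, 15 low) and fills any shortfall on either side from the other priority, so it degrades gracefully when one priority is missing; the ORDER BY id inside ROW_NUMBER is an arbitrary placeholder for whatever ordering you actually want:

SELECT id, pri
FROM  (SELECT id, pri
       FROM  (SELECT id, pri,
                     ROW_NUMBER() OVER (PARTITION BY pri ORDER BY id) AS rn
              FROM   temp)
       ORDER  BY CASE WHEN (pri = '1' AND rn <= 35)
                       OR  (pri = '0' AND rn <= 15) THEN 0
                      ELSE 1 END,
                 rn)
WHERE  ROWNUM <= 50;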
I have a two-node (prod-db1, prod-db2) clustered database, 11gR2 on Windows 2008 R2. Everything is working fine with this setup.
My question: is there a way to make Enterprise Manager Database Control run and be available on both nodes independently? What I see is that even on node 2 (prod-db2) the EM DB Control URL is https://prod-db1:1158/em, which means the console is running on node 1 (prod-db1) only.
So how do I make EM DB Control also run separately on prod-db2? The idea is to make DB Control highly available in case the prod-db1 machine goes down.
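As far as I recall, DB Control for RAC can be reconfigured with emca to nominate which node(s) run the console; a hedged example (the node name and SID list are placeholders, and the exact flags are worth checking against emca's help for your patch level):

emca -reconfig dbcontrol -cluster -EM_NODE prod-db2 -EM_SID_LIST orcl1,orcl2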
I am in the very early planning stages of a project the goal of which is to identify separate organizations which may in fact be the same organization.
Our first implementation of this task was a process designed to look for a few thousand organizations in a pool of a few hundred thousand organizations. To accomplish this we made heavy use of Oracle Text indexes as well as a custom index type we created which utilized n-grams. This approach worked quite well for on-demand editing of organizations, where a user might log in and say, in addition to what we already know about organization A, we also know x, y and z; does that change anything? It also worked acceptably well for the bulk processing we did on our "known" information once a week, running for a couple of hours on the weekend.
We have now been tasked with reworking this initial implementation, only now we want to look at a set of several million organizations for potential matches that exist within the set. As in our initial implementation, we will be breaking what we know about organizations into groupings so we aren't comparing a phone number to an email address, and normalizing the data as much as we can so we ignore things like case and punctuation. Even after all this, we are still talking about looking for similar values in a group which might number in the tens of millions (some types of data will have more than one value per organization).
My initial thought on the problem is to use n-grams, though not in the way we did in the past. The basic idea is that we break each search value up into all the substrings it is made of and look for other values which have a high number of those substrings in common.
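As a very rough illustration of that idea, and only a sketch: the hypothetical table ORG_VALUES(org_id, norm_value) holds one normalized value per row, 100 is an assumed maximum value length, and candidate pairs would then be those sharing many trigrams.

SELECT v.org_id,
       SUBSTR(v.norm_value, n.pos, 3) AS trigram
FROM   org_values v,
       (SELECT LEVEL AS pos FROM dual CONNECT BY LEVEL <= 100) n
WHERE  n.pos <= LENGTH(v.norm_value) - 2;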
I am not sure SQL & PL/SQL is the best place for this question, but I could not think of a better one.
I am facing a performance issue in which the query cost is very low compared to the CPU cost, and as a result the CPU graph is always high. I am also attaching the gv$sql and gv$sql_plan data for this query.
This is the query:
SELECT PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID,
       SUM(PREVIOUSHOURS) AS PREVIOUSHOURS,
       MIN(STARTDATE) AS STARTDATE,
       MAX(STARTDATE) AS ENDDATE
FROM   PROJECTTIMELOGSSTAGE PTLS, PROJECTTIMESHEETITEM PTSI, TIMESHEET TS
WHERE  PTLS.PROJECTID = :B2
AND    TS.TIMESHEETID = PTSI.TIMESHEETID
AND    TS.USERID = :B1
AND    PTSI.TIMESHEETID = PTLS.TIMESHEETID
AND    PTSI.ITEMTYPE = PTLS.ITEMTYPE
AND    PTSI.ITEMID = PTLS.ITEMID
AND    (PTSI.ISPWFITEM = 'N' OR PTSI.ISPWFITEM IS NULL)
AND    PTLS.ITEMTYPE NOT IN ('OtherTsk','NewTsk','Loc','Glb')
AND    (PTLS.ITEMTYPE, PTLS.ITEMID) IN (SELECT ITEMTYPE, ITEMID
                                        FROM   PROJECTTIMELOGSSTAGE PTLS1
                                        WHERE  PTLS1.PROJECTID = :B2
                                        AND    PTLS1.TIMESHEETID = :B3)
GROUP  BY PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID
Could you send me a query to find the sessions using high CPU in Oracle 10g?
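A hedged example of such a query: it ranks connected sessions by accumulated CPU (the statistic is in centiseconds and counts from logon, so long-lived sessions dominate):

SELECT s.sid, s.serial#, s.username, s.program,
       st.value AS cpu_centiseconds
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'CPU used by this session'
ORDER  BY st.value DESC;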
I plan to use the "CPU Time Per User Call" metric.
The thresholds are:
Warning: 8000
Critical: 10000
But this alarm is raised every minute.
I think the threshold is too low, but what is the correct value for identifying performance problems?
I am trying to compare a value against the low_pair/high_pair ranges: if it falls within the range then source_conn_id should remain the same, otherwise it should be updated to NULL, which I have written in the ELSE block. How can I implement the IF block, and what should go in it so that source_conn_id remains the same?
SQL> CREATE OR REPLACE PROCEDURE fp_complements_src(p_id varchar2,ftr_con_id varchar2)
2 AS
3 BEGIN
4 FOR i IN(SELECT SOURCE_CONN_ID,LOW_PAIR,HIGH_PAIR FROM COMP_TEMP1 WHERE SOURCE_CONN_ID=ftr_con_id)
[Code]....
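Since the full procedure is truncated above, this is only a hedged guess at the loop body. It assumes the value tested against each low_pair/high_pair range is the p_id parameter (purely an assumption); in the IF branch nothing needs to be done, because keeping source_conn_id simply means not updating the row:

FOR i IN (SELECT source_conn_id, low_pair, high_pair
          FROM   comp_temp1
          WHERE  source_conn_id = ftr_con_id)
LOOP
  IF p_id BETWEEN i.low_pair AND i.high_pair THEN
    NULL;   -- in range: leave source_conn_id as it is (no update needed)
  ELSE
    UPDATE comp_temp1
    SET    source_conn_id = NULL
    WHERE  source_conn_id = ftr_con_id
    AND    low_pair  = i.low_pair
    AND    high_pair = i.high_pair;
  END IF;
END LOOP;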
We had a massive jump in cluster waits, specifically gc buffer busy acquire, during an RMAN backup. We traced the waits to a few hot blocks in a table that may well need rebuilding in terms of ITLs and PCTFREE (although I thought ASSM would manage PCTFREE and PCTUSED).
What happens during an RMAN backup that may cause huge cluster waits on hot tables? Is there some crazy redo issue going on or maybe flashback?
Either way the spike is there and we can pinpoint the activity on the database, but we just don't understand why RMAN would cause the issue. We have also just found that LARGE_POOL_SIZE has not been set; in fact there is no SGA_TARGET either. Could this have an effect on RMAN and cluster waits?
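If you decide to give RMAN its own buffer memory, a hedged example of setting the large pool (the 256M figure is arbitrary); note that RMAN only allocates its disk/tape buffers from the large pool when I/O slaves are configured, otherwise they come from the PGA:

ALTER SYSTEM SET large_pool_size = 256M SCOPE=BOTH SID='*';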
I am facing a very strange issue with one of our Oracle queries. The query usually completes in a minute or two. The execution plan is good and it works perfectly most of the time, as expected. The query fetches about 1000-2000 records each day.
But on a given day, the query takes about 30-40 minutes to complete. Upon checking the load on the DB server, there are no other processes running which could impact the run time of this query. Moreover, the record counts fetched are almost the same as on other days. There is no observable pattern to when this happens; it occurs once in a while.
The configuration is Oracle 10g RAC on Linux.
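One hedged thing to check (Diagnostics Pack required) is whether the statement flips between execution plans on the slow days; the sql_id below is a placeholder:

SELECT sn.begin_interval_time,
       st.plan_hash_value,
       st.executions_delta,
       ROUND(st.elapsed_time_delta / 1e6) AS elapsed_seconds
FROM   dba_hist_sqlstat  st
JOIN   dba_hist_snapshot sn ON sn.snap_id = st.snap_id
                           AND sn.dbid = st.dbid
                           AND sn.instance_number = st.instance_number
WHERE  st.sql_id = 'example_sqlid'
ORDER  BY sn.begin_interval_time;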
We have a large table of equally sized data blobs in our Oracle system and we'd like to select the whole table into memory at once. The corresponding tablespace is stored on fast SSD disks and is managed by ASM. However, the achievable select speed (reading data into memory) from Oracle is not satisfactory. When we store the data on the SSD disk using custom methods (e.g. in SQLite DB files) and load them into memory with multithreading (8 threads), the speed is more than 15 times that of Oracle.
Is there any way to optimize Oracle and ASM to speed up full-table selects in our case? We tried FULL TABLE SCAN and PARALLEL hints, and DB_FILE_MULTIBLOCK_READ_COUNT too, but with no success.
FYI: our data blobs are about 4.2 KB each and DB_BLOCK_SIZE is 8 KB. The ASM segment is configured for AUTOALLOCATE. The table is partitioned by HASH. Our Oracle system is not RAC.
Is there any way to tune the following query, which is using a lot of CPU?
select description, time_stamp, user_id
from bhi_tracking
where description like 'Multilateral:%'
The explain plan for this query is:
---------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 178K| 6609K| 129K|
| 1 | TABLE ACCESS FULL| BHI_TRACKING | 178K| 6609K| 129K|
----------------------------------------------------------------
BHI_TRACKING is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then re-insert the rows into it, to enhance performance?
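Before reorganizing the table, it may be worth checking whether an index helps: since the LIKE pattern has a literal prefix, a plain index on DESCRIPTION can support a range scan instead of the full table scan shown in the plan. A hedged sketch (the index name is made up, and with roughly 178K matching rows out of millions the optimizer may still prefer the full scan):

CREATE INDEX bhi_tracking_desc_ix ON bhi_tracking (description);

SELECT description, time_stamp, user_id
FROM   bhi_tracking
WHERE  description LIKE 'Multilateral:%';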
How can I reduce the CPU cost of a query at the query level?
I am using Oracle 11g on Windows 2003 Server, and in Task Manager tnslsnr.exe is consuming approximately 1 GB of RAM. After restarting the listener it dropped to some 20-30 MB, but again after a few days it rises and consumes a lot of memory.
In my old 9.2 database (on AIX) I see high paging space usage for the background processes.
#> svmon -Pg -t 1 |grep Pid ; svmon -Pg -t 10 |grep "N"
Pid Command Inuse Pin Pgsp Virtual 64-bit Mthrd 16MB
2285578 oracle 304609 81552 500909 605395 Y N N
3350676 oracle 304588 81552 500643 605149 Y N N
1794254 oracle 304592 81552 500634 605126 Y N N
[code]....