SQL & PL/SQL :: Sequence Max Value Vs Database Performance
Dec 4, 2012: I'm a little confused about the MAXVALUE of a SEQUENCE. By default it takes the max value as 99999999999999999. Will it really affect our database?
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason why we'd have to re-install a procedure to correct a performance problem?
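One thing worth checking (a diagnostic sketch, not a confirmed diagnosis; MY_PROC is a placeholder name) is whether the two paths leave the object with different PL/SQL compiler settings: ALTER ... COMPILE takes the current session's compiler parameters unless REUSE SETTINGS is given, while the source-controlled install script may set its own.

-- compare compiler settings for the slow and the reinstalled versions
SELECT name, plsql_optimize_level, plsql_code_type
FROM all_plsql_object_settings
WHERE name = 'MY_PROC';

-- recompile in place, keeping whatever settings the object already had
ALTER PROCEDURE my_proc COMPILE REUSE SETTINGS;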
I want to restart a 10g database on ASM. What are the startup and shutdown sequences? I would like to shut down everything, including the database, ASM and the cluster, and start it all up again.
I have read about this in various places but am not sure of the sequence. For a database DB with instances DB01 and DB02, and ASM instances ASM01 and ASM02 on node1 and node2, I understand the sequence would be as follows:
1)
Login as root on node1
cd $CRS_HOME/bin
./crsctl start crs
2)
Login as root on node2
cd $CRS_HOME/bin
./crsctl start crs
3)
login as oracle on node1
$ORACLE_HOME/bin/srvctl start nodeapps -n node1
4)
login as oracle on node2
$ORACLE_HOME/bin/srvctl start nodeapps -n node2
5)
login as oracle on node1
start ASM instance on node1
$ORACLE_HOME/bin/srvctl start asm -n node1 -i ASM01
6)
login as oracle on node2
start ASM instance on node2
$ORACLE_HOME/bin/srvctl start asm -n node2 -i ASM02
7)
Start oracle database (from any node)
$ORACLE_HOME/bin/srvctl start database -d DB
A. Can you confirm that the above sequence is correct?
B. If this sequence is correct, is it the same for 10g and 11g?
C. Is the shutdown sequence the exact reverse of the above?
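For what it's worth, here is a minimal sketch of the reverse (shutdown) order using the same names as above; treat it as an assumption to confirm against the documentation rather than a verified runbook:

Stop the database (as oracle, from any node):
$ORACLE_HOME/bin/srvctl stop database -d DB
Stop the ASM instances (as oracle):
$ORACLE_HOME/bin/srvctl stop asm -n node1 -i ASM01
$ORACLE_HOME/bin/srvctl stop asm -n node2 -i ASM02
Stop nodeapps (as oracle):
$ORACLE_HOME/bin/srvctl stop nodeapps -n node1
$ORACLE_HOME/bin/srvctl stop nodeapps -n node2
Stop CRS (as root, on each node):
$CRS_HOME/bin/crsctl stop crs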
CREATE SEQUENCE hyd1_seq
START WITH 2100000
INCREMENT BY 1
NOCACHE
NOCYCLE;
This is how I created the sequence in a 10g database.
IF :SHIP_MSTR.PLACE_FROM = 'hyd1' THEN
  SELECT hyd1_seq.NEXTVAL
  INTO :SHIP_MSTR.BILL_ID
  FROM dual;
END IF;
This is how I am calling the sequence in the form. My problem is that once a sequence value has been generated in the form, the next call generates the next value even if I did not save the form. What I want is that, if I do not save the form, the same sequence number comes up again in BILL_ID.
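The behaviour itself is expected: sequences are non-transactional, so NEXTVAL is consumed even if the transaction is rolled back, and a fetched number cannot be handed back. A common workaround (a sketch, not from the original post) is to defer the fetch to the block's PRE-INSERT trigger, so a number is only drawn when the record is actually being saved:

-- Forms PRE-INSERT trigger on the SHIP_MSTR block (sketch; reuses hyd1_seq from above)
BEGIN
  IF :SHIP_MSTR.PLACE_FROM = 'hyd1' THEN
    SELECT hyd1_seq.NEXTVAL
    INTO :SHIP_MSTR.BILL_ID
    FROM dual;
  END IF;
END;

Gaps can still occur if the insert itself fails, but values are no longer burned just by navigating the form.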
We need to take care of all the components in a RAC Database 11g Release 2. What is the sequence for stopping a RAC database: the database? ASM? the listeners? the cluster services? What are the commands, the sequence to start/stop, and the way to check the status of each component?
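As a hedged sketch for 11g Release 2 specifically (the database and node names are placeholders): ASM and the listeners are managed as Clusterware resources in 11gR2, so the stop order usually reduces to the database first and then the stack:

srvctl stop database -d mydb      (as oracle)
srvctl stop listener -n node1     (optional; crsctl would stop it anyway)
crsctl stop crs                   (as root, per node; stops ASM, listeners, nodeapps)

Status checks:
srvctl status database -d mydb
srvctl status asm
crsctl check crs
crsctl stat res -t

Startup is the reverse: crsctl start crs on each node (which brings up ASM and the listeners), then srvctl start database -d mydb.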
We have a Data Guard standby named CRMSTDY for a 3-node RAC cluster. It has been observed that log sequence number 1 has been generated on the standby database for the last 4 days. There is no further information in the alert_standby.log file, and the same file is visible when we check with the ASMCMD utility. Check the attached print screen and suggest. The system information is below.
DB : Release 10.2.0.4.0
OS : Linux Enterprise release 5.2 (Carthage)
A SQL query is taking much longer than usual and not completing even when left for hours! The query joins a table with a quite complex view.
The same query in a test database completes in less than 2 mins.
I would like to export the SQL plan from the test database to the prod database.
How does one export/import the execution plan of a particular SQL statement in version 10.2.0.4?
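In 10.2 there are no SQL plan baselines yet (DBMS_SPM arrives in 11g), but one workable route is to pin the good plan with a SQL profile on test and ship it through a staging table. A minimal sketch, assuming a profile named my_profile already exists on test; SCOTT is just a placeholder schema:

-- on the test database: create and fill a staging table
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'PROF_STG', schema_name => 'SCOTT');
  DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(profile_name => 'my_profile', staging_table_name => 'PROF_STG', staging_schema_owner => 'SCOTT');
END;
/
-- move SCOTT.PROF_STG to prod with exp/imp or Data Pump, then on prod:
BEGIN
  DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(replace => TRUE, staging_table_name => 'PROF_STG', staging_schema_owner => 'SCOTT');
END;
/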
How can one use the same Oracle sequence name in an Oracle Database schema as well as in a TimesTen schema?
We have just migrated our database from 10g to 11g R2. We are using a PL/SQL block to gather stats (analyze) which executed successfully on 10g but gives the error below on 11g:
PL/SQL:
BEGIN
/* Delete statistics */
DBMS_STATS.DELETE_TABLE_STATS(ownname => 'owner_name', tabname => 'table_name');
/* Gather statistics */
DBMS_STATS.GATHER_TABLE_STATS(ownname => 'owner_name', tabname => 'table_name', method_opt => 'FOR ALL INDEXED COLUMNS', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, degree => DBMS_STATS.DEFAULT_DEGREE, cascade => TRUE, no_invalidate => TRUE);
END;
/
Error report:
ORA-12801: error signaled in parallel query server P057, instance <instance_name> (2)
ORA-12853: insufficient memory for PX buffers: current 958640K, max needed 11666304K
ORA-04031: unable to allocate 65560 bytes of shared memory ("large pool","unknown object","large pool","PX msg pool")
ORA-06512: at "SYS.DBMS_STATS", line 23828
ORA-06512: at "SYS.DBMS_STATS", line 23879
ORA-06512: at line 5
12801. 00000 - "error signaled in parallel query server %s"
*Cause: A parallel query server reached an exception condition.
*Action: Check the following error message for the cause, and consult your error manual for the appropriate action.
*Comment: This error can be turned off with event 10397, in which case the server's actual error is signaled instead.
This block works when the degree is hardcoded as 8 in 11g. The block fails only for the 2 partitioned tables which have 2 million records each. The only parameter difference between the two databases is CPU_COUNT:
10g CPU_COUNT = 16
11g CPU_COUNT = 48 (RAC)
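That difference matters because DBMS_STATS.DEFAULT_DEGREE lets the degree of parallelism scale with CPU_COUNT, so the 48-CPU RAC box launches far more PX slaves than the 10g box did and exhausts the PX msg pool in the large pool. A sketch of the cap that was observed to work:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'owner_name',
    tabname          => 'table_name',
    method_opt       => 'FOR ALL INDEXED COLUMNS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree           => 8,   -- fixed cap instead of DBMS_STATS.DEFAULT_DEGREE
    cascade          => TRUE,
    no_invalidate    => TRUE);
END;
/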
I used v$locked_object and v$lock queries to get the output, but I have only one year of experience with Oracle. How do I analyze the output of the lock queries? Which parameters should be analyzed in an AWR report? And how does one do a proper performance checkup of an Oracle database and analyze it?
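For the lock queries, a common starting point (a sketch; the column choice is illustrative) is to join v$locked_object to v$session and dba_objects so each lock shows who holds it and on which object:

SELECT s.sid,
       s.serial#,
       s.username,
       o.owner,
       o.object_name,
       lo.locked_mode   -- e.g. 3 = row exclusive, 6 = exclusive
FROM   v$locked_object lo
       JOIN v$session   s ON s.sid = lo.session_id
       JOIN dba_objects o ON o.object_id = lo.object_id;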
We had an issue with our 10.2.0.4.0 two-node RAC database where running stats collection caused database performance to go down, and in AWR I noticed the following:
Top 5 Timed Events
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
gc buffer busy 95,077 73,786 776 76.0 Cluster
cursor: pin S wait on X 3,524,808 54,467 15 56.1 Concurrency
library cache lock 30,223 13,660 452 14.1 Concurrency
gc cr request 2,876 3,594 1,250 3.7 Cluster
library cache pin 1,740 1,800 1,035 1.9 Concurrency
When I look under the Library Cache Activity section of the AWR report, I see the following:
Namespace Get Requests Pct Miss Pin Requests Pct Miss Reloads Invalidations
BODY 59 3.39 3,056 0.13 1 0
CLUSTER 70 0.00 73 0.00 0 0
INDEX 178 0.00 925 0.00 0 0
SQL AREA 423,064 50.25 2,465,209 2.37 182 119,671
TABLE/PROCEDURE 1,848 21.27 22,638 0.62 70 0
TRIGGER 14 0.00 1,801 0.06 1 0
Does this mean we have a memory crunch for this instance on node 1? We have sga_target=26GB and sga_max_size=32GB set for this instance.
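Given the cursor: pin S wait on X and library cache waits alongside a 50% SQL AREA get miss and 119,671 invalidations, the shared pool is the usual suspect rather than the buffer cache; a quick, hedged first check:

-- how much of each pool is currently free
SELECT pool, name, ROUND(bytes/1024/1024) mb
FROM   v$sgastat
WHERE  name = 'free memory';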
Does anyone have insight into the overheads of mutually authenticated SSL for database connections? This is over a fast local network, to a RAC cluster, with a DB firewall in front. There's always a large element of "it depends".
The information I'm interested in includes the latency for initial session setup and for subsequent data transfer, the increase in network packet size, and the increase in CPU cost on the database server. I guess there are implications for session memory usage as well.
We've been battling with very slow performance for some time. Herewith a detailed description of the problem:
Solaris-11/ZFS/Oracle problem
We purchased Oracle T4-2 servers, and are experiencing some weird performance problems.
Hardware:
T4-2
2 x 600GB HDD per server
128GB memory per server
2 x dual port QLE2562 FO HBAs
IBM V7000 StorWyse data array
2 x CISCO MDS 9148 fibre optic fabric switches
Software:
Solaris 11.1
MPXIO
Solaris 10 branded local zones
ORACLE 10g Enterprise edition
Project for the oracle user:
user.oracle:100::oracle::process.max-file-descriptor=(basic,8192,deny);process.max-stack-size=(priv,32768,deny);project.max-shm-memory=(priv,21474836480,deny)
We received the first server and wanted to migrate our APPB application system and Oracle 10g Standard Edition database from our SUN T5240 to the T4-2.
T4-2 setup, Disk 0:
Global zone: Solaris 11.1 ( ZFS - whole disk for the root pool)
Local zones: On the Solaris 11 environment we built two branded Solaris 10 zones using an Oracle template provided on the Oracle website - solaris-10u11-sparc.bin.
Our complete database resides on the IBM data array, in UFS LUNs. The UFS LUNs were mounted onto ZFS mount-points in the root partition (/), and then LOFS-mounted into the zones.
We started with the Solaris 11.1 environment.
1) After a day or two the performance of the database starts deteriorating rapidly. We then stop the database and reboot the machine. After the reboot the performance level is restored.
2) Another huge deterioration in performance happens when we unmount the V7000 LUNs, reboot to the alternate Solaris, and re-mount the LUNs.
3) What further compounds the issue is that when we start another database in the second zone, we see another huge performance degradation.
4) We have logged a call with ORACLE. They requested us to gather information, which they analysed. They did not find anything wrong with the way ORACLE was installed or with the setup of the instances.
On Disk 1 we did a Solaris 10 8/11 (Update 10) installation, which we patched with the April 2013 CPU patchset. In this Solaris 10 global zone we built two native Solaris 10 local zones. The Oracle 10g databases were built in the zones (same configuration settings), not in the global zone, onto UFS LUNs. The database in its entirety lives on the IBM V7000 data array. This works fine.
We then received our next T4-2 server. We loaded it again with Solaris 11.1 and upgraded to ORACLE 12c SE Release 12.1.0.1.0 - 64bit, seeing that Oracle 10g is not certified on Solaris 11. To keep things simple, we built two small databases in the ZFS root pool. The complete system now resides on one disk: no UFS LUNs to consider, no Fibre Optic fabric, no CISCO switches, no IBM data array. BUT we get the same problem. The system will run for some time and then slow down drastically. Starting the second database slows the system down abnormally.
I have one big database which I need to migrate to Oracle, because it rocks with big databases compared to other databases. I wrote transfer software and it all works great, except for one more thing: during the process I found that Oracle keeps filling the redo log and undo. My question is how to migrate (or can I migrate) the database to Oracle without filling undo (deactivating this process), and after that put the database back to normal operation, because I just need to transfer the data as-is, and from that point Oracle takes over...
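Undo generation cannot be switched off outright, but a bulk load can be made to generate minimal undo and redo; a hedged sketch of the usual pattern (the table names are placeholders):

ALTER TABLE target_tab NOLOGGING;
-- direct-path insert: loads above the high-water mark, minimal undo for the table data
INSERT /*+ APPEND */ INTO target_tab
SELECT * FROM source_tab;
COMMIT;
-- restore normal behaviour, and take a backup: NOLOGGING changes
-- cannot be recovered from the archived redo
ALTER TABLE target_tab LOGGING;

SQL*Loader with direct=true achieves the same effect when the source is a flat file.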
I am using Oracle version 11.2.0.3. We recently migrated to 11g. After a month of smooth and comparatively better performance, we suddenly started facing performance issues with our database, and it crashed twice within 5 days, even though we didn't push any new code to the database recently, at least not after the 11g migration. After getting feedback from the Oracle Corporation guys, they pointed to the default database stats gathering job, which was eating most of the CPU because of its default degree: it was running 160 parallel threads, causing resource starvation. So we reduced the degree of the stats gathering job to 8.
But the database crashed again two days back, and rebooted within 3 minutes back to normal, even after the default degree was changed to 8. Could this be happening due to a specific application SQL or something else?
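For reference, the knob described here is presumably the global DBMS_STATS preference in 11g, which the automatic stats job honours; a hedged sketch of setting and verifying it:

EXEC DBMS_STATS.SET_GLOBAL_PREFS('DEGREE', '8');
SELECT DBMS_STATS.GET_PREFS('DEGREE') FROM dual;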
I am doing an import and export of a database. Before loading the data I drop all the tables and then import. Is there any issue if we drop tables and import data frequently?
I understand that when data is read from disk, I/O is done, and when computations are done, the CPU is used. Then where does the following equation fit?
DB Time = sum of database CPU time + waits
Is I/O considered a part of CPU time?
Does this equation change with a SAN or OS caching?
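To make the relationship concrete, a toy example with invented numbers: if over one hour the sessions burned 1,200 s of CPU and spent 1,800 s in non-idle waits (say 1,500 s of that in I/O waits such as db file sequential read), then DB Time = 1,200 + 1,800 = 3,000 s. The I/O time sits entirely in the wait component, never in CPU time; a SAN or OS caching does not change the equation itself, it only shrinks the individual I/O wait times.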
We are using Oracle 9i on an AIX server. While customers were accessing the database, the power was accidentally shut down. We restarted the server and the Oracle database, and everything resumed successfully.
However, while doing "Payments by the customer" it now takes a long time to insert even a single payment record into the database. The database is live and our customers are very frustrated.
SQL> show parameter sga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1152M
sga_target big integer 0
[code]....
In the scenario above the database is not using ASMM (sga_target is 0) and uses an spfile. If I want to increase the db_cache_size parameter, do I need to bounce the instance?
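For reference, a hedged sketch of the dynamic change (the value is illustrative): db_cache_size can be resized online as long as the total SGA stays under sga_max_size, so no bounce should be needed within that limit:

-- grows the buffer cache online; errors out if it would push the SGA past sga_max_size
ALTER SYSTEM SET db_cache_size = 512M SCOPE=BOTH;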
I am facing a weird situation wherein the explain plan of the same SQL in SIT and PROD is different. In fact, the plan is very costly in Prod, even though the DB versions of SIT and PROD are the same.
Below are the SQL and the corresponding explain plans in Prod and SIT respectively.
Query:
SELECT seq,CCN,ProcessorPart,root_item,comp_path,Item,comp_item,comp_item_type,
lag(comp_item_type,1,'PART') over(PARTITION BY seq ORDER BY lvl)Nxt_comp_item_type,lvl,bom_qty,
ROUND(CASE min(abs(bom_qty)) OVER (PARTITION BY seq ORDER BY lvl)
WHEN 0 THEN 0 ELSE 1 END * EXP (SUM (LN (nullif(abs(bom_qty),0))) OVER (PARTITION BY seq ORDER BY lvl))) Ulti_qty,
'AMER'
[code]...
The tables referred to in the above query are small, containing around 10k records each. They are partitioned on Region and not indexed.
Explain Plan in Prod: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS165173613539322883634804
SORT UNIQUE236360
UNION-ALL
PARTITION LIST SINGLE117240
[code]...
Explain Plan in SIT: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS3211689
SORT UNIQUE347240
UNION-ALL
PARTITION LIST SINGLE172120
[code]...
I am not able to attribute why there is such a huge change in cost between SIT and Prod. Apparently the job now runs for 3-5 hours, where it used to complete within 20 minutes in SIT.
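A first differential check worth running (a sketch; the owner and table names are placeholders) is whether the optimizer statistics actually match between the two databases, since identical versions with different stats will happily produce different plans:

-- run in both SIT and Prod and compare the output
SELECT table_name, num_rows, blocks, last_analyzed
FROM   dba_tables
WHERE  owner = 'APP_OWNER' AND table_name IN ('T1', 'T2');

SELECT table_name, partition_name, num_rows, last_analyzed
FROM   dba_tab_partitions
WHERE  table_owner = 'APP_OWNER' AND table_name IN ('T1', 'T2');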
I have installed Oracle 10g on one system and Oracle Developer on another machine, meaning I have different machines for the DB server and the application server. It all works excellently inside the company premises, but I also want to access my Oracle DB and application server from outside the company. How do I access the application (Forms and Reports) remotely from outside the company, against the same DB?
We have a database with multiple fields containing NULL values, and many queries use an NVL function, which suppresses index usage exactly where an index is really essential (selecting very few rows from massive data). Instead of creating a lot of function-based (NVL) indexes or composite indexes on (nullable_column, constant), I am thinking of setting a default value for most of the fields. In that regard I have some questions:
Which approach is better: setting a default value for the fields, or updating the fields with a default value and modifying the inserts to take care of future data? Though altering the table and modifying the column to set a default value looks better, since it takes care of data inserted in the future, it will invalidate the subroutines. I understand that in 10g both statements generate a lot of undo (though in 11g I heard things changed for setting the default value of a column). Also, how do we take care of all the queries using the criteria 'WHERE column1 IS NULL' or 'WHERE column1 IS NOT NULL'? It would be a really difficult task to change each and every occurrence of such a condition manually, even using user_source.
Finally, for numeric values, say an ID field which runs from 1 onwards (2, 3, 4, etc.), we can set 0 as a sensible default so that performance is not affected.
Is there a similar precaution for VARCHAR2 fields, purely from a performance point of view?
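For comparison, the function-based-index alternative mentioned above looks like this (a sketch with hypothetical names); it makes the NULL rows indexable without touching the data or the application's inserts:

-- index the exact NVL expression the queries already use
CREATE INDEX t1_col1_fbi ON t1 (NVL(col1, 0));
-- this predicate can now use the index
SELECT * FROM t1 WHERE NVL(col1, 0) = 0;

-- composite-with-constant variant: entries exist even when col1 is NULL,
-- because an index entry is made whenever at least one indexed column is non-null
CREATE INDEX t1_col1_ix ON t1 (col1, 0);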
I have a question regarding memory parameters in Oracle database 9.2.0.8, especially sga_max_size and db_cache_size. The database server has 32G of RAM. The OS kernel parameter shmmax on the server is set to 16G. Is it reasonable to set sga_max_size to the same value, and db_cache_size to 80% of that size?
By default the DBMS_STATS package runs once every 24 hours to collect statistics for database objects, and Oracle collects new statistics when enough of the data (about 10%) has changed.
My question is: how do I check whether a table has changed by 10% in the database?
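With table monitoring enabled, the change counts accumulate in DBA_TAB_MODIFICATIONS; a hedged sketch comparing them against num_rows (flushing first, since the view is updated lazily; APP_OWNER is a placeholder):

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT m.table_owner, m.table_name,
       m.inserts + m.updates + m.deletes AS changes,
       t.num_rows,
       ROUND(100 * (m.inserts + m.updates + m.deletes) / NULLIF(t.num_rows, 0), 1) AS pct_changed
FROM   dba_tab_modifications m
       JOIN dba_tables t ON t.owner = m.table_owner AND t.table_name = m.table_name
WHERE  m.table_owner = 'APP_OWNER';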
Which tools are available for monitoring the load on the database?
As there was a database performance issue, we started to analyse it by running ADDM using:
@?/rdbms/admin/addmrpt.sql.
The result of running ADDM is below.
DETAILED ADDM REPORT FOR TASK 'TASK_64718' WITH ID 64718
Analysis Period: 10-APR-2012 from 06:00:15 to 07:00:22
Database ID/Instance: 324353546/1
Database/Instance Names: JACK/jack1
Host Name: fdsfggwe001
[code].....
There was no significant database activity to run the ADDM. The database's maintenance windows were active during 100% of the analysis period. The analysis of I/O performance is based on the default assumption that the average read time for one database block is 10000 micro-seconds. An explanation of the terminology used in this report is available when you run the report with the 'ALL' level of detail.
The current parameters set in the database are below:
statistics_level ---- string
timed_statistics --- boolean
1. Do I need to set the initialization parameter to some other value?
2. Should I run the command below before executing the ADDM report? exec dbms_workload_repository.create_snapshot();
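For what it's worth, a minimal sketch of bracketing a busy period with snapshots so ADDM analyses real activity rather than a quiet maintenance window (statistics_level must be TYPICAL or ALL for AWR snapshots to be taken at all):

EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
-- ... let the problem workload run ...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
-- then choose those two snapshot IDs when prompted by:
@?/rdbms/admin/addmrpt.sql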
I need to get a handle on statistics collection in the data warehouses I support. Developers have created several ANALYZE TABLE jobs, but the code for these is not stored as PL/SQL in the database, which makes statistics collection problematic: even if we collect stats the way we want, these jobs kick in and overlay the statistics we collect every day.
Is there a way to AUDIT ANALYZE TABLE? I can't find it anywhere.
Is there a way to globally turn off ANALYZE TABLE in a 9i database?
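As far as I know there is no statement audit option that covers ANALYZE, but ANALYZE is a DDL event, so a database-level DDL trigger can log it (or reject it outright); a hedged sketch with a hypothetical log table:

CREATE TABLE analyze_audit_log (
  username  VARCHAR2(30),
  obj_owner VARCHAR2(30),
  obj_name  VARCHAR2(30),
  ts        DATE
);

CREATE OR REPLACE TRIGGER trap_analyze
  BEFORE DDL ON DATABASE
BEGIN
  IF ora_sysevent = 'ANALYZE' THEN
    INSERT INTO analyze_audit_log
    VALUES (ora_login_user, ora_dict_obj_owner, ora_dict_obj_name, SYSDATE);
    -- to turn ANALYZE off globally, raise an error here instead:
    -- RAISE_APPLICATION_ERROR(-20001, 'ANALYZE TABLE is disabled; use DBMS_STATS');
  END IF;
END;
/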
A website needs to display consolidated data from databases located in different geographical regions (India, London and New York). The application server for the website is hosted in only one location, India. What techniques can be used for faster retrieval of data from all 3 databases?
Note: There is no need for real-time data retrieval from the different regions; however, the user should be able to view updated data at predefined intervals.
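Since real-time retrieval is not required, one standard technique is a local materialized view over a database link to each remote region, refreshed at the agreed interval; a sketch in which the link, table, and interval are all assumptions:

CREATE MATERIALIZED VIEW mv_sales_london
  REFRESH COMPLETE
  START WITH SYSDATE
  NEXT SYSDATE + 1/24   -- hourly; substitute the predefined interval
AS
SELECT * FROM sales@london_db;

The website then queries the local MVs instead of crossing the WAN on every page load.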
Is it possible to have multiple LGWR processes for a single database? If it is possible, how do the multiple processes write from the redo log buffer to the online redo log file?
I've installed Grid Control (aka OMS) 10.2.0.5 and tried to look at Database Performance. But instead of information I get headers and a blank picture in the place where all the charts are usually shown. As a matter of fact, it looks like the page tries to access the data source but fails, showing just a broken-image symbol and all the menus from that page.