What have people set for the SGA and PGA sizes on their larger, heavily used databases? I've been watching one of our warehouses grow both in terms of tables (number and size) and in the user groups querying it. We're at a 96 GB SGA and 10 GB PGA, but I was thinking in terms of a half-terabyte machine so we can pin some of the larger tables. I know we'll have SSDs soon, but I'm seeing enormous numbers of reports using windowing and analytic functions and creating inline result sets. How big, in general, do you have your larger systems set?
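For comparison, it helps to post what the advisor views say about the current settings alongside the raw sizes. A minimal sketch for pulling those numbers (standard v$ views; nothing here is specific to any one system):

SELECT sga_size, sga_size_factor, estd_db_time_factor
  FROM v$sga_target_advice
 ORDER BY sga_size;

SELECT round(pga_target_for_estimate/1024/1024) AS target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
  FROM v$pga_target_advice
 ORDER BY pga_target_for_estimate;

If the PGA advice shows over-allocations at the current 10 GB target, that suggests the analytic-heavy reports are spilling sort/window work areas to disk, and the PGA, not just the SGA, is the place to spend memory.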
I did some Google searches about large numbers of extents and ASSM. I see bits and pieces on the web. This is something I need to look at while testing an application. I'm not looking to go into 'why' I would use smaller extents; I just want to make sure I know what to look for during testing. Issues with massive numbers of extents:
1. DBA_EXTENTS queries are really slow.
2. Issues truncating tables (due to having to read lots of extents).
3. Issues splitting MAXVALUE partitions and dropping partitions.
4. If I stay away from ASSM, would this reduce these issues?

Are there any other performance issues or other issues I need to know about to check when I do tests?
Any issues with query or insert wait time? The tables that would get smaller extents would have thousands of partitions/sub-partitions, and most of these sub-partitions will be rather small. I just want to test a variety of different cases; the 'why' will come out during testing.
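For the testing itself, one starting point is simply counting extents per segment so the extent counts can be correlated with the symptoms above. A minimal sketch (the schema name and threshold are placeholders):

SELECT owner, segment_name, segment_type, count(*) AS extent_count
  FROM dba_extents
 WHERE owner = 'MY_SCHEMA'      -- placeholder schema
 GROUP BY owner, segment_name, segment_type
HAVING count(*) > 10000         -- arbitrary threshold for "massive"
 ORDER BY extent_count DESC;

Note that this query is itself the slow operation listed as issue 1, so timing it should be part of the test plan.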
We are using Oracle 10g. With our code, Oracle partitions are currently all sized the same way: each partition uses 10 MB for data and 12 MB for indexes (with the 6 default indexes), even if very few records are written to the partition.

We create the partitions in advance as part of a nightly job with a 10-minute duration. Can some intelligence be added so that, based on statistics, we can decide the size of each partition dynamically? A lot of space is being wasted for this reason.
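Possibly. One approach worth testing (just a sketch; the table name, partition naming scheme, and thresholds are all made up for illustration) is to have the nightly job read the statistics for existing partitions and choose a smaller INITIAL extent when the expected volume is low:

DECLARE
  v_rows    NUMBER;
  v_initial VARCHAR2(10);
BEGIN
  -- average row count of already-analyzed partitions (hypothetical table name)
  SELECT avg(num_rows)
    INTO v_rows
    FROM user_tab_partitions
   WHERE table_name = 'MY_PART_TABLE'
     AND num_rows IS NOT NULL;

  -- pick a storage clause based on observed volume (thresholds are made up)
  v_initial := CASE WHEN v_rows < 1000 THEN '64K' ELSE '10M' END;

  -- create tomorrow's partition with the chosen INITIAL extent
  EXECUTE IMMEDIATE
    'ALTER TABLE my_part_table ADD PARTITION p_'
    || to_char(sysdate + 1, 'YYYYMMDD')
    || ' VALUES LESS THAN (DATE ''' || to_char(sysdate + 2, 'YYYY-MM-DD') || ''')'
    || ' STORAGE (INITIAL ' || v_initial || ')';
END;
/

The same storage decision would also have to cover the 6 default indexes, since on the numbers given (12 MB of index vs 10 MB of data per partition) they account for more of the waste than the table data does.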
When I run a cursor and print its data into an Excel file using UTL_FILE, the file size is nearly 50 MB. But if I run the cursor and copy its data manually into an Excel sheet, the file size is only 22 MB. I am unable to understand why there is a difference in file size.
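One common cause, assuming the UTL_FILE output is really plain text with an .xls/.csv name: the text file stores every value as characters, often with fixed-width padding, while a sheet saved from Excel is stored in Excel's binary format with its own internal compression, so the two sizes are not comparable. If the goal is a smaller text file, trimming padding and using a single-character delimiter helps. A minimal sketch (the directory object, table, and columns are made up):

DECLARE
  v_file utl_file.file_type;
BEGIN
  v_file := utl_file.fopen('MY_DIR', 'export.csv', 'w');  -- hypothetical directory object
  FOR r IN (SELECT col1, col2 FROM my_table) LOOP
    -- trim each value so padding does not inflate the file
    utl_file.put_line(v_file, rtrim(r.col1) || ',' || rtrim(r.col2));
  END LOOP;
  utl_file.fclose(v_file);
END;
/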
I have a table T1. In that table I have a column ID, and I gave it the datatype NUMBER(2,2). When I try to insert a value I get an error.
SQL> desc t1;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER(2,2)
 NAME                                               VARCHAR2(10)
 NAME1                                              NUMBER

SQL> insert into t1(id) values(2);
insert into t1(id) values(2)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into t1(id) values(2.5);
insert into t1(id) values(2.5)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into t1(id) values(10.15);
insert into t1(id) values(10.15)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into t1(id) values(10.5);
insert into t1(id) values(10.5)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
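This is expected behavior given the column definition. NUMBER(2,2) means at most 2 significant digits in total, with 2 of them to the right of the decimal point, which leaves zero digits for the integer part; the column can only hold values strictly between -1 and 1 (rounded to two decimal places). So, against the same table:

SQL> insert into t1(id) values(0.25);

1 row created.

SQL> insert into t1(id) values(0.99);

1 row created.

To store values like 10.15, the column would need a definition such as NUMBER(4,2).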
Before I begin, I want to clarify that I am a newbie in data warehouse administration. I need to know how to calculate the sizes of the archive and redo logs on a data warehouse DB, in order to make an initial sizing of the DB at the disk level.
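If this database (or a comparable existing one) is already generating redo, one way to ground the archive sizing is to measure the actual archived log volume per day. A minimal sketch:

SELECT trunc(completion_time) AS day,
       round(sum(blocks * block_size) / 1024 / 1024) AS archived_mb
  FROM v$archived_log
 GROUP BY trunc(completion_time)
 ORDER BY day;

For a brand-new warehouse with no history, the estimate has to come from the expected daily load volume instead, since roughly everything loaded through conventional DML is also written to redo.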
Is there a way to set the default sizes for the canvas, Object Navigator, and Properties window in Forms Designer so that they don't maximise when opening them? I tried to set them in the caupref and cagpref files, to no avail.
I have this problem: I need to print on a paper size of 14.875x11 (US Std Fanfold). But when I print, part of the page is not printed, as if the paper size were 11x8.5. Questions:

1) Is there a way I can print at 14.875x11 without configuring the page setup to 14.875x11, and automatically print whatever I can see in my Live Previewer?
2) What should the value of Report Width/Height be if it affects the printed size?
SQL> insert into t51 values (100000000000000000.00000);
insert into t51 values (100000000000000000.00000)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
I'm currently doing a migration from Oracle 10gR2 RDF to Oracle 11gR2 Semantic Technologies. I followed the steps in the documentation and successfully created the network using the following:
EXECUTE SEM_APIS.CREATE_SEM_NETWORK('rdf_tblspace');

CREATE TABLE rdf_network_trace (id NUMBER, triple SDO_RDF_TRIPLE_S);

-- Created SEQUENCE and TRIGGER for rdf_network_trace id
[code]....
When I looked at my node IDs, they were values like 635762253807433724 and 6118969225776891730. The problem is, I am not the one assigning the node IDs; they were automatically generated when inserting TRIPLE data into the RDF table.
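If the question is how to map those generated IDs back to their lexical values, one place to look (an assumption on my part that the standard semantic network dictionary is in place; verify the table name against your release's documentation) is MDSYS.RDF_VALUE$, which holds the ID-to-value mapping:

SELECT value_name
  FROM mdsys.rdf_value$
 WHERE value_id = 635762253807433724;

The IDs themselves are internal surrogate keys generated by the network when triples are inserted, so they are not meant to be user-assigned.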
Is there a technique for getting a Top-N query to work as a sub-select in a larger query, or is there another way to generate Top-N-like results that works as a sub-select?
Background:
We have a large query that is being used to build an export from a legacy HR system to a new one. Among the data needed in the export is the employee's primary phone number.
The legacy HR system allows multiple phone numbers to be stored in a simple table structure:
SELECT emp_id, phone_type, phone_number FROM employee_phones
The new HR system does allow multiple phone numbers; however, it needs a primary phone number identified and stored with the employee master information. (Subsequent phone numbers get stored in an alternate table.)

From a business perspective, we have decided that if they have a HOME phone in the legacy system, that should be the primary in the new system; if no HOME phone, then WORK; if no WORK, then CELL.
That can be represented as:
SELECT *
  FROM employee_people_phones
 WHERE emp_id = '46021'
 ORDER BY decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z')

Wrapping that to keep only the first row:

SELECT *
  FROM (SELECT *
          FROM employee_people_phones
         WHERE emp_id = '46021'
         ORDER BY decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z')) results
 WHERE ROWNUM = 1

And selecting just the number:

SELECT phone_number
  FROM (SELECT phone_number
          FROM employee_people_phones
         WHERE emp_id = '46021'
         ORDER BY decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z')) results
 WHERE ROWNUM = 1

phone_number
-------------------
1111111111
However, when the Top-N query is added as a sub-select in a larger query, using the employee id from the larger query (WHERE emp_id = export.emp_id), it fails, saying that "export.emp_id" is not a valid identifier.
(SELECT phone_number
   FROM (SELECT phone_number
           FROM employee_people_phones
          WHERE emp_id = export.emp_id
          ORDER BY decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z')) results
  WHERE ROWNUM = 1)
1. Is there any way around this? Is it possible to put a Top-N query (with a WHERE clause using data from the main query) in a sub-select?
2. Are there any alternatives (other than Top-N) for delivering a ROWNUM=1 result with a "custom" ORDER BY? (One such alternative is sketched below.)

Other notes: Yes, we know we could do two passes in the data conversion: first deliver the bulk data to the target table, then update it with the phone numbers. However, for multiple reasons, that is less than desirable.
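One alternative that sidesteps the problem (a sketch against the employee_people_phones table above, not a drop-in for the full export query): the error occurs because export.emp_id is referenced two subquery levels deep, and a correlated column name only resolves one level down in this kind of construct. An aggregate with KEEP (DENSE_RANK FIRST ...) collapses the "order by and take the first" logic into a single level:

(SELECT max(phone_number)
        KEEP (DENSE_RANK FIRST
              ORDER BY decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z'))
   FROM employee_people_phones
  WHERE emp_id = export.emp_id)

Because the correlation is now only one level deep, this is legal as a scalar sub-select in the larger query, and it returns the phone_number of the row that sorts first under the same DECODE ordering as the Top-N version.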
I have a primary database with a logical standby database, running Oracle 11g. I have two client applications: one is the production site pointing to the primary, the other is a backup site pointing to the logical standby. Writes only hit the primary database every midnight; the client applications can only query the databases, not insert, update, or delete. Now I want to apply the latest patch on both of my databases. I am also the DNS administrator, so I can make the name server point to the backup site instead of the production one. I want to apply the patch on the logical standby first, and then on the primary.

I found some references explaining how to apply patches using the "Rolling Upgrade Method"; however, I want to avoid the "switchover" mentioned in those references because I can make use of the name server. Can I just apply the patches the following way?
1) Stop SQL apply
2) Apply patches on the logical standby database
3) Point the name server to the backup site
4) Apply patches on the primary database
5) Start SQL apply
6) Point the name server back to the production site
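For steps 1 and 5, the standard commands on the logical standby are (run as SYSDBA):

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Whether skipping the switchover is safe mostly comes down to whether anything writes to the primary while it is being patched; since the writes only happen at midnight, the main thing to verify is that the patch window does not overlap that load.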
If flashback is enabled on a physical standby database:
1. If we fail over at 11 AM, can I flash back the NEW primary database to 6 AM?
2. If I convert the physical standby database to a snapshot standby database at 11 AM, can I flash back the snapshot standby to 6 AM, do some work on it (DML operations), and then convert the snapshot standby back into a physical standby database?
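For reference, the statements involved in the second scenario look roughly like this (a sketch; the conversions are issued from the MOUNT state, and whether you can flash the snapshot standby back past the conversion point and still convert back cleanly is exactly what a test system should confirm):

ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;  -- an implicit guaranteed restore point is taken here
ALTER DATABASE OPEN;
-- ... DML work on the open snapshot standby ...
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;  -- flashes back to the restore point, discarding the changes

Note that converting back to a physical standby itself uses Flashback Database to return to the conversion point, so a flashback to 6 AM (before that point) is the part of the plan that needs careful testing.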
We have configured one-way Oracle Streams between two databases. The source database is capturing the changes (no downstream capture configured). The configuration was working fine, but the destination database was lagging far behind, about 15 days behind the source. We are OK with that, but the problem now is that, at the client's request, we restored a previous backup and opened the source database with the RESETLOGS option. After RESETLOGS the archivelog sequence has changed, and the stream is not working.

Can I apply the previous archivelogs (the ones from before the RESETLOGS) on the destination database anyway? The source database is a production database.
I tried to clone a 2-node RAC database to a single-instance non-RAC database using an existing backup. I did not use connectivity to the target or a catalog. The RMAN duplicate finished with the messages below:
rman auxiliary sys/******@dbracdup

RMAN> duplicate database to dbrac spfile backup location '/oracle/backup';
...
...
Finished recover at 25-JUL-12
Segmentation fault
The database was left in mount stage, and when I tried to open it, it failed with the error below:
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-19838: Cannot use this control file to open database
I am trying to retrieve info from multiple DBs and insert it into a central DB via DB links. The links are retrieved via a cursor.

However, I keep coming up against 'PL/SQL: ORA-00942: table or view does not exist'. How do I handle DB links fetched by a cursor in a PL/SQL block? The code is as follows:
DECLARE
  db_link_rec VARCHAR2(30);
  CURSOR db_link_cur IS SELECT DB_LINK FROM MESSAGING_PROD_LIST;
BEGIN
  OPEN db_link_cur;
  LOOP
    FETCH db_link_cur INTO db_link_rec;
    EXIT WHEN db_link_cur%NOTFOUND;
[code]....
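The likely cause, assuming the loop body references the link as a static identifier: a db link name in static SQL is fixed at compile time, so a link name fetched from a table cannot be substituted in; the statement has to be built dynamically. A minimal sketch (everything except MESSAGING_PROD_LIST is a made-up name):

DECLARE
  db_link_rec VARCHAR2(30);
  CURSOR db_link_cur IS SELECT DB_LINK FROM MESSAGING_PROD_LIST;
BEGIN
  OPEN db_link_cur;
  LOOP
    FETCH db_link_cur INTO db_link_rec;
    EXIT WHEN db_link_cur%NOTFOUND;
    -- build the remote reference at run time; central_table/remote_table are placeholders
    EXECUTE IMMEDIATE
      'INSERT INTO central_table SELECT * FROM remote_table@' || db_link_rec;
  END LOOP;
  CLOSE db_link_cur;
  COMMIT;
END;
/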
I am installing Oracle database 8.1.7 on a Dell PowerEdge 2650 server. The first database installed successfully, but when I try to create a new database with the Database Configuration Assistant, it does not work for new database creation.
We have a production database 'X'. I have now created a test database 'T' and didn't configure another listener for it. The issue is that when I connect to Oracle through SQL*Plus, I connect directly to the test database 'T' rather than the production database 'X'. Of course I can log in to the production DB afterwards, but initially I want to access the production database 'X'.
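What usually decides this, assuming a local OS-authenticated connection, is the ORACLE_SID environment variable: sqlplus attaches to whatever SID is currently set. A sketch of the two usual ways to land on X:

$ export ORACLE_SID=X
$ sqlplus / as sysdba

or, going through the listener with a TNS alias (assuming a matching tnsnames.ora entry for X):

$ sqlplus system@X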
I need to refresh a PROD database into a TEST database. Both PROD and TEST run on 10g. I need a full refresh. Are there any prerequisites I should keep in mind?
I am creating a standby database from an active database using RMAN, and I get the issue below after executing the duplicate command.
Version of Database: 11g (11.2.0.1.0)
Operating System: Linux 5

Error:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 12/21/2012 17:26:52
RMAN-03015: error occurred in stored script Memory Script
RMAN-04006: error from auxiliary database: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
[code]....
Please provide any workarounds to proceed further with creating the standby database.
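One common fix for ORA-12514 during an active duplicate (a sketch; the service name, SID, and ORACLE_HOME path are placeholders for your environment): the auxiliary instance is only in NOMOUNT, so it does not register its service with the listener dynamically, and the standby listener needs a static entry in listener.ora:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = standby_db)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = standby_db)
    )
  )

After reloading the listener (lsnrctl reload), the connect descriptor used in the duplicate command should resolve.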
I have installed Oracle 11g Standard Edition One and created both a primary and a standby database. Now I want to know how to switch over (convert) the primary database to a standby database.
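For a SQL*Plus-managed Data Guard configuration, the switchover sequence looks roughly like this (a sketch; note that Data Guard is an Enterprise Edition feature, so a manually maintained standby on Standard Edition One would need a different, manual procedure):

-- on the primary:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT

-- on the old standby:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
SQL> ALTER DATABASE OPEN;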
How should one know whether RMAN is using the target database control file or a separate catalog database? Also, what should one do if one doesn't have the catalog user's credentials?
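One quick indicator is RMAN's own connection banner. When no catalog is used, RMAN says so explicitly:

$ rman target /
...
connected to target database: ... (DBID=...)
using target database control file instead of recovery catalog

whereas a catalog connection prints "connected to recovery catalog database" instead. Without the catalog user's credentials, you can still run RMAN in NOCATALOG mode against the control file; only operations that depend on the catalog's longer history require those credentials.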
The script successfully created all control files on the standby DB and also copied 2 data files, DATA01.DBF and DATA02.DBF, into the correct location. Then the errors above kicked in and stopped the RMAN duplicate process.
Import: Release 10.2.0.1.0 - Production on Wednesday, 17 March, 2010 11:07:02

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Release 10.2.0.1.0 - Production
Starting "MUBA"."SYS_IMPORT_TABLE_01":  muba/******** tables=FUNCTION_NO directory=testdump NETWORK_LINK=DBLINK1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Here we have a Data Guard environment with db1 (db_unique_name) as the primary and db2 (db_unique_name) as the physical standby database. We also configured a schema on a third machine as the catalog database, using the following steps, executed on the catalog database (appsdb):
SQL> create tablespace rmancatlog_tbs
       datafile '/u01/app/oracle/oradata/NEW/rman_catalog.dbf' size 500M
       autoextend off
       extent management local
       segment space management auto;

SQL> create user rman identified by oracle
       default tablespace rmancatlog_tbs
       quota unlimited on rmancatlog_tbs;

SQL> GRANT connect, resource, recovery_catalog_owner TO rman;

RMAN> create catalog;

recovery catalog created
We added TNS entries for the catalog database on the primary and the standby. Then, from the primary database, we tried to register with the catalog database. It showed that it was registering, but every subsequent query in RMAN throws the error. Below are the steps and the error:
[oracle@db1 ~]$ rman target sys/oracle catalog rman/oracle@appsdb

Recovery Manager: Release 10.2.0.3.0 - Production on Mon Aug 13 21:39:32 2012

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

connected to target database: NIOS (DBID=1589015669)
connected to recovery catalog database
[code]....