How can I increase sga_target in this case? I'm unable to create a pfile.
SQL> startup
ORA-01078: failure in processing system parameters
ORA-00821: Specified value of sga_target 2048M is too small, needs to be at least 4112M
SQL> startup nomount
[code]...
The $ORACLE_HOME/dbs location only has an init file that points to the spfile in ASM:

$ more initQR01MRA1.ora
SPFILE='+DB_DATA/QR01MRA/spfile'
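One possible way out, as a sketch: since the instance won't start with the current spfile, a pfile can be generated from the spfile stored in ASM, edited, and used for startup. The spfile path below is taken from initQR01MRA1.ora; this assumes the ASM instance is up and reachable.

[code]
SQL> CREATE PFILE='/tmp/initQR01MRA1_fix.ora' FROM SPFILE='+DB_DATA/QR01MRA/spfile';
-- edit /tmp/initQR01MRA1_fix.ora and raise sga_target to at least 4112M
SQL> STARTUP PFILE='/tmp/initQR01MRA1_fix.ora';
SQL> CREATE SPFILE='+DB_DATA/QR01MRA/spfile' FROM PFILE='/tmp/initQR01MRA1_fix.ora';
[/code]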
On 11gR2 on Windows 2008, I want to duplicate my target DB, which is on a remote server, using RMAN backups. The destination is on the local server. I will run RMAN on the local server.
In initnewdb.ora, I should add:

# Convert file names to allow for different directory structure if necessary.
#DB_FILE_NAME_CONVERT=(/u01/app/oracle/oradata/DB11G/,/u01/app/oracle/oradata/NEWSID/)
#LOG_FILE_NAME_CONVERT=(/u01/app/oracle/oradata/DB11G/,/u02/app/oracle/oradata/NEWSID/)
My questions:

1. In my case, would it be:

# Convert file names to allow for different directory structure if necessary.
#DB_FILE_NAME_CONVERT=(\remoteserver:/u01/app/oracle/oradata/DB11G/,/u01/app/oracle/oradata/NEWSID/)
#LOG_FILE_NAME_CONVERT=(\remoteserver:/u01/app/oracle/oradata/DB11G/,/u02/app/oracle/oradata/NEWSID/)
2. Should we keep the convert parameters in the init.ora file permanently after the duplication?
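For reference, a minimal sketch of how such a duplication is often laid out. The TNS aliases PRODDB and NEWSID and the paths are assumptions, and the convert parameters name only local paths as seen by the auxiliary (local) instance:

[code]
# initNEWSID.ora on the local server (sketch)
DB_NAME=NEWSID
DB_FILE_NAME_CONVERT=(/u01/app/oracle/oradata/DB11G/,/u01/app/oracle/oradata/NEWSID/)
LOG_FILE_NAME_CONVERT=(/u01/app/oracle/oradata/DB11G/,/u02/app/oracle/oradata/NEWSID/)

# run RMAN from the local server, connecting to the remote target and the local auxiliary
rman TARGET sys/password@PRODDB AUXILIARY sys/password@NEWSID
RMAN> DUPLICATE TARGET DATABASE TO NEWSID;
[/code]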
I've installed OEM Grid Control 11.1.0.1.0 on server "A", which has an Oracle Database version 11.2.0.1. The OS on server "A" is OEL 5.5. The Grid Control installed on this server is working fine, and I've deployed agents on remote hosts which I am able to monitor successfully using this Grid Control.
Now I would like to add the server ("A") on which Grid Control is installed, so that it is itself monitored by Grid Control. Is it done the same way as we deploy agents on remote hosts? Here the agent is already installed at the time of installing Grid Control.
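A sketch of what is usually checked first, assuming the agent that ships with the 11.1 OMS lives under the middleware home in an agent11g directory (the path is an assumption for this install):

[code]
# agent bundled with the OMS install (path is an assumption)
$ $MW_HOME/agent11g/bin/emctl status agent
$ $MW_HOME/agent11g/bin/emctl upload agent
# once the agent uploads, the host and its targets should appear in the Grid Control console
[/code]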
O.S Version: HP-UX B.11.31 U ia64
Oracle DB Version: 11.2.0.3.0 (Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production)
Storage: ASM Diskgroups
I am new to Oracle and would like to get clarification on the below.
I have a question about GG Sequence Replication and Triggers. My main database, which I would like to replicate on another server, is highly dependent on sequences for assigning surrogate keys to every row in every table in the application. I know that I need to add Sequence support to my source database (plus supplemental logging, etc), but I'm curious about the target database.
I do not anticipate allowing read/write access to this database - we are migrating from 10.2.0.4 (source) to 11.2.0.3 (target) on a new platform, and I want to keep the 11g database up to date with our production data until it is time to begin the actual conversion of our application. My thinking is that if I use the SUPPRESSTRIGGERS dboption in my Replicat session, this should take care of the use of the sequences for assigning the surrogate key values, and the data should be added to the tables normally without any intervention by the sequence/trigger combination. I know I will have to manually "correct" the sequences on my 11.2.0.3 database whenever I want to open this database up for use, but I have a script for this ready to go.
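For what it's worth, a minimal Replicat parameter sketch with that option; the group, user, and schema names (repapp, ggadmin, APP) are assumptions:

[code]
REPLICAT repapp
USERID ggadmin, PASSWORD ********
DBOPTIONS SUPPRESSTRIGGERS
ASSUMETARGETDEFS
MAP APP.*, TARGET APP.*;
[/code]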
Also, in my source database, I am using Oracle Context indexes for generic name searching - this feature creates a number of DR$ named tables in the main application schema that I am replicating (approximately 50 of them). I am assuming that I should EXCLUDE these tables from the replication, as the context indexing should automatically update them as changes to the underlying data are applied via the replication of the indexed tables.
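If they are to be excluded, a sketch of what that looks like in the Extract parameter file, assuming the application schema is called APP:

[code]
TABLEEXCLUDE APP.DR$*
TABLE APP.*;
[/code]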
I am using GoldenGate to replicate a database (encrypted tablespace, Oracle 11.2.0.1, Windows 2008) to a different database server (no encrypted tablespace, Oracle 11.1, Linux).
The GoldenGate report shows the following error:

ERROR OGG-01771 DBOPTIONS DECRYPTPASSWORD must be used to decrypt TSE data. Use TRANLOGOPTION IGNORETSERECORDS if you do not need to capture any tables that are in an encrypted tablespace.
How do I use it?

GGSCI> ENCRYPT PASSWORD "shared key"

Then add an entry to the Extract parameter file to decrypt the new shared password.
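As a sketch (the shared key and the encrypted string below are placeholders, not real values), the usual sequence looks something like this:

[code]
-- in GGSCI, encrypt the shared secret that was registered with the source database
GGSCI> ENCRYPT PASSWORD mysharedkey ENCRYPTKEY DEFAULT

-- in the Extract parameter file, hand the encrypted value back so TSE data can be decrypted
DBOPTIONS DECRYPTPASSWORD AACAAAAAAAAAAAJA... ENCRYPTKEY DEFAULT

-- or, if encrypted-tablespace tables are not needed at all, skip them instead
TRANLOGOPTIONS IGNORETSERECORDS
[/code]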
We are getting the following error when we try to extract data from ASM.
GGS ERROR 500 Oracle GoldenGate Capture for Oracle, ext_1.prm: Getting attributes for ASM file +DATA/testgg/onlinelog/group_1.257.742844671,
SQL <BEGIN dbms_diskgroup.getfileattr('+DATA/testgg/onlinelog/group_1.257.742844671', :filetype, :filesize, :lblksize); END;>: (6550)
ORA-06550: line 1, column 7:
PLS-00201: identifier 'DBMS_DISKGROUP.GETFILEATTR' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
Not able to establish initial position for begin time 2011-02-16 16:42:05.
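DBMS_DISKGROUP is only exposed to a privileged login on the ASM instance, so Extract usually has to connect to ASM as SYS. A sketch of the relevant Extract parameter, where the TNS alias ASM1 and the password are assumptions:

[code]
TRANLOGOPTIONS ASMUSER SYS@ASM1, ASMPASSWORD oracle
[/code]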
I am implementing GG 11gR2 for a 12c database, but I am getting the error below. My question: why does GoldenGate need this specific package ... since it is supposed to be both homogeneous and heterogeneous?
/u01/12c_database_software/goldengate/dirtmp.
2013-08-30 05:28:44  INFO   OGG-01513  Oracle GoldenGate Capture for Oracle, ext1.prm: Positioning to Sequence 66, RBA 25067536, SCN 0.0.
2013-08-30 05:28:44  ERROR  OGG-01028  Oracle GoldenGate Capture for Oracle, ext1.prm: ORA-06550: line 1, column 7: PLS-00201: identifier 'SYS.DBMS_INTERNAL_CLKM' must be declared ORA-06550: line 1, column 7: PL/SQL: Statement ignored.
2013-08-30 05:28:44  ERROR  OGG-01668  Oracle GoldenGate Capture for Oracle, ext1.prm: PROCESS ABENDING.
Database details
----------------
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM' and object_type in ('PACKAGE');
no rows selected
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM';
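A sketch of the usual remedy, assuming the standard script location and a GoldenGate admin user named GGADMIN: the package is created by prvtclkm.plb and then granted to the capture user.

[code]
SQL> @?/rdbms/admin/prvtclkm.plb
SQL> GRANT EXECUTE ON SYS.DBMS_INTERNAL_CLKM TO ggadmin;
[/code]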
We are facing a project where it is mandatory that the migration (from 9i to 11g) happens without any downtime. We thought about using GoldenGate to do this migration, but I would like to hear from somebody who has already done this kind of migration (I have never used GoldenGate before). The basic steps for such a migration would be:
1) Install the GoldenGate software on both source and target.
2) Export only the metadata (the structure of the tables, for example) from source to target (here is one point of doubt of mine: can this export only be done using exp/imp?).
3) Perform the initial load from source to target (here I have another doubt: is it possible to perform an initial load of a whole database?).
4) Configure the Manager, Extract and Replicat processes to perform the migration with the source database open in read-write.
With the steps above, would I be able to perform a migration without downtime? What other considerations do you have?
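To make step 4 concrete, a heavily simplified parameter-file sketch. The group names (mgr, extsrc, reptgt), the user ggadmin, the schema APP and the trail ./dirdat/aa are all assumptions, and a data pump process plus the initial-load method are omitted:

[code]
-- mgr.prm (both sides)
PORT 7809

-- extsrc.prm (9i source)
EXTRACT extsrc
USERID ggadmin, PASSWORD ********
EXTTRAIL ./dirdat/aa
TABLE APP.*;

-- reptgt.prm (11g target)
REPLICAT reptgt
USERID ggadmin, PASSWORD ********
ASSUMETARGETDEFS
MAP APP.*, TARGET APP.*;
[/code]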
I am confused about the MEMORY_TARGET and MEMORY_MAX_TARGET parameters. If I set SGA_TARGET and SGA_MAX_SIZE along with MEMORY_TARGET and MEMORY_MAX_TARGET, how will Oracle manage the memory? Because as per my understanding, if we set MEM
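As a point of reference, a sketch of one common combination (the sizes are examples): when MEMORY_TARGET is set, a non-zero SGA_TARGET and PGA_AGGREGATE_TARGET act as floors for their respective areas, and Oracle redistributes the remainder automatically.

[code]
ALTER SYSTEM SET memory_max_target = 4G SCOPE=SPFILE;        -- static, needs a restart
ALTER SYSTEM SET memory_target     = 3G SCOPE=SPFILE;        -- total managed by AMM
ALTER SYSTEM SET sga_target        = 1G SCOPE=SPFILE;        -- minimum SGA within MEMORY_TARGET
ALTER SYSTEM SET pga_aggregate_target = 512M SCOPE=SPFILE;   -- minimum aggregate PGA
[/code]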
We are migrating from a 9i db to 11g and we've been testing our apps on a similar (but not exact) machine as our production box.
Normally when we take a full export of the production data (on 9i) and import it into another 9i DB, the tables and indexes are created with the initial size large enough to hold the entire table. We also do our export with the compress extents param set to 'Y'.
However, we've noticed that when we import our data into the 11g DB, the tables are being created with multiple extents, sometimes up to 10 or 15. This seems to happen even with tables that don't have multiple extents in the DB the export was taken from.
There ARE some differences in our 11g DB that I imagine might be the culprit; I've just been unable to narrow it down to one of them.
The differences I know of are:

a) The target DB has locally managed tablespaces while the source 9i DB had dictionary managed tablespaces.
b) The block size is larger on the target 11g DB: 8192 vs 2048.
c) The nchar character set on the source DB is AL16UTF16 and the target is UTF8 (we actually only have an nchar column in one of our tables; also, the UTF8 setting was actually a mistake that we're correcting this weekend with a fresh DB and fresh import).
What would cause the import to produce all these extra extents?
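A couple of queries that usually show the reason, assuming the segments land in a tablespace such as USERS under an owner such as APP (both names are assumptions): a locally managed tablespace with AUTOALLOCATE or UNIFORM extents satisfies the large INITIAL computed by exp COMPRESS=Y with several system-sized extents rather than one big one.

[code]
SELECT tablespace_name, extent_management, allocation_type, initial_extent, next_extent
FROM   dba_tablespaces
WHERE  tablespace_name = 'USERS';

SELECT segment_name, COUNT(*) AS extent_count, SUM(bytes)/1024/1024 AS mb
FROM   dba_extents
WHERE  owner = 'APP'
GROUP  BY segment_name
ORDER  BY extent_count DESC;
[/code]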
We are using the 11.1.0.7 database, and we have implemented MEMORY_MAX_TARGET and MEMORY_TARGET in the database. Here are the values of the memory parameters:
SQL> show parameter memory_
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address             integer     0
memory_max_target                    big integer 3G
memory_target                        big integer 2G
shared_memory_address                integer     0
We want to increase the value of MEMORY_TARGET to 3G, i.e. raise memory_target up to MEMORY_MAX_TARGET, using the command below:

alter system set MEMORY_TARGET=3G scope=both SID='OLTP1';

but I am getting the error below:
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00846: could not shrink MEMORY_TARGET to specified value
I tried to give memory_target a value less than memory_max_target, like:

alter system set MEMORY_TARGET=2900M scope=both SID='OLTP1';

but I get the same error:
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00846: could not shrink MEMORY_TARGET to specified value
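One thing worth checking in a case like this (the Linux /dev/shm check is an assumption about the platform; on other platforms the limit sits elsewhere): the in-memory filesystem backing Automatic Memory Management must be at least as large as the value being set.

[code]
SQL> show parameter memory_max_target

$ df -h /dev/shm    # must be at least as large as the MEMORY_TARGET you want to set
[/code]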
When I was loading a large file, SQL*Loader was committing after every record, even when I used:
OPTIONS (bindsize=20000000, readsize=20000000, rows=200)
LOAD DATA
LENGTH SEMANTICS CHARACTER
APPEND
INTO TABLE TABLE_NAME
TRAILING NULLCOLS
(
)

Still the same... no change at all.
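One thing that can be tried, as a sketch (the file names are assumptions): the same options can be passed on the sqlldr command line, where they override the OPTIONS clause, and a larger rows value makes the intended commit interval easy to verify in the log file.

[code]
sqlldr userid=scott/tiger control=load_big.ctl data=big_file.dat log=load_big.log \
       rows=5000 bindsize=20000000 readsize=20000000
[/code]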
Quote: Oracle Database 11g Release 2 (11.2.0.2) New Features in Oracle XML DB
The following Oracle XML DB features are new in Oracle Database 11g Release 2 (11.2.0.2).
Default Storage Model for XMLType
The default XMLType storage model is used if you do not specify a storage model when you create an XMLType table or column. Prior to Oracle Database 11g Release 2 (11.2.0.2), unstructured (CLOB) storage was used by default. The default storage model is now binary XML storage.
We have an application which works fine on 11gR1 but not on 11gR2 due to this change. We are going to investigate resolving the issue in the application in the future; in the meantime we need to be able to use CLOB storage.
Does anyone know how we can change this behaviour back to the pre-11.2.0.2 default?
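Not an answer on flipping the global default, but as a sketch the old behaviour can at least be requested explicitly for each table or column (the table and column names are examples):

[code]
CREATE TABLE xml_docs OF XMLType
  XMLTYPE STORE AS BASICFILE CLOB;

CREATE TABLE orders (
  id      NUMBER PRIMARY KEY,
  payload XMLType
)
XMLTYPE COLUMN payload STORE AS BASICFILE CLOB;
[/code]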
I wanted to export a table "emp_production" from the Production database, then import it as "emp_datawarehouse" into the Data Warehouse database. Both tables have the same structure. I have granted the IMPORT FULL DATABASE and EXPORT FULL DATABASE privileges to both schemas.
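A sketch with Data Pump, assuming the schema is SCOTT on both sides and the directory object DATA_PUMP_DIR exists; REMAP_TABLE (available from 11g) renames the table on import:

[code]
expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=emp_prod.dmp tables=emp_production

impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=emp_prod.dmp \
      remap_table=scott.emp_production:emp_datawarehouse
[/code]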
Currently my Oracle database character set is WE8MSWIN1252 and it only contains English data as well as spatial data (which is in English, of course). I would like to change the database character set so it can accept Arabic characters.
I have checked the command below on a test DB and it worked fine, but I want to know if it's recommended as a best practice when changing the character set to accept Arabic, and whether it will avoid corrupting my previously entered data:
SHUTDOWN IMMEDIATE
STARTUP RESTRICT
ALTER DATABASE CHARACTER SET INTERNAL_USE AR8MSWIN1256;
SHUTDOWN IMMEDIATE
STARTUP
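For comparison, a sketch of the supported route via CSSCAN/CSALTER rather than the INTERNAL_USE clause; the scan has to be run and its reports reviewed before csalter will change anything (connection details are examples):

[code]
$ csscan \"sys/password AS SYSDBA\" FULL=Y TOCHAR=AR8MSWIN1256 LOG=cs_check CAPTURE=Y ARRAY=1000000 PROCESS=2

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP RESTRICT
SQL> @?/rdbms/admin/csalter.plb
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
[/code]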
I cannot change the RESOURCE_MANAGER_PLAN parameter. At first I set the parameter to DAYTIME, but when I restart my DB, the parameter holds the old value, MAXCAP_PLAN. Why?
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
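One likely culprit worth checking, as a sketch: an active Scheduler window can switch the plan on its own. The query shows the window-to-plan mapping, and the FORCE: prefix pins a plan so windows cannot override it (the plan name is the one from the post):

[code]
SQL> SELECT window_name, resource_plan, enabled, active FROM dba_scheduler_windows;

SQL> ALTER SYSTEM SET resource_manager_plan = 'FORCE:DAYTIME' SCOPE=BOTH;
[/code]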
What is the best practice for replacing the small disk D:? I am a beginner with Oracle: 10g on W2008. Five datafiles (all indexes, a second data file, 2 undo tablespaces) *.dbf (34; 30; 1; 34; 12 GB) are on D:. Some tablespaces (1 data, 1 undo) have files on C:.
I.
1. Shut down the 2008 server.
2. Copy a D: image with GHOST to USB or the network.
3. Connect the new D:, create the RAID.
4. Restore the image to D:.
5. Start the 2008 server.
II.
1. Stop the application.
2. CONNECT AS SYSDBA.
3. SHUTDOWN NORMAL or (IMMEDIATE)?
4. Copy the *.dbf files at OS level from D: to ... USB disk or network.
5. Shut down the 2008 server.
6. Change disks, create the RAID in the BIOS.
7. Start W2008. Is Oracle at this moment in SHUTDOWN mode?
8. Copy the *.dbf files back to the new D: (with the directory structure).
9. STARTUP Oracle.
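If the files do not end up under exactly the same paths after the swap, a sketch of how Oracle is told about a moved datafile (the file names and the E: drive are examples, not part of the plan above):

[code]
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RENAME FILE 'D:\ORADATA\INDX01.DBF' TO 'E:\ORADATA\INDX01.DBF';
SQL> ALTER DATABASE OPEN;
[/code]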
I searched and found that it has something to do with the SGA parameters. I saw that the shared_pool_size and sga_target parameters are set to 0... Also, certain SQL statements are hanging at some point. I thought I should change the above-mentioned parameters.
My question now is: can I use ALTER SYSTEM statements from SQL*Plus to change these parameters, and do they change immediately, or do I need to restart the Oracle instance for the changes to take effect? I would like to do:
alter system set sga_target=400m;
alter system set shared_pool_size=200m;
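A small sketch of the same statements with the scope spelled out (the sizes are the ones from the post): both parameters are dynamic, so they take effect immediately, but neither can exceed SGA_MAX_SIZE, which itself only changes after a restart.

[code]
ALTER SYSTEM SET sga_target = 400M SCOPE=BOTH;
ALTER SYSTEM SET shared_pool_size = 200M SCOPE=BOTH;

-- raising the ceiling itself requires a bounce (the value is an example)
ALTER SYSTEM SET sga_max_size = 800M SCOPE=SPFILE;
[/code]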
This facility has one last 10g database with a very problematic tablespace and the last datafile associated with it. The tablespace was set up with an INITIAL_EXTENT of 131,072 (128K) instead of the more 'normal' 4,194,304 (4M) and a NEXT_EXTENT of 262,144 (256K) instead of 4,194,304 (4M).
More worryingly, the datafile has INCREMENT_BY set to 1 (8K) instead of 1,280 (10M) or 2,048 (16M). Has anyone ever updated sys.ts$.dflinit and sys.ts$.dflincr to modify the INITIAL_EXTENT and NEXT_EXTENT, and sys.file$.inc to modify the INCREMENT_BY?
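For comparison, a sketch of the supported statements that change the same attributes without touching the dictionary tables (the tablespace and file names are assumptions):

[code]
-- default INITIAL/NEXT for new segments in a dictionary-managed tablespace
ALTER TABLESPACE app_data DEFAULT STORAGE (INITIAL 4M NEXT 4M);

-- autoextend increment of the datafile (this is what INCREMENT_BY reflects)
ALTER DATABASE DATAFILE '/u01/oradata/PROD/app_data01.dbf' AUTOEXTEND ON NEXT 16M;
[/code]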
SQL> update t set a = 1 where b = 2;   -- must have a redo record

2 rows updated.

SQL> rollback;
The redo record for the above uncommitted change must still be written from the redo buffer to the online redo log file. Why does Oracle write redo records for uncommitted changes to the online redo log file? When will they be used?