I have a SQL Server database I would like to migrate to Oracle. The database supports a large application and is around 10 GB. I requested a new instance but was advised I would have to pay for that, whereas a new schema could go into our current company instance. I am fine with that, since it won't cost more money to simply add a new schema to our company Oracle instance. Out of curiosity, what is the advantage of getting a new instance compared to creating a new schema for 10 GB of data? I assume the advantage of a new instance is that our schema and its workload would have their own dedicated environment and could grow in size without any issues?
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01": mvanmannekes/******** schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
- one ASM instance
- X DB instances
- each DB instance uses 2 or more dedicated diskgroups from the ASM instance
- there is one diskgroup named FREEDISK that contains spare disks
On each DB instance you can see:
- the list and global parameters of all diskgroups using the v$asm_diskgroup view
- the list and parameters of all disks the instance is using with the v$asm_disk view
So my question is: how (if this is possible) can I get the list of (spare) disks in the FREEDISK disk group?
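A minimal sketch of one way to get this, assuming you can connect to the ASM instance itself (where v$asm_disk lists every discovered disk, not only the ones a given DB instance has open):

-- run while connected to the ASM instance (as SYSDBA)
SELECT d.path, d.name, d.total_mb, d.header_status
  FROM v$asm_disk d
  JOIN v$asm_diskgroup g ON g.group_number = d.group_number
 WHERE g.name = 'FREEDISK';

From a DB instance this will not work, because v$asm_disk there only reflects the disks that instance actually uses.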
We have a single master schema that many developers access, and they all share the same password.
Now I would like to trace all the changes made by each user, so I created individual accounts for everyone and granted them permission to access that schema. Is there a way to audit the changes made by each of those users against that particular schema?
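A minimal sketch using standard auditing, assuming the master schema is called MASTER, the audit_trail initialization parameter is enabled (e.g. set to DB), and CUSTOMERS is just a placeholder table name:

-- record one audit row per DML statement against this table
AUDIT INSERT, UPDATE, DELETE ON master.customers BY ACCESS;

-- later, review who changed what
SELECT username, obj_name, action_name, timestamp
  FROM dba_audit_trail
 WHERE owner = 'MASTER';

Object auditing is per table, so you would repeat the AUDIT statement for each table of interest (or use statement-level auditing such as AUDIT UPDATE TABLE by a specific user).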
We have an application with many separate databases (one per customer). Given that they share the same business requirements (service hours, change management, etc.), we're interested in potentially consolidating the separate databases (which are relatively small) into separate schemas within a smaller number of databases to reduce the overhead.
Our issue is that the application is hard-coded to use a specific administrator and application connection user name. Changing this is unfortunately not an option.
Given this limitation, is there any way to map a generic user onto a customer-specific schema based on the database service they connect to? Each customer connects to a different database service but may use the same user name. We considered private synonyms, but that seems to achieve the opposite (i.e. many different users connecting and mapping to a single user's schema). One thing to point out is that where there is a single user name, it is acceptable for a single password to be used across the different customer databases, as it will be a single admin/application user.
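One possible approach, sketched under the assumption that an AFTER LOGON trigger is acceptable; APP_USER, the service names and the schema names are all placeholders:

CREATE OR REPLACE TRIGGER map_service_to_schema
AFTER LOGON ON DATABASE
DECLARE
  v_service VARCHAR2(64) := SYS_CONTEXT('USERENV', 'SERVICE_NAME');
BEGIN
  -- only redirect the shared application account
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APP_USER' THEN
    IF v_service = 'CUSTA_SVC' THEN
      EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = CUSTA';
    ELSIF v_service = 'CUSTB_SVC' THEN
      EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = CUSTB';
    END IF;
  END IF;
END;
/

Note that ALTER SESSION SET CURRENT_SCHEMA only changes name resolution; the shared account still needs the appropriate object privileges on each customer schema.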
I would like to create a table in another schema (CBF) identical to one that already exists in my schema (TLC), without the data, but including the related indexes, synonyms and grants.
How could I do this without using export/import? I am using TOAD 9.0.1.
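A minimal sketch, assuming a table named MY_TABLE (placeholder): DBMS_METADATA can generate the DDL for the table's dependent objects, which you can then edit to point at the CBF schema:

-- empty copy of the table structure in the CBF schema
CREATE TABLE cbf.my_table AS
  SELECT * FROM tlc.my_table WHERE 1 = 0;

-- generate DDL for the dependent objects, to be recreated against CBF
SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE', 'TLC') FROM dual;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('INDEX', 'MY_TABLE', 'TLC') FROM dual;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT', 'MY_TABLE', 'TLC') FROM dual;

(GET_DEPENDENT_DDL raises an error if the table has no dependent objects of the requested type, so handle that case.)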
I need to move the tables, with their data, from the user SCOTT (the full schema) to another schema named TEST. In my case SCOTT is in the USERS tablespace, and for the TEST schema I have created a different tablespace named TEST_TBS.
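A minimal per-table sketch in plain SQL, using EMP purely as an example table name (Data Pump with REMAP_SCHEMA and REMAP_TABLESPACE would be the more complete route, since it also carries indexes, constraints and grants):

-- copy one table with its data into TEST, placing it in the new tablespace
CREATE TABLE test.emp TABLESPACE test_tbs AS
  SELECT * FROM scott.emp;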
A user is using an ad hoc tool similar to SQL Developer called PeopleSoft Application Designer.
He creates a connection to the database, then issues ALTER SESSION SET CURRENT_SCHEMA to the restricted schema. The connected user does not have direct privileges on the restricted schema, which they call SYSADM.
After changing the schema context in that manner, he creates objects in SYSADM. A schema trigger then fires and grants privileges on the new objects created in SYSADM. Doing the same in either SQL*Plus or SQL Developer does not fire the schema trigger.
I think SQL*Plus and SQL Developer are working as they should. Altering the session like that does not change your identity, just the schema context. But when you examine v_$session, the connection from this other tool looks exactly the same as one from SQL*Plus or SQL Developer when changing the schema context in the session.
Instead of trying to figure out what this other tool is doing, is there any way for that schema trigger to fire when using this process from one of our tools?
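One option, sketched under the assumption that a DATABASE-level DDL trigger is acceptable: a trigger ON DATABASE filtered by ORA_DICT_OBJ_OWNER fires no matter which client or user issues the CREATE, so it would not depend on the tool. APP_ROLE, the SELECT privilege and the TABLE object type are placeholders:

CREATE OR REPLACE TRIGGER grant_on_new_sysadm_objects
AFTER CREATE ON DATABASE
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  IF ORA_DICT_OBJ_OWNER = 'SYSADM' AND ORA_DICT_OBJ_TYPE = 'TABLE' THEN
    -- DDL such as GRANT cannot run directly inside a system trigger,
    -- so defer it to a job that runs after the triggering transaction commits
    DBMS_JOB.SUBMIT(
      job  => l_job,
      what => 'BEGIN EXECUTE IMMEDIATE ''GRANT SELECT ON SYSADM."'
              || ORA_DICT_OBJ_NAME || '" TO app_role''; END;');
  END IF;
END;
/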
I have a standard schema named ABC and 600 more schemas in my database. They all have the same table names and column names as the standard schema, but in some tables the number of columns varies. So I need to compare every schema's column names against my standard schema. I created the script below, but it generates output in an infinite loop.
SET SERVEROUTPUT ON
DECLARE
  V_COLS VARCHAR2(20);
BEGIN
  FOR CUR_CCD IN (SELECT DISTINCT TABLE_NAME, OWNER
                    FROM ALL_TABLES
                   WHERE OWNER LIKE 'CCD_MAIN'
[code]....
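For reference, a minimal set-based sketch that compares one schema against the standard schema without any looping (ABC is the standard schema from above; CCD_MAIN is the owner used in the script):

-- columns present in the standard schema but missing from CCD_MAIN
SELECT table_name, column_name
  FROM all_tab_columns
 WHERE owner = 'ABC'
MINUS
SELECT table_name, column_name
  FROM all_tab_columns
 WHERE owner = 'CCD_MAIN';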
I'm trying to test moving a single-instance 11.2.0.2 database to a single instance with Grid Infrastructure.
Here is what I've done:
1. Install a database (11.2.0.2), single instance, and create a database
2. Install Grid software only (user: grid)
3. Start the cluster
4. "srvctl add database -d orclsidb -o $ORACLE_HOME" to register the database with Grid
   4.1 I was able to start/stop the database with the srvctl command from here onwards
5. Configure disks using asmlib and start ASM
6. "srvctl add asm" to register ASM with Grid
7. PROBLEM ... when I try to "backup as copy database format '+ASM_DATA_DG';" as the oracle user, it errors out as below:
RMAN> backup as copy database format '+ASM_DATA_DG';
Starting backup at 24-AUG-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=134 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00011 name=/fs0/oracle/oradata/orclsidb/sjc883p_indx_large_01.dbf
RMAN-00571:
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 08/24/2012 08:53:29
ORA-19504: failed to create file "+ASM_DATA_DG"
ORA-12547: TNS:lost contact
ORA-15001: diskgroup "ASM_DATA_DG" does not exist or is not mounted
ORA-15055: unable to connect to ASM instance
ORA-12547: TNS:lost contact

Considering the "TNS: lost contact" error, I tried to check whether the grid listener is aware of the ASM instance:
[Code]....
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=usracdb03.rwcats.com)(PORT=1522)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                22-AUG-2012 06:08:53
Uptime                    2 days 2 hr. 50 min. 48 sec
Trace Level               off
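ORA-12547 on the database side when writing to ASM often points at OS-level permissions (for example, the database software owner missing from the asmdba group, or wrong permissions on the grid home's oracle binary) rather than at the listener. As a quick sanity check, a sketch assuming you can connect to the +ASM instance as the grid owner:

-- on the +ASM instance: is the diskgroup actually mounted?
SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;

-- also on the +ASM instance: which database instances are connected as ASM clients?
SELECT instance_name, db_name, status FROM v$asm_client;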
Our Application is on Oracle 10g and we are going to migrate to Oracle 11g.
1) As of now we are using ojdbc14.jar. If we migrate, can we use the same jar, or is it mandatory with 11g to use either ojdbc5.jar or ojdbc6.jar? I guess the jar version depends on the JDK version we are using, but we are on JDK 1.5 and still use ojdbc14.jar, which works fine. So is it the case that Oracle 11g supports only JDK 1.5 or JDK 1.6?
2) Is there any effect on the code if we change from ojdbc14.jar to ojdbc6.jar? I have seen that there are some extra classes in ojdbc6.jar.
3) Also, we are going to migrate the Oracle database from Solaris to AIX, while our application stays on Solaris. Is there any compatibility problem if our application is on Solaris and the database is on AIX?
We are moving our datacenter from one location to another. The existing datacenter has ASM (a RAC environment) on HP storage, which needs to be moved to EMC storage at the target datacenter. Basically, how can ASM be disassociated from the source HP storage and then associated with the target EMC storage with very minimal downtime?
What are all the possible approaches, and what are the step-by-step procedures required for each?
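One common approach, sketched under the assumption that the new EMC LUNs can be presented to the same hosts alongside the old HP LUNs during the move (the diskgroup name, disk names and paths are placeholders): add the new disks and drop the old ones in a single statement, and ASM rebalances online with no downtime:

-- on the ASM instance, once the EMC LUNs are visible through the ASM disk string
ALTER DISKGROUP data
  ADD DISK '/dev/emcpowera1', '/dev/emcpowerb1'
  DROP DISK DATA_0000, DATA_0001
  REBALANCE POWER 8;

-- watch the rebalance until it completes
SELECT group_number, operation, state, est_minutes FROM v$asm_operation;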
I am going to migrate my databases from 10g to 11g, and I am unaware of the performance issues involved. How can I improve performance at the database level, and what are the possible bottlenecks?
Data migration for three tables. I have three tables:
1. npi_p_mig, with the fields (pr_id, mi_id, qty, sl_dt, fac_code)
2. np_detail (pr_id, mil_id, qty, sl_dt, facility)
3. np_ref_tab (facility, fac_code)
I need to migrate the data from the two tables (np_detail, np_ref_tab) into the new table npi_p_mig (pr_id, mi_id, qty, sl_dt, fac_code). I need a SQL script to migrate those two tables into the new table (npi_p_mig).
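A minimal sketch, assuming facility is the join key between np_detail and np_ref_tab and that np_detail.mil_id maps to npi_p_mig.mi_id:

INSERT INTO npi_p_mig (pr_id, mi_id, qty, sl_dt, fac_code)
SELECT d.pr_id, d.mil_id, d.qty, d.sl_dt, r.fac_code
  FROM np_detail d
  JOIN np_ref_tab r ON r.facility = d.facility;

COMMIT;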
Is the Oracle migration assistant a utility included with the Oracle Forms 6i distribution, or do I need to download it from somewhere else? Is a license required? Is there any disadvantage to using Oracle Application Express for the migration from Oracle Forms 6i to 11g?
We have an application that has been running on Forms 5.0.6.8.0 and Reports 3.0.5.8.0 for the past 10 years. I have two main issues that I have not been able to sort out from the beginning to date. The issues are:
1. I use host() to call reports. For example:
a) I run a report and it works fine.
b) Now I go to any other master screen and just query some data.
c) Now I want to run any of the reports, and nothing works. To generate the report I have to open another instance of the application, and then I can run the report. In that instance the same story repeats again. I don't know what the problem is.
2. We are a financial company with a lot of dependency on Excel. I want to export reports to Excel, but there is no option for that.
We have to migrate our RAC database to a new SAN at a new data centre. Since the OS is IBM AIX, we can take a backup of the VG with mksysb, and the storage team will migrate the data from the old SAN to the new SAN.
But even after we restore the VG and the SAN data, since our old SAN is EMC and the new SAN is Hitachi, the disk paths for all the ASM disk groups will change.
So is there any way to rename these disk paths for the disk groups?
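For what it's worth, ASM identifies its member disks by the metadata in the disk headers rather than by the OS device path, so no rename should be needed as long as the ASM discovery string matches the new Hitachi device names. A minimal sketch (DATA and the path pattern are placeholders for your diskgroup name and AIX device naming):

-- on each ASM instance, point discovery at the new device paths
ALTER SYSTEM SET asm_diskstring = '/dev/rhdisk*' SCOPE=BOTH;

-- verify the disks are discovered, then mount
SELECT path, header_status, name FROM v$asm_disk;
ALTER DISKGROUP data MOUNT;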
We have our current production running on 9i and eventually want to migrate to 11g-r2. But the challenges are as follows:
The 9i production is running in the San Francisco data center. The 11g-r2 production is already set up and running in the Atlanta data center.
The database size is around 2 TB on 9i. We are looking to transfer this to 11g-r2 and wondering what options we have at our disposal. I was looking into EXP/IMP, but somebody said database links would be much faster and more reliable than EXP/IMP.
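A minimal sketch of the database-link route, assuming a TNS alias SF9I pointing at the 9i source and copying table by table (the table, owner and link names are placeholders); whether a direct 9i-to-11gR2 link is supported should be verified against Oracle's client/server interoperability matrix:

-- on the 11gR2 side
CREATE DATABASE LINK sf9i
  CONNECT TO app_owner IDENTIFIED BY app_password
  USING 'SF9I';

-- direct-path copy of one table over the link
INSERT /*+ APPEND */ INTO customers
SELECT * FROM customers@sf9i;
COMMIT;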
Currently our application runs with Oracle 10.2 as the database and SAP 11.5 as the front end. We are going to upgrade the database from Oracle 10.2 to 11g (version not yet decided) and the application from SAP 11.5 to 12.2. As part of this, I want to know what things I need to take care of during the migration (the Oracle upgrade), as I'm an Oracle developer.
1) Is it possible to access objects in an 8i database from an 11g database using a DB link (as was possible from 10g to 8i)? 2) Is it possible to access objects in a DB2 database from an 11g database using a DB link (as was possible from 10g to DB2)?
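For the DB2 case, a database link would go through Oracle Database Gateway / DG4ODBC rather than directly to DB2. A minimal sketch, where DB2GW is a placeholder TNS alias pointing at the gateway listener (with HS=OK in its connect data) and the credentials and table name are placeholders:

-- heterogeneous link to DB2 through a configured gateway
CREATE DATABASE LINK db2link
  CONNECT TO "db2user" IDENTIFIED BY "db2password"
  USING 'DB2GW';

SELECT * FROM some_db2_table@db2link;

For the 8i case the link syntax is the same, but whether an 11g-to-8i connection actually works depends on Oracle's client/server interoperability support for those versions, so it should be verified or staged through an intermediate release.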