For migrating a database from version 10.2.0.3 on Windows to version 11.2.0.3 on Linux, I am using the transportable tablespace option. After the import of the metadata and the copy of the tablespace datafiles to the destination, the sequences and functions are not available. Does it work if I later export and import the sequences and functions separately?
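For what it's worth, sequences and PL/SQL objects live in the data dictionary, not in the transported tablespace, so they do need a separate export/import. A minimal Data Pump sketch, assuming the schema is called APPUSER (the schema, directory and dump file names are illustrative, not from the post):

expdp system/*** schemas=APPUSER include=SEQUENCE include=FUNCTION include=PROCEDURE directory=DATA_PUMP_DIR dumpfile=appuser_code.dmp
impdp system/*** directory=DATA_PUMP_DIR dumpfile=appuser_code.dmp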
I am using transportable tablespaces to migrate from 10g on Windows to 11g on Linux.
The steps are as follows: on 10g, put the tablespace in read-only mode, export the metadata, and copy the tablespace datafiles to the 11g server. Now on 11g, when I import the exported metadata it says that the user does not exist, and if I create the user and the tablespace it does not work either, as it says the tablespace already exists.
For transportable tablespaces, do I have to create the user on 11g beforehand? If yes, then I also need to create the tablespace that I have to assign to the user.
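From what I've seen, the owning user must exist on the target before the metadata import, but the transported tablespace must not be pre-created; the import creates it from the copied datafiles. A minimal sketch, assuming the schema is APPUSER and the tablespace is APPDATA (names, password and paths are illustrative):

-- On 11g, before the import: create the user with its default
-- tablespace pointing at an existing tablespace, not the transported one
CREATE USER appuser IDENTIFIED BY secret
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;
GRANT CREATE SESSION, CREATE TABLE TO appuser;

-- Then plug the tablespace in with the copied datafile (original imp syntax):
imp "sys/*** as sysdba" transport_tablespace=y tablespaces=APPDATA datafiles='/u01/oradata/appdata01.dbf' file=tts_meta.dmp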
I'm trying to migrate an Oracle Enterprise Server database, v9.2.0.1.0 deployed on Windows 2000 Server 32-bit, to v10.2.0.5.0 on Windows 2008 R2 64-bit. Basically, I'm following the instructions provided by this document: [URL]
1) Updated v9.2.0.1.0 to v9.2.0.6.0 on the old server
2) Backed up the database as follows:
SQLPLUS /NOLOG
CONNECT / AS SYSDBA
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
SHUTDOWN IMMEDIATE;
3) Installed Oracle Enterprise Server 10.2.0.4 on the new server and updated to v10.2.0.5
4) Copied the trace files, data files, control files, archive logs and init.ora to the new server. The redo logs were not copied, since ARCHIVELOG mode is enabled on the old server.
5) Created an Oracle service on the new server as follows:
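(The original command was not included in the post; on Windows this is normally done with oradim. A typical invocation, where the SID, password and pfile path are illustrative, not from the original:)

oradim -NEW -SID orcl -SYSPWD password -STARTMODE auto -PFILE C:\oracle\admin\orcl\pfile\init.ora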
We had our production database hosted on Oracle 9.2.0. A few months back we migrated it to Oracle 10.2.0.4.0. After the migration I noticed that redo generation has become very high: the number of log files generated during production hours used to be around 30, whereas after the migration it is around 200 files per day. I have run a Statspack report on this database, and it says that db block changes and disk writes have become very high. The timed_statistics parameter has also been set to FALSE, yet there is no reduction in the number of log files generated. I used export/import to upgrade the database.
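To put numbers on the extra redo, the same quick check can be run on both databases (a minimal sketch; these are the standard v$sysstat statistic names):

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo size', 'db block changes', 'physical writes');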
I want to load lakhs of records into a table. My problem is that after loading about a quarter of the records, my process abends because of the size of my rollback segment area, and I don't have the option to increase it. Is there any way to get intermediate commits when I am using the imp or sqlldr utilities, so the entire load completes without abending?
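Both utilities support intermediate commits, as sketched below (the user, file and table names are illustrative):

-- imp: commit after each buffer of rows rather than once per table
imp scott/*** file=data.dmp tables=big_table commit=y buffer=1048576

-- sqlldr (conventional path): commit every 5000 rows
sqlldr scott/*** control=big_table.ctl rows=5000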
I am familiar with the netca tool. However, there is one more utility, netmgr, that provides the same functionality. I checked with many DBAs for the exact difference but did not get a good answer from them, and I have also searched Google without finding it. Please list the exact difference between these two tools (netca, netmgr).
I have been working on Oracle for the past year, using it as the backend for my project. We are planning to move to SQL Server 2005. I have never done this before and I'm keen on learning and understanding the process so as to have a smooth migration, and to find out if there are tools that do this.
I am currently posted on a project where the application uses Forms 6i as the front end and Oracle 10g as the backend. The basic requirement of the project is to migrate the forms from 6i to 10g (or 11i).
Are there any tools available in the market for migrating from Forms 6i to 10g? If so, how effective are they, i.e. how much manual intervention do we need, and what are the common problems we will face during migration?
While migrating Forms 6i to 10g I have come across one unique behavior:
1. In Forms 6i: suppose we have a text item of datatype CHAR and length 5. Copy a text of 10 characters, 1234567890, and paste it into the text box; it automatically trims the text, takes the first 5 characters (12345) and shows them in the text box.
2. In Forms 10g: with the same text item of datatype CHAR and length 5, paste the same 10 characters, 1234567890, into the text box and nothing happens. Whenever the text is longer than 5 characters, the paste is simply ignored.
I need to implement the 6i behavior in 10g (apart from a validation trigger); see the sketch below.
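One possible workaround (it still uses a trigger, though not a validation trigger): raise the item's Maximum Length property above 5 so the paste is accepted, then truncate on leaving the item. A minimal sketch, assuming the item is :BLK.TXT_ITEM (the block and item names are hypothetical):

-- POST-TEXT-ITEM trigger on BLK.TXT_ITEM (Forms PL/SQL)
IF LENGTH(:blk.txt_item) > 5 THEN
   :blk.txt_item := SUBSTR(:blk.txt_item, 1, 5);
END IF;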
We have a large application consisting of almost 100 forms (Forms 6i). We need to convert it to 10g, keeping in mind that this is our first exposure to such a migration.
My manager has a couple of 2-proc dual-core Opteron servers with 8GB of RAM and RAID controllers. These servers currently run Solaris 8, but he wants to migrate them to a newer Linux system.
I have to migrate a 9i database from one platform (HP-UX v1 PA-RISC) to another (HP-UX v3 Itanium) and upgrade it to 11gR2. I know that multiple scenarios exist (e.g. exp/imp, Data Guard, GoldenGate, dblink/direct load, etc.).
Scenario:
1- Activate force logging and supplemental logging in the source database.
2- Export the source database schemas in consistent mode and import them into the 11gR2 database.
3- Record the SCN of the last user transaction applied before the export.
4- Use LogMiner to capture all the transactions applied after the export.
5- Stop the source database, apply the transactions captured by LogMiner, and then switch the users to the new database.
I thought the downtime would be minimal: it would take only the time needed to reapply the committed transactions captured by LogMiner.
Is it feasible to do that, or does LogMiner have limitations? Is it recommended, or is it difficult to do?
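I can't vouch for every datatype, but the LogMiner call for step 4 would look roughly like this (the SCN is the one recorded in step 3; assumes the relevant redo/archive logs were registered with DBMS_LOGMNR.ADD_LOGFILE first, and the schema name is illustrative):

BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    startscn => 1234567,  -- SCN recorded before the export
    options  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
              + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/

SELECT scn, sql_redo
FROM   v$logmnr_contents
WHERE  seg_owner = 'APPUSER';

The main feasibility risk is that LogMiner does not reconstruct SQL for every datatype (certain LOB and object-type operations, for example), so that is worth checking in the 9i documentation before committing to this plan.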
I am using Oracle 8i (8.1.7) with Forms 5 and Reports 3 on a Windows 2000 platform. Now I want to migrate from 8i to 10g. Will the applications in Forms 5 and Reports 3 still work, or do I have to upgrade the Forms and Reports as well?
I am migrating MySQL queries to Oracle (SQL*Plus). Tell me what the code below is doing, and the equivalent of this code in Oracle.
declare @start_date datetime
select @start_date = '$first_date'
declare @end_date datetime
select @end_date = '$end_date'

This is followed by:

select distinct ' ', column from my_table;
I tried a lot of ways (SET @start_date, etc.) but nothing really works.
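If the queries run through SQL*Plus, the closest equivalent of DECLARE @var is a substitution variable; a minimal sketch, assuming $first_date/$end_date are injected by a calling shell script, and where column_name, date_col and the date format are placeholders:

DEFINE start_date = '$first_date'
DEFINE end_date   = '$end_date'

SELECT DISTINCT ' ', column_name
FROM   my_table
WHERE  date_col BETWEEN TO_DATE('&start_date', 'YYYY-MM-DD')
                    AND TO_DATE('&end_date',   'YYYY-MM-DD');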
We have a requirement to migrate data from a legacy system (Oracle 10g) to our database (Oracle 10g again).
The thing is that the table structures are slightly different between the two. Our current structure has some fields added/removed, as well as some tables added/removed, but by and large the databases are similar (about 75%).
While migrating I would need to map the fields in the source and destination manually. I would also need to populate some data into the newly added fields.
Is there any way to do this? I was checking out Oracle SQL Developer but could not proceed very far.
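One plain-SQL way to express the mapping is INSERT ... SELECT over a database link, with literals or defaults for the new fields; a minimal sketch where the link, table and column names are all hypothetical:

-- On the destination 10g database:
CREATE DATABASE LINK legacy CONNECT TO src_user IDENTIFIED BY src_pwd
  USING 'LEGACYDB';

INSERT INTO customers (cust_id, cust_name, region_code)
SELECT c.id, c.name, 'UNKNOWN'   -- default for the newly added column
FROM   customers@legacy c;
COMMIT;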
I have migrated from Oracle 8i (8.1.7) to Oracle 10g. When I execute a query in 8i without any ORDER BY clause, I get the result in ascending order; the same query executed in 10g gives a result which is not ordered. How do I get an ordered result in 10g? There are many forms and reports that use LOVs which are now unordered. Can I set the ordering at the database level, so that I do not have to alter all the forms and reports?
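For context: without an ORDER BY, rows come back in whatever order the chosen access path happens to produce, and the 10g optimizer (with its hash-based grouping and different access paths) frequently produces a different order than 8i did; as far as I know there is no supported database-wide setting that restores the old behavior. The reliable fix is an explicit ORDER BY in each LOV record group query, e.g.:

SELECT ename, empno FROM emp ORDER BY ename;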
I have also migrated my forms from 5 to 6, but the combo boxes in some of the 6i forms do not appear at runtime. How can I solve this problem? I have attached a Forms 5 .fmb file.
We're migrating our first APEX application from one server to another. The export and import are done, but we're having problems with supporting objects (we get a login prompt but no images). Okay, so I figured out how to get the supporting objects into a script.
When I run it I get:
declare
*
ERROR at line 1:
ORA-20001: Package variable g_security_group_id must be set.
ORA-06512: at "APEX_040100.WWV_FLOW_IMAGE_API", line 12
ORA-06512: at "APEX_040100.WWV_FLOW_IMAGE_API", line 32
ORA-06512: at "APEX_040100.WWV_FLOW_API", line 10508
ORA-06512: at line 6
The suggestion I found was to connect as the parsing_schema user to run the script. I did that but still get the same error.
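The error suggests the session needs the target workspace context set before the supporting-objects script runs; one documented way to set it is via APEX_UTIL (a minimal sketch, where the workspace name is illustrative):

BEGIN
  apex_util.set_security_group_id(
    p_security_group_id => apex_util.find_security_group_id(
                             p_workspace => 'MY_WORKSPACE'));
END;
/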
I have a database on my local machine that doesn't support Turkish characters. Its NLS_CHARACTERSET is WE8ISO8859P1; it must be changed to WE8ISO8859P9, since that supports the full set of Turkish characters. I would like to migrate the character data using a full export and import, and my strategy is as follows:
1- create a full export to a location on the network,
2- create a new database on the local machine whose NLS_CHARACTERSET is WE8ISO8859P9 (I would like to change NLS_LANGUAGE and NLS_TERRITORY along the way),
3- perform a full import into the newly created database.
I've implemented the first step, but I couldn't implement the second. I attempted it in the Toad editor by clicking Create -> New Database, but I cannot connect to the new database, and I must connect to it in order to perform the full import.
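DBCA (or a manual CREATE DATABASE statement with a CHARACTER SET WE8ISO8859P9 clause) is the usual way to get the new character set; once the new database is up and reachable, the settings can be double-checked before the import with:

SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_LANGUAGE', 'NLS_TERRITORY');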
We are in the process of migrating our database from Oracle 9i to Oracle 10g.
I am getting the error below while parsing XML in 10g.
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00601: Invalid token in: '//soap:Envelope/soap:Header/coHeader/company/text()'
The same code works fine in the Oracle 9i database with the same XML. Is there any difference in XMLTYPE functionality between Oracle 9i and 10g?
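10g's XPath evaluation is stricter about namespace prefixes than 9i was; declaring the soap prefix explicitly usually clears LPX-00601. A minimal sketch, where the table and column names and the namespace URI are assumptions, not from the post:

SELECT extractvalue(
         t.xml_doc,
         '//soap:Envelope/soap:Header/coHeader/company/text()',
         'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"') AS company
FROM   my_xml t;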
I have exported the data of one user and am importing it into another schema on another server. The import works fine for quite a number of tables, but after some time it starts giving me the errors mentioned below...
IMP-00008: unrecognized statement in the export file: <
IMP-00008: unrecognized statement in the export file: <
IMP-00008: unrecognized statement in the export file: <
IMP-00008: unrecognized statement in the export file: +A
IMP-00008: unrecognized statement in the export file: ...
I have a requirement to read a flat text file (around 15,000 lines) residing at a client location from the DB server and write it into one cell of a table.
I tried UTL_FILE and DBMS_LOB but I am not able to access the client location to read the file, as the path is read from an Oracle directory object.
E.g. my client path is 198.168.1.1 and my DB server is on Unix, say 192.168.1.10. The file location is \\192.168.1.1\share\abc.txt, so I created one Oracle directory, MY_DIR, with DIRECTORY_PATH '\\192.168.1.1\share'. But neither UTL_FILE nor DBMS_LOB is able to access the file.
Error Message:
-------------
Unable to process CLOB -22288 ~ ORA-22288: file or LOB operation FILEOPEN failed
No such file or directory

Few details for reference:
-------------------------
File location: \\192.168.1.1\share\abc.txt
Unix DB server location: 192.168.1.10
Table: Test (filename VARCHAR2(30), content CLOB)
Oracle dir: MYDIR
Directory_path: \\192.168.1.1\share
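For what it's worth, UTL_FILE and DBMS_LOB only read paths visible to the database server's own OS, so a Windows UNC path will not work from a Unix host; the share has to be mounted on the Unix server first (e.g. via NFS or Samba). A minimal sketch, assuming the share is mounted at the hypothetical path /mnt/share:

CREATE OR REPLACE DIRECTORY my_dir AS '/mnt/share';

DECLARE
  l_bfile BFILE := BFILENAME('MY_DIR', 'abc.txt');
  l_clob  CLOB;
  l_dest  INTEGER := 1;
  l_src   INTEGER := 1;
  l_lang  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn  INTEGER;
BEGIN
  -- create the row with an empty CLOB and grab its locator
  INSERT INTO test (filename, content)
  VALUES ('abc.txt', EMPTY_CLOB())
  RETURNING content INTO l_clob;

  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(
    dest_lob     => l_clob,
    src_bfile    => l_bfile,
    amount       => DBMS_LOB.LOBMAXSIZE,
    dest_offset  => l_dest,
    src_offset   => l_src,
    bfile_csid   => DBMS_LOB.DEFAULT_CSID,
    lang_context => l_lang,
    warning      => l_warn);
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/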
I have a question regarding a difference in character sets while taking an export (logical backup) of a database directly on the server (Linux RHEL 2.1 AS) versus on a client (a Windows XP Professional machine where only the Oracle 9i client is installed). On the server everything seems fine, but on the client node I get the following error for almost all tables:
EXP-00091: Exporting questionable statistics.
My questions are:
[1] Does it create any sort of problem if I later import the data which was taken from the client node?
[2] Why is there a (marginal) difference in the dump (.dmp) file size?
[3] Is there any way to overcome it, or is this its natural behavior and not a problem?
[4] If I'm using LONG or BLOB datatypes for some of my tables, will they have any problem if I proceed as above?
Additional information about the character sets:

On the server node:
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

On the client node:
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses US7ASCII character set (possible charset conversion)
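In my experience EXP-00091 on the client is just the NLS_LANG mismatch that the export banner describes; setting the client's NLS_LANG to the database character set before running exp avoids it. A sketch, assuming the database character set really is US7ASCII as the client-side banner reports:

C:\> set NLS_LANG=AMERICAN_AMERICA.US7ASCII
C:\> exp system/*** full=y file=full.dmp log=full_exp.log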
load data
infile 'trlc.csv'
replace
into table trlc
fields TERMINATED BY '|'
TRAILING NULLCOLS
(est_no, right_no, maj_auth, weight, idm_ht, c_date, P_tkt)
The rows get inserted successfully, but the result sets differ. For example, when I do a select in SQL Server, 'select len(weight) from trlc;', I get the length as 0, but when I do the equivalent select in the Oracle database, I get the length as 1. The result set also varies for the query below:
select * from trlc where weight=' ';
(SQL Server returns 1 row but Oracle returns no rows)
Do I need to apply any conversion to the weight field so that it accepts the ' ' value?
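Probably relevant: Oracle treats the empty string as NULL, and SQL Server's LEN() ignores trailing spaces while Oracle's LENGTH() counts them, so the two engines are not looking at the same value. A couple of hedged checks (table and column names from the post):

-- If the field arrived empty, Oracle stored NULL, not '':
SELECT COUNT(*) FROM trlc WHERE weight IS NULL;

-- To force a single space instead of NULL at load time, the control
-- file can transform the field:
--   weight "NVL(:weight, ' ')"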
I've inherited a 10.2.0.1.0 instance running on a Windows 2003 box; it's running fine, no problems, other than that the system has been in production since 2005 and has gotten pretty old and tired. This old box has one tablespace on it... called "gateway".
I've installed 11.2.0.1 on a new (Windows 2008 R2 Enterprise 64-bit) server and created an empty database also called "gateway".
Now to move the data and views, objects, everything.
I've read up on a variety of migration techniques (oops, I mean "upgradation" LOL) and can follow the steps...
In short, I want to pull everything off of server a (10.2) and put it into production on server b (11.2). There seem to be quite a few options.
1. Install 10.2 on my NEW server (server b), move the data over and get everything running, then install 11.2 and have it upgrade the database as part of the installation process.
2. Drop the empty tablespace on server b, stop the database on server a, copy the files over from the old home to the new home, and run DBUA or set the compatibility attribute...
3. Run some type of server a to server b utility that can bridge the two over the network. Some type of mirroring technique?
4. Run file export scripts on server a, copy the files to server b and run various import scripts.
I tend to think that option 3 would be best because both instances are in great health and running right now. Is there a mechanism that allows the 11.2 instance to see and upgrade from a different server?
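On option 3: Data Pump can pull straight from the old instance over a database link, which is the closest thing to a built-in bridge; a minimal sketch, assuming a link OLD10G has been created in the new 11.2 database pointing at server a (the link and log file names are illustrative):

impdp system/*** network_link=OLD10G full=y logfile=gateway_imp.log

The upgrade itself still follows the normal 10.2 -> 11.2 rules; the network import just avoids staging dump files on disk.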