Globalization :: Data Migration From WE8MSWIN1252 To AR8MSWIN1256
Feb 19, 2013
I have Arabic data stored in the two encodings below in an Oracle AL32UTF8 database:
1 million rows in WE8MSWIN1252
0.5 million rows in AR8MSWIN1256
In all cases I would like to convert the 1 million WE8MSWIN1252 rows to AR8MSWIN1256. I was able to convert the data encoding from 1252 to 1256 using SQL Developer, but had no luck with the Oracle export/import utilities (both exp and expdp)… I'm thinking a certain locale may be required for export/import to work.
Also, my company said SQL Developer is a free utility that may not be supported by Oracle, so I should use export and import for this. I need to convert only one table.
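For reference, a minimal sketch of the two levers involved; the table, column, and flag names are hypothetical, and whether CONVERT reinterprets the bytes correctly depends on how they were originally stored, so treat this as a starting point rather than a recipe:

-- In-database conversion (roughly what SQL Developer did):
UPDATE arabic_data
   SET name_ar = CONVERT(name_ar, 'AR8MSWIN1256', 'WE8MSWIN1252')
 WHERE encoding_flag = '1252';  -- hypothetical flag marking the 1252 rows
COMMIT;

-- Classic exp/imp converts through the client character set given in
-- NLS_LANG, so the locale hunch is on the right track; e.g. before exp:
-- export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252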
I have a database on my local machine that doesn't support Turkish characters. My NLS_CHARACTERSET is WE8ISO8859P1; it must be changed to WE8ISO8859P9, since that supports the full Turkish character set. I would like to migrate the character data using a full export and import, and my strategy is as follows:
1- create a full export to a location on the network,
2- create a new database on the local machine whose NLS_CHARACTERSET is WE8ISO8859P9 (I would like to change NLS_LANGUAGE and NLS_TERRITORY along the way),
3- perform a full import into the newly created database. I've completed the first step, but I couldn't complete the second. I attempted it in the Toad editor by clicking Create -> New Database, but I cannot connect to the new database, and I must connect to it in order to perform the full import.
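A minimal sketch of step 2 from the command line, assuming Windows and hypothetical names/paths (DBCA is the easier route, but this shows where the character set is fixed). NLS_LANGUAGE and NLS_TERRITORY are initialization parameters, so they belong in the PFILE rather than in CREATE DATABASE, and catalog.sql/catproc.sql must be run afterwards before a full import will work:

C:\> oradim -NEW -SID TRDB -STARTMODE manual
C:\> set ORACLE_SID=TRDB
C:\> sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='C:\oracle\admin\TRDB\pfile\init.ora'
SQL> CREATE DATABASE trdb
       CHARACTER SET WE8ISO8859P9
       NATIONAL CHARACTER SET AL16UTF16
       DATAFILE 'C:\oradata\TRDB\system01.dbf' SIZE 500M AUTOEXTEND ON
       SYSAUX DATAFILE 'C:\oradata\TRDB\sysaux01.dbf' SIZE 300M AUTOEXTEND ON
       UNDO TABLESPACE undotbs1 DATAFILE 'C:\oradata\TRDB\undo01.dbf' SIZE 200M
       DEFAULT TEMPORARY TABLESPACE temp TEMPFILE 'C:\oradata\TRDB\temp01.dbf' SIZE 100M;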
I want to convert my database character set from WE8MSWIN1252 to Unicode, because I have to transport tablespaces to a destination database that is Unicode while the source is WE8MSWIN1252. The transportable-tablespace import failed for this reason, so I want to convert the source from WE8MSWIN1252 to Unicode.
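A hedged sketch of the 10g-era CSSCAN/CSALTER flow (the Database Migration Assistant for Unicode replaces it in later releases); the scanner schema must be installed first via csminst.sql, and csalter only completes when the scan reports no convertibility exceptions:

$ csscan "sys/password as sysdba" FULL=Y TOCHAR=AL32UTF8 LOG=toutf8 CAPTURE=Y ARRAY=1000000 PROCESS=2
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP RESTRICT
SQL> @?/rdbms/admin/csalter.plb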
I am unable to insert or even display the euro symbol, from the server as well as from a Windows client.
Following are the details of my database server:
SQL> select * from nls_database_parameters;

PARAMETER                      VALUE
------------------------------ ----------------------------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8MSWIN1252
[code]....
1) When I try to insert € from the DB server (using PuTTY) with Alt+0128, nothing is printed (nothing gets typed on the screen), not even junk characters. The following query also prints nothing:
SQL> select chr(128) from dual;

C
-
2) When I set NLS_LANG on the client and try to insert €, Alt+0128 produces a question mark, and the following query displays a junk character:
SQL> select chr(128) from dual;

C
-
Ç
3) Regardless, when I inserted a couple of rows and tried UNISTR, the following was the result:
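(As an aside, a hedged note: in WE8MSWIN1252 the euro sign sits at code point 128 (0x80), so CHR(128) is the euro in this database, and the blank/junk output points at the client terminal and NLS_LANG rather than the stored data. A character-set-independent way to produce the symbol is UNISTR with its Unicode code point U+20AC; the table name below is hypothetical:)

SQL> SELECT UNISTR('\20AC') FROM dual;
SQL> INSERT INTO euro_test VALUES (UNISTR('\20AC'));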
We have our current production running on 9i and eventually want to migrate to 11gR2, but the challenges are as follows:
The 9i production database runs in a San Francisco data center; the 11gR2 production database is already set up and running in an Atlanta data center.
The database size is around 2 TB on 9i. We are looking to transfer this to 11gR2 and wondering what options we have at our disposal. I was looking into EXP/IMP, but somebody said DB links would be much faster and more reliable than EXP/IMP.
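A hedged sketch of the DB-link route (link, TNS alias, and table names are all hypothetical): create the link on the 11gR2 side pointing back at the 9i source and pull table by table with direct-path inserts. Note that Data Pump does not exist in 9i, so the dump-file alternative means classic exp/imp:

-- On the 11gR2 (Atlanta) database:
CREATE DATABASE LINK prod9i
  CONNECT TO app_user IDENTIFIED BY secret
  USING 'SFPROD';  -- TNS alias for the San Francisco 9i database

INSERT /*+ APPEND */ INTO big_table
SELECT * FROM big_table@prod9i;
COMMIT;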
We're planning a data migration from one application (Oracle-based) to another (also with an Oracle DB).
The origin is a roughly 80 GB database, so many millions of records are to be migrated (and before the records are loaded into the destination tables, they have to be transformed).
The current concept is to receive all origin data in XML files, load them into a staging area (a dedicated migration schema in Oracle), then transform and load them into the destination tables.
We have three days for the whole migration (including the extract from the origin database, transform, load, and backup after completion...).
My question is whether a migration with XML files is a good concept. I think XML processing is much slower than doing the same with CSV files. My proposal to migrate an Oracle dump (so we would have the original data in our staging area) was declined.
Is migrating mass data with XML files sound, or are there performance or other issues?
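For comparison, a hedged sketch of the CSV staging alternative (directory, file, and column names hypothetical): an external table lets the transform run as set-based SQL straight off the files, which is the kind of bulk path that usually outruns per-record XML parsing:

CREATE TABLE stg_customers_ext (
  cust_id   NUMBER,
  cust_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY mig_dir  -- CREATE DIRECTORY mig_dir AS '/data/mig';
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('customers.csv')
);

INSERT /*+ APPEND */ INTO customers_stg
SELECT cust_id, UPPER(cust_name)  -- stand-in for the real transform
  FROM stg_customers_ext;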
I have created a new table so that, instead of having 26 columns for the payment amount of each week, I have one column for the pay amount and one column for the week, as below:

employee_id  payroll_pay  payroll_pay_week

How do I migrate the data from the old table to the new table?
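A minimal sketch, assuming 11g or later (UNPIVOT was introduced in 11g) and hypothetical names for the old table and its weekly columns (pay_week_01 .. pay_week_26); on earlier versions the same result takes 26 UNION ALL branches:

INSERT INTO payroll_new (employee_id, payroll_pay, payroll_pay_week)
SELECT employee_id, payroll_pay, payroll_pay_week
  FROM payroll_old
  UNPIVOT (payroll_pay FOR payroll_pay_week IN (
    pay_week_01 AS 1, pay_week_02 AS 2, pay_week_03 AS 3
    -- ... continue the list through pay_week_26 AS 26
  ));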
I need to migrate data from a MySQL database to an Oracle 11g database.
a) Is there any method available to export all the SQL, such as table scripts, constraint scripts, and data (INSERT) scripts, from MySQL, so that we can apply the SQL directly to the Oracle schema after making the necessary changes (such as data types)?
b) Is there any free tool available for the migration?
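For a), a hedged sketch using mysqldump; older mysqldump clients accept --compatible=oracle while newer ones keep only ansi, so this is version-dependent, and the output still needs hand-editing for AUTO_INCREMENT, ENGINE clauses, and data types. For b), Oracle SQL Developer itself ships a free migration wizard that handles MySQL:

# Table and constraint DDL only:
mysqldump --no-data --compatible=oracle mydb > schema.sql
# Data as INSERT statements only:
mysqldump --no-create-info --compatible=oracle mydb > data.sql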
I need to generate a report, using PL/SQL code, counting the number of rows of all the tables in the source and target databases. The report should consist of the following columns:

table name | source table row count | target table row count | mismatch

Can you provide the PL/SQL code?
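A minimal sketch, assuming the block runs in the source schema and that a database link named target_db to the target exists (both assumptions; quoted or mixed-case table names would need extra handling):

SET SERVEROUTPUT ON
DECLARE
  v_src NUMBER;
  v_tgt NUMBER;
BEGIN
  DBMS_OUTPUT.PUT_LINE('TABLE_NAME | SOURCE | TARGET | MISMATCH');
  FOR t IN (SELECT table_name FROM user_tables ORDER BY table_name) LOOP
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || t.table_name INTO v_src;
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || t.table_name || '@target_db' INTO v_tgt;
    DBMS_OUTPUT.PUT_LINE(t.table_name || ' | ' || v_src || ' | ' || v_tgt ||
                         ' | ' || CASE WHEN v_src <> v_tgt THEN 'Y' ELSE 'N' END);
  END LOOP;
END;
/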
I got an assignment to create an Oracle 11g DB. I will be provided the full Data Pump export dump of an Oracle 10g DB on Linux, and I need to import it into the 11g database on Windows. I have no information about the tablespaces, users, etc., so I have created the DB with the SYSTEM, SYSAUX, UNDOTBS, TEMP, and USERS tablespaces.
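A hedged sketch (directory object, dump file name, and tablespace names are hypothetical): the SQLFILE option writes out the DDL the dump would execute without importing anything, which reveals the tablespaces and users it expects; after that, either pre-create them or remap them onto USERS:

impdp system/password directory=DP_DIR dumpfile=full10g.dmp full=y sqlfile=ddl_preview.sql
impdp system/password directory=DP_DIR dumpfile=full10g.dmp full=y remap_tablespace=OLD_TS:USERS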
We are planning to migrate data from an application called Clintrace to another application called Argus Safety. Both applications relate to pharmacovigilance safety operations and their functionality is similar, so both databases hold the same data even though the table structures may differ. Both databases are Oracle; the Clintrace DB is 9i and the Argus DB is 11g.
We have data migration scripts written for Oracle. The data is not huge, but we are observing that the migration is fast in the development lab and 5x slower at the production site.
The development Oracle setup is on Windows and the production setup on Solaris. I have attached the AWR report generated for a period during which the migration ran for 3 hours and was stopped because of the slow performance.
Here is my initial analysis.
1) The first timed event is DB CPU, so I feel the migration scripts could be modified to run in parallel and finish faster (a sketch follows below). However, this raises the question of why it runs faster in the development environment if this is the issue.
2) I tried increasing the following:
a. large_pool_size to 512M
b. sga_max_size to 8G
c. sga_target to 8G
from 0, 4G, and 4G respectively.
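A hedged sketch of the parallel angle (table names hypothetical; whether it helps depends on what the scripts actually spend their CPU on):

ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t, 8) */ INTO target_tab t
SELECT /*+ PARALLEL(s, 8) */ * FROM source_tab s;
COMMIT;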
I have attached the AWR report; below are the /etc/system contents for the Solaris settings.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,1,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
set noexec_user_stack_log=1
* IBMdpo vpath_START (do not remove)
* default SCSI timeout is 60 seconds
* uncomment to change SCSI timeout
* set sd:sd_io_time=0x1e
forceload: drv/vpathdd
* IBMdpo vpath_END (do not remove)
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
P.S. The AWR report was renamed from .html to .txt to be able to upload the file.
I have upgraded an Oracle database from 9i to 11g using the export and import utilities. After the migration we are facing a performance issue in report generation: we have observed that the first execution of a report takes a very long time, and when we generate the same report 2-3 times the execution time improves considerably over the first run.
Two days back I restarted the database and found the same issue. There are around 300 reports, and it is not feasible to generate all of them 2-3 times every time we restart the database.
We need to migrate our 10gR2 single-instance database with a conventional file system to a two-node 11gR2 RAC on ASM (on the same Windows Server platform…).
How can I migrate my production database using Data Pump? I have a full Data Pump export, but I don't know how to import it: schema by schema or as a full import? Do I need to create the tablespaces manually on the destination first? Should I exclude indexes, constraints, or statistics?
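A hedged sketch of both routes (directory and file names hypothetical). A full import tries to create the tablespaces itself and fails where datafile paths differ, in which case you pre-create or remap them; EXCLUDE accepts object types such as INDEX, CONSTRAINT, and STATISTICS:

# Full import, skipping optimizer statistics:
impdp system/password directory=DP_DIR dumpfile=expfull.dmp full=y exclude=statistics
# Or schema by schema, also skipping index creation:
impdp system/password directory=DP_DIR dumpfile=expfull.dmp schemas=HR,SALES exclude=index,statistics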
It's been a while since I worked with SQL*Plus. I am using Oracle 11g. We are working on a legacy data-migration project, and I have a table containing records with circular dependencies which I am trying to identify. I have the following columns: Product, Source, Target. I want to identify the records which form a loop, e.g.:
Source  Target
A       B
B       C
C       D
D       A
The last record closes a loop; I need to identify these records. My query is below:
SELECT DISTINCT source, target
  FROM ruleselib
 WHERE CONNECT_BY_ISCYCLE = 1
CONNECT BY NOCYCLE source = PRIOR target;
I ran this query on two tables, one with 75,000 records and the other with 25,000. It works fine on the 75,000-row table, completing within a minute, but it never completes on the other. I can't figure out whether the issue is with the query or whether something about the data is causing it to run effectively forever.
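One hedged observation: without a START WITH clause, every row becomes a root and the hierarchy is re-walked from each of them, so on a densely connected edge set (or one with duplicate Source/Target pairs) the number of explored paths can explode combinatorially even though NOCYCLE prevents true infinite loops. A sketch that dedupes the edges first and anchors the roots:

WITH edges AS (SELECT DISTINCT source, target FROM ruleselib)
SELECT DISTINCT source, target
  FROM edges
 WHERE CONNECT_BY_ISCYCLE = 1
CONNECT BY NOCYCLE PRIOR target = source
START WITH source IN (SELECT target FROM edges);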
We have a production 10g DB with the character set US7ASCII. This DB stores Arabic data and English data. The production DB is located on an HP-UX operating system.
When I query data from the DB through SQL Developer, the data is shown as junk or unknown characters (square boxes).
Client settings (Windows XP workstation from which the query is issued via SQL Developer): NLS_LANG = AMERICAN_AMERICA.US7ASCII
The Oracle 10g client is installed on the workstation from which I query the data through SQL Developer. The problem is that I am unable to see the Arabic characters, in the sense that they are displayed as junk; English characters and English numeric values are displayed properly, however.
I tried the following to make sure the data is not corrupted: I converted the "Name" column to its hex value (RAWTOHEX) and displayed that HEX value, then executed the query below in a UTF-8 DB:

select UTL_I18N.RAW_TO_CHAR(hex_value_of_name) from dual;
This displayed the Arabic name properly in the UTF-8 DB.
The character set of this production DB cannot be changed at this time. There are many applications based on this DB, and all of them are quite capable of converting the junk data to Arabic for display in the application.
My concern is: what do I need to do to view the Arabic data properly through SQL Developer? Are there any settings that need to be changed on my client workstation?
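A hedged note: SQL Developer connects over JDBC, which converts text according to the database character set it is told about (US7ASCII here) and ignores NLS_LANG, so bytes above 0x7F, which US7ASCII does not define, come out as junk regardless of workstation settings; the applications that do show Arabic are reinterpreting the raw bytes themselves. The bytes actually stored can be inspected with DUMP (table and column names hypothetical):

SELECT name, DUMP(name, 1016) AS stored_bytes  -- 1016 = hex plus charset name
  FROM customer_names
 WHERE ROWNUM <= 5;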
I am using an Oracle 10g database on Windows XP. I have a backup containing data in a local language (Marathi), and I want to read this data in Oracle itself. Which character set do I need to choose?
I have a strange problem with a query using LIKE and %.
When I run this script:
ALTER SESSION SET NLS_SORT = 'BINARY_CI';
ALTER SESSION SET NLS_COMP = 'LINGUISTIC';
-- SELECT * FROM NLS_SESSION_PARAMETERS;
-- drop table test1;
CREATE TABLE TEST1(K1 NVARCHAR2(80));
[code]....
When I change the datatype to VARCHAR2, this code works correctly.
The execution plan:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID  d3d64aupz4bb5, child number 2
-------------------------------------
select * from TEST1 where k1 like N'Ł%'
[code]....
Note - dynamic sampling used for this statement (level=2)
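A hedged reconstruction of the elided test, assuming a sample row matching the query shown in the plan above:

INSERT INTO test1 VALUES (N'Łukasz');  -- hypothetical sample value
SELECT * FROM test1 WHERE k1 LIKE N'Ł%';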
I have created a procedure which sends e-mail using UTL_SMTP. The procedure has a part in which we add attachments to the e-mail. Now, the issue is that when I add an attachment containing multibyte characters, those characters are replaced with '?'.
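A hedged sketch of the usual fix (l_conn and l_text are hypothetical locals): text written with UTL_SMTP.WRITE_DATA is character-converted for the SMTP session, which is typically where multibyte characters collapse to '?'; sending the attachment bytes with WRITE_RAW_DATA, with a matching charset declared in the MIME part header, keeps them intact:

-- The MIME header for this part should declare e.g. charset="UTF-8"
UTL_SMTP.WRITE_RAW_DATA(
  l_conn,
  UTL_RAW.CAST_TO_RAW(CONVERT(l_text, 'AL32UTF8'))
);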
Importing (impdp) a dump file that someone handed me into Oracle XE results in special characters, i.e. umlauts, being messed up.
In a hex editor, the dump file shows a) the token WE8MSWIN1252 near the beginning, but b) umlauts obviously encoded in DOS code page 850; for example, "König" is encoded as 4b 94(!) 6e 69 67. Does this prove that the dump file is badly formed and that I have to resign myself to the complicated approach mentioned at the end of [URL]...?
Oracle version: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production, running on CentOS Linux release 6.0 (Final), kernel 2.6.32-71.29.1.el6.x86_64.
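A hedged check that the bytes really are CP850 (Oracle's name for it is WE8PC850): in CP850 the byte 0x94 is 'ö', whereas in WE8MSWIN1252 it is a right double quotation mark, so decoding the observed bytes both ways makes the mismatch visible:

SELECT UTL_I18N.RAW_TO_CHAR(HEXTORAW('4B946E6967'), 'WE8PC850')     AS as_cp850,  -- expect: König
       UTL_I18N.RAW_TO_CHAR(HEXTORAW('4B946E6967'), 'WE8MSWIN1252') AS as_1252    -- expect: K”nig
  FROM dual;

If the first column reads "König", the data was written in CP850 while the header claims WE8MSWIN1252, i.e. the exporting client's NLS_LANG did not match its terminal encoding.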
I am having a hard time spooling a file and displaying special Brazilian characters, even though I can see them correctly in SQL Developer: LEOPOLDO COUTO DE MAGALHÃES JÚNIOR
Spool: LEOPOLDO COUTO DE MAGALH?ES JUNIOR
I've tried changing NLS_LANG at the session level, but that cannot be done. I don't want to change the default language of my DB, but I really need these characters to display correctly in the file.
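A hedged note: NLS_LANG is a client environment variable read when the client process starts, not a session parameter, which is why ALTER SESSION cannot change it; setting it before launching SQL*Plus affects only how the client encodes the spool file, not anything in the database. A Windows sketch (user, script, and file names hypothetical):

C:\> set NLS_LANG=BRAZILIAN PORTUGUESE_BRAZIL.WE8MSWIN1252
C:\> sqlplus scott/tiger @spool_report.sql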