From the source database, the Chinese characters are stored in some schema tables. The csscan result shows convertible, truncated, and lossy characters. So I tried to use exp/imp for the conversion. However, all Chinese characters came out invalid and can no longer be read. How can I convert them from a US7ASCII to a UTF8 database?
I have also tried building another database with AMERICAN_AMERICA.ZHT16MSWIN950 and using exp/imp for the conversion again. This way, the Chinese characters are readable in the AL32UTF8 database.
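For reference, a sketch of that two-hop exp/imp route in command form (hedged: the NLS_LANG values are the critical part, and the connect strings, schema name, and staging database are placeholders). The idea is that the first hop moves the bytes untouched into a ZHT16MSWIN950 staging database so they are finally tagged correctly, and the second hop performs the real MSWIN950-to-UTF8 conversion:

REM hop 1: client charset = source DB charset, so exp does no conversion
set NLS_LANG=AMERICAN_AMERICA.US7ASCII
exp userid=scott/tiger@src file=src.dmp owner=scott

REM import into a staging DB created with ZHT16MSWIN950
set NLS_LANG=AMERICAN_AMERICA.ZHT16MSWIN950
imp userid=scott/tiger@staging file=src.dmp full=y

REM hop 2: export from staging, import into the AL32UTF8 target;
REM this hop does the genuine MSWIN950 -> UTF8 conversion
exp userid=scott/tiger@staging file=stage.dmp owner=scott
imp userid=scott/tiger@target file=stage.dmp full=y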
We have an existing db (10.2.0.4.0) and forms (11.1.2.1.0) application that we're trying to extend to support Chinese characters. We're looking to add some Unicode (NVARCHAR2) columns to existing tables, rather than converting the whole db charset. I've pasted my environment settings below. What I've found so far in trying to create a local test form (i.e. running the form in Builder with a local WebLogic running) is that I can insert the chars OK (using PL/SQL Developer) and the test form can display them correctly, but it cannot write them back to the database. They appear as upside-down question marks in any records the form has created.
1) So, how do I get the form to write the characters back into the database correctly? 2) The Chinese chars will only be relevant to a few forms inside the app; are there any settings local to the form that will enable Unicode support, rather than setting it at OS level, i.e. an ALTER SESSION or equivalent? 3) Oracle Reports doesn't appear to have an NCHAR datatype, unlike Forms; is there any way to get Reports (generating PDFs) to include Chinese?
I now have the chars writing back to the db OK. If you do it via an INSERT statement from inside the form, it doesn't work: it appears the value is sent to the db in the normal charset rather than the national charset, and it's written as a question mark. If you pass the value from the form into a back-end stored procedure (which does the insert), though, it works fine.
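A minimal sketch of that working pattern (the table and column names are invented for illustration): the form passes its NCHAR item value to a back-end procedure whose parameter is declared NVARCHAR2, and the procedure performs the insert, so the value stays in the national charset end to end:

CREATE OR REPLACE PROCEDURE insert_emp_name (
   p_emp_id   IN VARCHAR2,
   p_emp_name IN NVARCHAR2  -- national-charset parameter keeps the Unicode intact
) AS
BEGIN
   INSERT INTO employee (emp_id, emp_name)
   VALUES (p_emp_id, p_emp_name);
END;
/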
Earlier we used Oracle 10g with the WE8MSWIN1252 character set (single-byte); at that time the PL/SQL block below ran fine, passing a 56-character value to SYS.DBMS_OBFUSCATION_TOOLKIT.DES3DECRYPT. We have now migrated to 11g with the AL32UTF8 character set. If we pass the value in a 56-length variable, we get "ORA-06502: PL/SQL: numeric or value error: character string buffer too small", so I changed the variable length to 86 (a minimum of 86 is required).
But now I am getting a different error:
Error report:
ORA-28232: invalid input length for obfuscation toolkit
ORA-06512: at "SYS.DBMS_OBFUSCATION_TOOLKIT_FFI", line 84
ORA-06512: at "SYS.DBMS_OBFUSCATION_TOOLKIT", line 255
ORA-06512: at line 9
28232. 00000 - "invalid input length for obfuscation toolkit"
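One plausible reading of ORA-28232 (hedged): the toolkit works on bytes and requires the input length to be a multiple of 8, and under AL32UTF8 the byte length of a VARCHAR2 no longer equals its character length, so resizing the variable by characters does not guarantee a valid byte count. Using the RAW overloads makes the byte handling explicit; a self-contained round-trip sketch with made-up data and key:

SET SERVEROUTPUT ON
DECLARE
   l_key    RAW(24) := UTL_RAW.CAST_TO_RAW('abcdefghijklmnopqrstuvwx'); -- 24-byte demo key
   l_plain  RAW(64) := UTL_RAW.CAST_TO_RAW(RPAD('secret', 64));         -- padded to a multiple of 8 BYTES
   l_cipher RAW(64);
   l_out    RAW(64);
BEGIN
   DBMS_OBFUSCATION_TOOLKIT.DES3ENCRYPT(input => l_plain,  key => l_key,
                                        encrypted_data => l_cipher, which => 1);
   DBMS_OBFUSCATION_TOOLKIT.DES3DECRYPT(input => l_cipher, key => l_key,
                                        decrypted_data => l_out, which => 1);
   DBMS_OUTPUT.PUT_LINE(UTL_RAW.CAST_TO_VARCHAR2(l_out));
END;
/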
I am using the C++ OCI library to insert some report data from a remote OCI client into an Oracle 11 server. This data is read by another process to create the report. The DB charset is UTF-8, but the report tool expects the data to be ISO 8859-1 encoded. So while inserting the data into the database, I specify the following language and charset for my table column on the client:
The TARGET DB CHARSET is UTF-8.

NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

size_t csid = 871; // 871 = UTF8 character set id
OCIAttrSet((void *) bnd1p, (ub4) OCI_HTYPE_BIND,
           (void *) &csid, (ub4) 0,
           (ub4) OCI_ATTR_CHARSET_ID, errhp);
This solution works for almost every ASCII and extended-ASCII character, but we are facing issues with a few specific characters. If we try to insert a single beta character [β] through the client, the data arrives empty in the column.
Beta character details:

DEC  OCT  HEX  BIN       Symbol  Description
223  337  DF   11011111  ß       Latin small letter sharp s (ess-zed)
DB output after inserting a single β:

select rawtohex(NAME) from PERSONS where EID=333;
RAWTOHEX(NAME) ---------------------------
But if the string is "ββ", everything works fine. DB output for "ββ":

select rawtohex(NAME) from PERSONS where EID=333;
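To see exactly which bytes reached the column, independent of how a client renders them, DUMP with format 1016 prints the column's character set plus the stored bytes in hex; for example, against the table above:

select DUMP(NAME, 1016) from PERSONS where EID=333;
-- e.g. "Typ=1 Len=2 CharacterSet=AL32UTF8: c3,9f" would be one ß correctly stored as UTF-8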
I am using Oracle 10g and have a database with around 9 tables and 25k records. As per my current requirement, I need to add a few more columns to the existing tables, and those new columns will be filled with content in multiple languages (as of now, Chinese and English). I went through the Oracle 10g globalization guide and understood that, for my requirements, I need to add the new columns with datatypes of either NVARCHAR2 or NCHAR.

With my above understanding, I created the following table and tried to insert some Chinese characters as follows, but it's not coming out as expected: only "????" gets inserted into my table columns.

Find below the list of actions:
> create table Employee(EmpId varchar2(255), EmpName NCHAR(255));
> insert into Employee(EmpId, EmpName) values('280129','彭俊睦');
> select * from Employee;
280129  ¿¿¿

> insert into Employee(EmpId, EmpName) values('28018',N'彭俊睦');
> select * from Employee;
280129  ¿¿¿
28018   ¿¿¿
When I run the "select * from v$nls_parameters;" query, I get the following data:
NLS_LANGUAGE             SIMPLIFIED CHINESE
NLS_TERRITORY            AMERICA
NLS_CURRENCY             $
NLS_ISO_CURRENCY         AMERICA
NLS_NUMERIC_CHARACTERS   .,
NLS_CALENDAR             GREGORIAN
NLS_DATE_FORMAT          DD-MON-RR
[remaining output truncated]
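One common cause of exactly this symptom (hedged): even an N'...' literal is first converted through the client character set on its way to the server, so if NLS_LANG does not cover Chinese, the characters are already '?' on arrival. Two client-side settings worth trying together, sketched for a Windows client:

REM both are client-side settings, set before starting sqlplus
set NLS_LANG=AMERICAN_AMERICA.AL32UTF8
set ORA_NCHAR_LITERAL_REPLACE=TRUE

sqlplus user/pass
SQL> insert into Employee(EmpId, EmpName) values('28018', N'彭俊睦');

ORA_NCHAR_LITERAL_REPLACE=TRUE makes the client ship N'' literals in a conversion-safe form instead of passing them through the client character set.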
We have a production 10g DB with the character set US7ASCII. This DB stores Arabic data and English data. The production DB is located on an HP-UX operating system.

When I query data from the DB through SQL Developer, the data is shown as junk or unknown characters (square boxes).

Client settings (the Windows XP workstation from which the queries are issued in SQL Developer): NLS_LANG = AMERICAN_AMERICA.US7ASCII

The Oracle 10g client is installed on that workstation, and I query the data through SQL Developer. The problem is that I am unable to see the Arabic characters: they are displayed as junk. English characters and numeric values, however, display properly.
I tried the following to make sure the data is not corrupted: I converted the "Name" column to its hex value (rawtohex) and then executed the query below in a UTF-8 DB:

select UTL_I18N.RAW_TO_CHAR(hex_value_of_name) from dual;

This displayed the Arabic name properly in the UTF8 DB.

The character set of this production DB cannot be changed at this time. There are many applications based on this DB, and all of them are quite capable of converting the junk data to Arabic for display.

My concern is: what do I need to do to view the Arabic data properly through SQL Developer? Are there any settings that need to be made on my client workstation?
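For what it's worth: SQL Developer talks to the database over JDBC and always converts from the character set the database claims (US7ASCII here), so no client NLS_LANG setting can make it render bytes the database mis-describes. A hedged workaround mirroring the hex experiment above is to carry the hex into a UTF-8 database and decode it with the real source encoding (AR8MSWIN1256 is an assumption; substitute whatever encoding the applications actually write):

-- run in the UTF-8 database; paste RAWTOHEX output from production for the placeholder
select UTL_I18N.RAW_TO_CHAR(HEXTORAW('&hex_value_of_name'), 'AR8MSWIN1256') from dual;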
I am using an Oracle 10g database on Windows XP. I have a backup that contains data in a local language (Marathi). I want to read this data in Oracle itself. Which character set do I need to choose?
Impdp-ing a dump file that someone handed over to me into Oracle XE results in special characters, i.e. umlauts, being messed up.

In a hex editor, the dump file shows a) the token WE8MSWIN1252 near the beginning, but b) umlauts obviously encoded in DOS code page 850; for example, "König" is encoded as 4b 94(!) 6e 69 67, and 0x94 is "ö" in CP850, whereas WE8MSWIN1252 would use 0xf6. Does this prove that the dump file is badly formatted and that I have to resign myself to the complicated approach mentioned at the end of [URL]...?
I was playing with different character sets, but some special characters, e.g. "greater than or equal", do not get loaded/displayed correctly in the database. I also tried changing the NLS_LANG registry key and following some advice in the Oracle docs.
I have a table named INSTITUTION. It has a NUMBER column INS_ID and an NVARCHAR2(50) column INS_NAME. INS_NAME can contain Turkish characters such as "ğ,ü,ş,ç,ö". According to the business logic, there cannot be any repetition of INS_NAME. The user enters the institution name in a textbox in ASP.NET, and I check this name against the database from C# code; if there is no repetition, we add the record.

The problem is that when the user enters an institution name containing a Turkish character, duplication occurs. If there is an institution named "su işleri", both queries, SELECT * FROM INSTITUTION WHERE INS_NAME = 'su işleri'; and SELECT * FROM INSTITUTION WHERE INS_NAME = 'su isleri';, return no result, even though the row is there. But if the institution name is "oracle corporation" (no Turkish characters), the query succeeds. I have the same problem in Toad for Oracle 11.5.1.2: when I query the database from Toad with SELECT * FROM INSTITUTION, the phrase "su işleri" appears, but when I query SELECT * FROM INSTITUTION WHERE INS_NAME = 'su işleri';, there is again no result. When I connect to the Oracle database directly and perform SELECT * FROM INSTITUTION, the phrase "su isleri" (not "su işleri") appears.
Here are the language settings of the database:
National Language Support

NLS_CALENDAR        GREGORIAN
NLS_CHARACTERSET    WE8MSWIN1252
NLS_COMP            BINARY
NLS_CURRENCY        TL
NLS_DATE_FORMAT     DD/MM/RRRR
NLS_DATE_LANGUAGE   TURKISH
NLS_DUAL_CURRENCY   YTL
[remaining output truncated]
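A hedged explanation that fits these symptoms: the database character set WE8MSWIN1252 has no 'ş', so a plain 'su işleri' literal is converted (losing the Turkish letters) before it is compared with the NVARCHAR2 column; only the national character set side can carry them. Prefixing the literal with N keeps the comparison in the national charset, and from C# the parameter should be bound as a national type (e.g. OracleDbType.NVarchar2 in ODP.NET):

-- the N prefix keeps the literal in the national (Unicode) character set
SELECT * FROM INSTITUTION WHERE INS_NAME = N'su işleri';

If the tool itself still mangles the literal on the way in, the client-side ORA_NCHAR_LITERAL_REPLACE setting mentioned earlier on this page applies here too.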
How can I avoid junk-character insertion into an Oracle table? I have prepared scripts containing, say:
customer - info
After insertion, the data appears like this in production:
Customer ¿ info
We use the command prompt for script execution in the production environment. I use PL/SQL Developer and SQL Developer for development. I cannot see the junk data in PL/SQL Developer or in the latest SQL Developer, but it is caught by an old version of SQL Developer. I can also spot the junk data in the application.
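A hedged reading of this symptom: the script file holds a Windows-1252 dash while the command-prompt session runs under a different code page, so the byte is converted to the replacement character '¿' on its way into the database. Making the console code page and NLS_LANG agree with the file's real encoding before running the script is worth trying; a sketch (1252 is an assumption matching the script file's encoding, and the script name is a placeholder):

REM Windows command prompt, before invoking sqlplus
chcp 1252
set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
sqlplus user/pass @insert_script.sql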
I have a database on my local machine that doesn't support Turkish characters. Its NLS_CHARACTERSET is WE8ISO8859P1; it must be changed to WE8ISO8859P9, which supports the full set of Turkish characters. I would like to migrate the character data using a full export and import, and my strategy is as follows:

1- create a full export to a location on the network,

2- create a new database on the local machine whose NLS_CHARACTERSET is WE8ISO8859P9 (I would like to change NLS_LANGUAGE and NLS_TERRITORY along the way),

3- and perform a full import into the newly created database. I've implemented the first step, but I couldn't complete the second. I attempted it from the Toad editor by clicking Create -> New Database, but I cannot connect to the new database, and I must connect to it in order to perform the full import.
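Note that Toad cannot actually create a database; that step needs DBCA or a manual CREATE DATABASE issued as SYSDBA against an instance started NOMOUNT. A heavily trimmed sketch (all paths, sizes, names and passwords are placeholders, and catalog.sql/catproc.sql must be run afterwards; DBCA is the easier route):

-- prerequisites: set ORACLE_SID, create an init.ora, then STARTUP NOMOUNT
CREATE DATABASE newdb
   USER SYS    IDENTIFIED BY change_me
   USER SYSTEM IDENTIFIED BY change_me
   CHARACTER SET WE8ISO8859P9
   NATIONAL CHARACTER SET AL16UTF16
   DATAFILE '/u01/oradata/newdb/system01.dbf' SIZE 500M
   SYSAUX DATAFILE '/u01/oradata/newdb/sysaux01.dbf' SIZE 300M
   DEFAULT TEMPORARY TABLESPACE temp TEMPFILE '/u01/oradata/newdb/temp01.dbf' SIZE 100M
   UNDO TABLESPACE undotbs1 DATAFILE '/u01/oradata/newdb/undo01.dbf' SIZE 200M
   LOGFILE GROUP 1 ('/u01/oradata/newdb/redo01.log') SIZE 50M,
           GROUP 2 ('/u01/oradata/newdb/redo02.log') SIZE 50M;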
I want to convert my database character set from WE8MSWIN1252 to a Unicode character set, because I have to transport a tablespace to a destination database: the destination is Unicode and the source is WE8MSWIN1252. I was unable to import the transportable tablespace for this reason, so I want to convert the source from WE8MSWIN1252 to Unicode.
SQL> SELECT * FROM NLS_DATABASE_PARAMETERS where parameter = 'NLS_CHARACTERSET';

PARAMETER                      VALUE
------------------------------ ----------------------------------
NLS_CHARACTERSET               AL32UTF8
There is a table (VIN_TEMP) in my company's database containing the following records. It seems this table should hold Greek-language special-character values instead of this weird data.

Clients are reporting these records as invalid and asking us to fix them. As I investigated, I found the table was created and loaded a few years back: the client sent us one-time files which we loaded into it. I was able to find the code that was used to load the table, but unfortunately not the raw files the data was loaded from...

It seems the previous developer specified the character set "UTF8" in his SQL*Loader script. The files apparently contained Greek-language special characters that did not match the declared "UTF8" character set, which resulted in these invalid values. My job is to fix these invalid records and convert them back to the original values from the raw files. I tried to contact the client to see if I could obtain the raw files, but no luck. I also tried the CONVERT function to convert the data from "UTF8" to our current character set, but again no luck.
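Before attempting another repair, it may help to look at the raw bytes, since they reveal what the loader actually stored; whether the data is recoverable at all depends on whether the original load was lossy. A hedged sketch (the column name vin_name is assumed, and EL8ISO8859P7 is a guess at the files' real Greek encoding):

-- inspect the stored bytes and the column's character set
select vin_name, DUMP(vin_name, 1016) from VIN_TEMP;

-- if the bytes turn out to be intact ISO 8859-7 mis-tagged during the load,
-- a test read with CONVERT can be attempted before any permanent fix
select CONVERT(vin_name, 'AL32UTF8', 'EL8ISO8859P7') from VIN_TEMP;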
How do we know whether the database character set is a single-byte or a multibyte character set?
While changing the character set from AL32UTF8 to WE8MSWIN1252, I got "ORA-12712: new character set must be a superset of old character set".

Below are the steps taken to work around the issue -
ALTER DATABASE CHARACTER SET WE8MSWIN1252;
I got this error: ORA-12712: new character set must be a superset of old character set

Below are the commands I executed:
SQL> SHUTDOWN IMMEDIATE;
SQL> CONNECT SYS/password AS SYSDBA;
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET INTERNAL_USE WE8MSWIN1252;
SQL> SHUTDOWN;
SQL> STARTUP;
SQL> QUIT;
And it's working...

I did not do this in the proper order, nor did I run csscan first. Still, no user has reported any issues. Could my change have truncated the data?
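A hedged after-the-fact check: any character that AL32UTF8 stored in more than one byte is exactly what WE8MSWIN1252 cannot represent, so on a pre-change copy (or restore) of the database, columns where byte length and character length disagree are the ones at risk; per column, something like this (table and column names are placeholders):

-- rows whose byte length differs from their character count held multibyte data
select count(*) from some_table where LENGTHB(some_column) <> LENGTH(some_column);

Running csscan against a restored copy remains the thorough way to answer the question.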
I have a question that asks for a PL/SQL block for the Chinese zodiac, but I can't do it; even after Googling I can't find a solution. The question is as below:

The Chinese zodiac associates birth years with the following animals:
Birth Year                                Animal
1924, 1936, 1948, 1960, 1972, 1984, 1996  Rat
1925, 1937, 1949, 1961, 1973, 1985, 1997  Cow
1926, 1938, 1950, 1962, 1974, 1986, 1998  Tiger
1927, 1939, 1951, 1963, 1975, 1987, 1999  Rabbit
1928, 1940, 1952, 1964, 1976, 1988, 2000  Dragon
1929, 1941, 1953, 1965, 1977, 1989, 2001  Snake
1930, 1942, 1954, 1966, 1978, 1990, 2002  Horse
1931, 1943, 1955, 1967, 1979, 1991, 2003  Sheep
1932, 1944, 1956, 1968, 1980, 1992, 2004  Monkey
1933, 1945, 1957, 1969, 1981, 1993, 2005  Chicken
1934, 1946, 1958, 1970, 1982, 1994, 2006  Dog
1935, 1947, 1959, 1971, 1983, 1995, 2007  Pig
Write a command to declare a date variable named birth_date and assign your birth date to it. Use an IF/ELSIF structure to test the year and determine the animal associated with your birth year. Then display your birth year and the associated animal name. For example, the program would display the following output for someone born in 1984:
I was born in 1984, which is the year of the Rat. Declare and use additional variables as needed.
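A minimal sketch of the requested block: the birth date is hardcoded as an example, MOD keeps the IF/ELSIF ladder short by exploiting the 12-year cycle starting at 1924 (Rat), and years before 1924 fall outside the table's range and are not handled:

SET SERVEROUTPUT ON
DECLARE
   birth_date DATE := TO_DATE('15-06-1984', 'DD-MM-YYYY');  -- example birth date
   birth_year PLS_INTEGER := EXTRACT(YEAR FROM birth_date);
   animal     VARCHAR2(10);
BEGIN
   -- the table repeats every 12 years starting at 1924 (Rat)
   IF    MOD(birth_year - 1924, 12) = 0  THEN animal := 'Rat';
   ELSIF MOD(birth_year - 1924, 12) = 1  THEN animal := 'Cow';
   ELSIF MOD(birth_year - 1924, 12) = 2  THEN animal := 'Tiger';
   ELSIF MOD(birth_year - 1924, 12) = 3  THEN animal := 'Rabbit';
   ELSIF MOD(birth_year - 1924, 12) = 4  THEN animal := 'Dragon';
   ELSIF MOD(birth_year - 1924, 12) = 5  THEN animal := 'Snake';
   ELSIF MOD(birth_year - 1924, 12) = 6  THEN animal := 'Horse';
   ELSIF MOD(birth_year - 1924, 12) = 7  THEN animal := 'Sheep';
   ELSIF MOD(birth_year - 1924, 12) = 8  THEN animal := 'Monkey';
   ELSIF MOD(birth_year - 1924, 12) = 9  THEN animal := 'Chicken';
   ELSIF MOD(birth_year - 1924, 12) = 10 THEN animal := 'Dog';
   ELSE                                       animal := 'Pig';
   END IF;
   DBMS_OUTPUT.PUT_LINE('I was born in ' || birth_year
                        || ', which is the year of the ' || animal || '.');
END;
/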
I have some data in an Excel sheet that is in Simplified Chinese, which I have to enter into an Oracle database. As soon as I paste the data into the insert statement in TOAD, it gets converted into question marks.
Toad version: 9.5.0.31
Oracle DBMS: 10.2.0.4.0
NLS_CHARACTERSET = UTF8
The Chinese language is also installed on the instance, with the base language US.

The database in question was created with character set = UTF8 and national character set = AL16UTF16. Other languages (Latin, German, French, etc.) came out fine, but Chinese is still not supported.
/* Formatted on 16/08/2012 21:55:39 (QP5 v5.215.12089.38647) */
CREATE OR REPLACE FUNCTION translator (
   p_words IN CLOB,      -- words to be translated
   p_to    IN VARCHAR2,  -- language to translate to
I have a table named invoice_info (INV_NUMBER number, BILL_TO_NAME char(55), SHIP_TO_NAME char(55)) which has Chinese characters in the BILL_TO_NAME and SHIP_TO_NAME fields.

If I display this table's data in SQL Developer, it shows junk like ?????????¨¬???? for the BILL_TO_NAME and SHIP_TO_NAME fields.

How can I display them as Chinese characters in SQL Developer?
I have a table containing data in different languages, like English, Japanese and Chinese. I need to retrieve only those rows which are in Japanese. What settings do I need to make? When I do a normal select, rows in languages other than English appear as junk data.
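A hedged sketch for the Japanese filter (table and column names are placeholders): hiragana occupies U+3040-U+309F and katakana U+30A0-U+30FF, so a regular expression built from UNISTR endpoints matches rows containing kana while the script itself stays pure ASCII; kanji-only rows would additionally need the CJK range (U+4E00-U+9FFF), which overlaps Chinese:

-- rows containing at least one hiragana or katakana character
select *
from   my_table
where  REGEXP_LIKE(my_column,
                   '[' || UNISTR('\3040') || '-' || UNISTR('\30FF') || ']');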
We are having a problem with the Chinese character set. My current character set is as follows.
PARAMETER                VALUE
------------------------ ------------------
NLS_LANGUAGE             AMERICAN
NLS_TERRITORY            AMERICA
NLS_CURRENCY             $
NLS_ISO_CURRENCY         AMERICA
NLS_NUMERIC_CHARACTERS   .,
[remaining output truncated]
My column description for the table product is as follows.
When trying to insert a Chinese character using the insert command below:
insert into product(part_nbr,part_desc,cust_name) values('322341',unistr('功'),'test');
I am getting this value when selecting the same record using the select command:
select a.part_nbr, a.part_desc, a.cust_name from product a where a.part_nbr='322341';

322341  ¿  test
When I run this command in TOAD:
select a.rowid,a.part_nbr,a.part_desc,a.cust_name from product a where a.part_nbr='322341'
and manually edit/insert the '功' character in the output of the select command above, after that I am able to get the same Chinese character the next time I run the select.
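A hedged note on UNISTR that fits this behaviour: the literal inside UNISTR('功') is itself parsed through the client character set before UNISTR ever sees it, so it only survives when the client charset can carry 功. The escape form travels as pure ASCII and sidesteps the client entirely (U+529F is the code point for 功; the new part number is just for illustration, and the column must of course be an N-type or the database charset must cover Chinese):

-- pure-ASCII escape form; survives any client character set
insert into product(part_nbr, part_desc, cust_name)
values('322342', UNISTR('\529F'), 'test');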
I have a strange problem with a query using LIKE and %.
When I run this script:
ALTER SESSION SET NLS_SORT = 'BINARY_CI';
ALTER SESSION SET NLS_COMP = 'LINGUISTIC';
-- SELECT * FROM NLS_SESSION_PARAMETERS;
-- drop table test1;
CREATE TABLE TEST1(K1 NVARCHAR2(80));
[remaining script truncated]
When I change the datatype to VARCHAR2, this code works correctly.
The execution plan:
PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  d3d64aupz4bb5, child number 2
-------------------------------------
select * from TEST1 where k1 like N'Ł%'
[remaining plan output truncated]
Note - dynamic sampling used for this statement (level=2)
I have created a procedure which sends e-mail using UTL_SMTP. The procedure has a part where we add the attachments to the e-mail. The issue is that when I add an attachment containing multibyte characters, those characters are replaced with '?'.
Oracle version: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production, running on CentOS Linux release 6.0 (Final), kernel 2.6.32-71.29.1.el6.x86_64.
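A common fix worth trying (hedged): UTL_SMTP.WRITE_DATA sends VARCHAR2 text and is typically where multibyte characters get flattened to '?'; writing the attachment body with WRITE_RAW_DATA after base64-encoding the bytes, and declaring that in the MIME part headers, keeps the data intact. A trimmed sketch, assuming the database character set is UTF-8 (so CAST_TO_RAW yields UTF-8 bytes) and that the connection and header code around it already exists:

CREATE OR REPLACE PROCEDURE write_attachment_chunk (
   p_conn  IN OUT NOCOPY UTL_SMTP.connection,  -- already-open SMTP connection
   p_chunk IN VARCHAR2                         -- one slice of the attachment text
) AS
BEGIN
   -- assumes the MIME part was opened with:
   --   Content-Type: text/plain; charset=UTF-8
   --   Content-Transfer-Encoding: base64
   UTL_SMTP.WRITE_RAW_DATA(
      p_conn,
      UTL_ENCODE.BASE64_ENCODE(UTL_RAW.CAST_TO_RAW(p_chunk)));
END;
/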
I am having a hard time spooling to a file and displaying special Brazilian characters, even though I can see them correctly in SQL Developer: LEOPOLDO COUTO DE MAGALHÃES JÚNIOR
Spool: LEOPOLDO COUTO DE MAGALH?ES JUNIOR
I've tried changing NLS_LANG at the session level, but that cannot be done. I don't want to change the default language of my DB, but I really need these characters to display correctly in a file.
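That matches how NLS_LANG works: it is a client environment setting, not a session parameter, so it has to be set in the shell before the client starts, and it never touches the database's own character set. A sketch, assuming a Windows client, a WE8MSWIN1252-capable viewer for the output file, and placeholder connection and table names:

REM set in the shell before starting sqlplus; the spool file is written in this charset
set NLS_LANG=BRAZILIAN PORTUGUESE_BRAZIL.WE8MSWIN1252
sqlplus user/pass@db
SQL> spool names.txt
SQL> select name from persons_table;
SQL> spool off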