I am using an Oracle 10g database on Windows XP. I have a backup that contains data in a local language (Marathi). I want to read this data in Oracle itself. Which character set do I need to choose?
I am using the C++ OCI library to insert some report data from a remote OCI client into an Oracle 11 server. This data is read by another process to create the report. The DB charset is UTF-8, but the report tool expects the data to be ISO 8859-1 encoded. So while inserting the data into the database I specify the following language and charset for my table column on the client:
The target DB charset is UTF-8, and on the client:

NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

ub2 csid = 871;  /* 871 = UTF8 character set ID; OCI_ATTR_CHARSET_ID expects a ub2 */
OCIAttrSet((void *) bnd1p, (ub4) OCI_HTYPE_BIND,
           (void *) &csid, (ub4) 0,
           (ub4) OCI_ATTR_CHARSET_ID, errhp);
This solution works for almost every ASCII and extended-ASCII character, but we are facing issues inserting a few specific characters. If we try to insert a single beta character [β] through the client, the data arrives empty in the column.
Beta character details:

DEC  OCT  HEX  BIN       Symbol  Description
223  337  DF   11011111  ß       Latin small letter sharp s (ess-zed)
DB output after inserting a single β:

select rawtohex(NAME) from PERSONS where EID=333;

RAWTOHEX(NAME)
---------------------------

(the value comes back empty)

But if the string is "ββ", everything works fine. DB output for "ββ":

select rawtohex(NAME) from PERSONS where EID=333;
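A quick way to see exactly which bytes (if any) reached the column is DUMP with return format 1016, which prints the hex codes together with the column's character set:

select dump(NAME, 1016) from PERSONS where EID=333;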
We have a production 10g DB with the character set US7ASCII. This DB stores Arabic data and English data. The production DB is located on an HP-UX operating system.

When I query data from the DB through SQL Developer, the data is shown as junk or unknown characters (square boxes).

Client settings (the workstation the query is issued from, running SQL Developer on Windows XP): NLS_LANG = AMERICAN_AMERICA.US7ASCII

The Oracle 10g client is installed on this workstation, and I query the data through SQL Developer. The problem is that I am unable to see the Arabic characters: they are displayed as junk. However, English characters and English numeric values are displayed properly.
I tried the following to make sure the data is not corrupted: I converted the "Name" column to its hex value (rawtohex), then ran the query below in a UTF-8 DB:

select UTL_I18N.RAW_TO_CHAR(hex_value_of_name) from dual;

This displayed the Arabic name properly in the UTF-8 DB.

The character set of this production DB cannot be changed at this time. There are many applications based on this DB, and all of them are capable of converting the junk data to Arabic for display.

My question is: what do I need to do to view the Arabic data properly through SQL Developer? Are there any settings that need to be changed on my client workstation?
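For what it's worth, a correctly behaving Unicode client such as SQL Developer trusts the declared US7ASCII charset, so it cannot show the Arabic bytes without help. A minimal sketch of the reinterpretation trick, assuming the stored bytes are really AR8MSWIN1256; note it only produces readable output when run in a database whose character set can hold Arabic (such as the UTF-8 DB used for the test above):

select utl_i18n.raw_to_char(utl_raw.cast_to_raw(NAME), 'AR8MSWIN1256') as name_arabic
from PERSONS;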
Importing (impdp) a dump file that someone handed me into Oracle XE results in special characters, i.e. umlauts, being messed up.
In a hex editor, the dump file shows a) the token WE8MSWIN1252 near the beginning, but b) the umlauts obviously encoded in DOS code page 850; for example, "König" is encoded as 4b 94(!) 6e 69 67. Does this prove that the dump file is badly formatted, and do I have to resign myself to the complicated approach mentioned at the end of [URL]...
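In code page 850 the byte 0x94 is indeed ö, while in WE8MSWIN1252 it is a closing curly quote, which supports the mislabeling theory. If the data is already imported, a hedged repair sketch is to re-encode each stored string back into its mislabeled WE8MSWIN1252 bytes and reinterpret those bytes as WE8PC850 (Oracle's name for DOS 850); the table and column names here are hypothetical:

select utl_i18n.raw_to_char(
         utl_i18n.string_to_raw(name, 'WE8MSWIN1252'),  -- back to the mislabeled bytes
         'WE8PC850'                                     -- reinterpret as DOS 850
       ) as fixed_name
from customers;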
I was playing with different character sets, but some special characters, e.g. "greater than or equal", do not get loaded/displayed correctly in the database. I also tried changing the NLS_LANG registry key and following some advice in the Oracle docs.
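One hedged sanity check is whether the database character set can represent the symbol at all; converting the Unicode code point U+2265 (greater than or equal) into the database character set returns a replacement character if it cannot:

select to_char(unistr('\2265')) as geq from dual;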
In the source database, the Chinese characters are stored in some schema tables. The csscan result reports convertible, truncated, and lossy characters. So I tried to use exp/imp for the conversion; however, all Chinese characters were invalidated and can no longer be read. How can I convert them from US7ASCII to a UTF8 database?

I have also tried building another database with AMERICAN_AMERICA.ZHT16MSWIN950. exp/imp was used for the conversion again, and this time the Chinese characters are readable in the AL32UTF8 database.
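For reference, a csscan run that tests the stored data against its assumed real encoding might look like this sketch (the connection details are placeholders, csminst.sql must have been run beforehand, and ZHT16MSWIN950 as the true encoding is an assumption):

csscan system/password@sourcedb FULL=Y FROMCHAR=ZHT16MSWIN950 TOCHAR=AL32UTF8 LOG=scan CAPTURE=Y ARRAY=100000 PROCESS=2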
I have a table named INSTITUTION. It has a NUMBER column INS_ID and an NVARCHAR2(50) column INS_NAME. INS_NAME can contain Turkish characters, such as "ğ,ü,ş,ç,ö". According to the business logic, there cannot be a repetition of INS_NAME. The user enters the institution name in a textbox in ASP.NET, and I check this name in the database from C# code; if there is no repetition, we add the record.

The problem is that when the user enters an institution name containing a Turkish character, a duplicate gets through. If there is an institution named "su işleri", both queries

SELECT * FROM INSTITUTION WHERE INS_NAME = 'su işleri';
SELECT * FROM INSTITUTION WHERE INS_NAME = 'su isleri';

return no result, even though the row is there. But if the institution name is "oracle corporation" (no Turkish characters), the query succeeds. I have the same problem in Toad for Oracle 11.5.1.2: when I run SELECT * FROM INSTITUTION, the phrase "su işleri" appears, but SELECT * FROM INSTITUTION WHERE INS_NAME = 'su işleri'; again returns no result. When I connect to the Oracle database directly and run SELECT * FROM INSTITUTION, the phrase "su isleri" (not "su işleri") appears.
Here are the language settings of the database:
National Language Support

Parameter            Value
NLS_CALENDAR         GREGORIAN
NLS_CHARACTERSET     WE8MSWIN1252
NLS_COMP             BINARY
NLS_CURRENCY         TL
NLS_DATE_FORMAT      DD/MM/RRRR
NLS_DATE_LANGUAGE    TURKISH
NLS_DUAL_CURRENCY    YTL
[code]....
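A likely culprit, hedged: the literal 'su işleri' travels through the database character set WE8MSWIN1252, which contains no ş or ğ, so the server never receives the real characters even though the column is NVARCHAR2. A sketch of keeping the comparison in the national character set is below; note that many clients only pass an N literal through unconverted when the ORA_NCHAR_LITERAL_REPLACE environment variable is set to TRUE, and binding the value from C# as an NVARCHAR2 parameter avoids literals altogether:

SELECT * FROM INSTITUTION WHERE INS_NAME = N'su işleri';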
How do I avoid junk-character insertion into an Oracle table? I have prepared scripts that insert data like this, say:
customer - info
After insertion, the data appears like this in production:
Customer ¿ info
We use the command prompt for script execution in the production environment. For development I use PL/SQL Developer and SQL Developer. I cannot see the junk data in PL/SQL Developer or the latest SQL Developer, but it shows up in an old version of SQL Developer, and I can also see the junk data in the application.
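The inverted question mark is Oracle's replacement character, which usually means the client NLS_LANG did not match the encoding the script file was saved in. A hedged sketch for the command-prompt case (the code page, the NLS_LANG value, and the script name are assumptions to adapt):

chcp 1252
set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
sqlplus user/password@proddb @insert_customer.sql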
We have an existing DB (10.2.0.4.0) and Forms (11.1.2.1.0) application that we're trying to extend to support Chinese characters. We're looking to add some Unicode (NVARCHAR2) columns to existing tables rather than converting the whole DB charset. I've pasted my environment settings below. What I've found so far with a local test form (i.e. running the form in Builder with a local WebLogic running) is that I can insert the characters OK (using PL/SQL Developer), and the test form can display them correctly, but it cannot write them back to the database. They appear as upside-down question marks in any records the form has created.
1) How do I get the form to write the characters back into the database correctly?
2) The Chinese characters will only be relevant to a few forms inside the app; are there any settings local to the form that will enable Unicode support, rather than setting it at OS level, i.e. an ALTER SESSION or equivalent?
3) Oracle Reports doesn't appear to have an NCHAR datatype, unlike Forms; is there any way to get Reports (generating PDFs) to include Chinese?
I now have the characters writing back to the DB OK. If you do it via an INSERT statement from inside the form, it doesn't work: the value appears to be sent to the DB in the normal character set rather than the national character set, and is written as a question mark. If you pass the value from the form into a back-end stored procedure (which does the insert), it works.
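A minimal sketch of that working pattern, with a hypothetical table T; declaring the parameter as NVARCHAR2 keeps the value in the national character set end to end:

CREATE OR REPLACE PROCEDURE save_cn_name (
  p_id   IN NUMBER,
  p_name IN NVARCHAR2   -- stays in the national character set
) AS
BEGIN
  INSERT INTO t (id, name) VALUES (p_id, p_name);
END save_cn_name;
/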
I have a database on my local machine that doesn't support Turkish characters. My NLS_CHARACTERSET is WE8ISO8859P1; it must be changed to WE8ISO8859P9, since that supports the full Turkish character set. I would like to migrate the character data using a full export and import, and my strategy is as follows:
1- create a full export to a location on the network,
2- create a new database on the local machine whose NLS_CHARACTERSET is WE8ISO8859P9 (I would like to change NLS_LANGUAGE and NLS_TERRITORY as well),
3- perform a full import into the newly created database.

I have completed the first step, but I couldn't complete the second. I tried creating the database in the Toad editor by clicking Create -> New Database, but I cannot connect to the new database, and I must connect to it in order to perform the full import.
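For reference, the export/import legs of the plan might look like this sketch (the directory object, dump file name, and credentials are assumptions; the new database itself is normally created with DBCA, where WE8ISO8859P9 can be picked on the character-set page):

expdp system/password@olddb FULL=Y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=exp_full.log
impdp system/password@newdb FULL=Y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=imp_full.log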
LSNRCTL> stat LISTENER
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.1.0 - Production
Start Date                19-JAN-2013 00:50:10
Uptime                    0 days 0 hr. 29 min. 51 sec
Trace Level               off
Security                  ON: Local OS Authentication
[code]....
In all the Oracle documentation, e.g. "11.2 Scan and Node TNS Listener Setup Examples" [ID 1070607.1], the local listener status shows both the local IP and the VIP. Why is it not showing in our case?
How do we know whether the database character set is a single-byte or a multibyte character set?
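A hedged starting point: check the character set name and judge from that; names such as AL32UTF8, UTF8, or ZHT16MSWIN950 are multibyte, while WE8MSWIN1252 or US7ASCII are single-byte. Comparing character length with byte length on a sample value (table and column are hypothetical) also reveals multibyte storage:

select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';

select length(col) as chars, lengthb(col) as bytes from some_table where rownum = 1;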
While changing the character set from AL32UTF8 to WE8MSWIN1252, I got "ORA-12712: new character set must be a superset of old character set".
Below are the steps taken to resolve the issue. First I ran:

SQL> ALTER DATABASE CHARACTER SET WE8MSWIN1252;

and got this error: ORA-12712: new character set must be a superset of old character set.

Below are the commands I then executed:

SQL> SHUTDOWN IMMEDIATE;
SQL> CONNECT SYS/password AS SYSDBA;
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET INTERNAL_USE WE8MSWIN1252;
SQL> SHUTDOWN;
SQL> STARTUP;
SQL> QUIT;

And it's working...
I have not done it in the proper order, nor have I run csscan. Still, no user has reported any issues. Did my changes truncate the data?
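A hedged spot check, since AL32UTF8 to WE8MSWIN1252 is a lossy direction: characters with no WE8MSWIN1252 mapping are typically replaced with an inverted question mark, so searching the important text columns for it gives a first indication (table and column names are hypothetical):

select count(*) from customers where instr(name, to_char(unistr('\00BF'))) > 0;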
I have a strange problem with a query using LIKE and %.
When I run this script:
ALTER SESSION SET NLS_SORT = 'BINARY_CI';
ALTER SESSION SET NLS_COMP = 'LINGUISTIC';
-- SELECT * FROM NLS_SESSION_PARAMETERS;
-- drop table test1;
CREATE TABLE TEST1 (K1 NVARCHAR2(80));
[code]....
When I change the datatype to VARCHAR2, this code works correctly.
The execution plan:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID  d3d64aupz4bb5, child number 2
-------------------------------------
select * from TEST1 where k1 like N'Ł%'
[code]....
Note - dynamic sampling used for this statement (level=2)
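This smells like the known limitations of linguistic comparison (NLS_COMP = LINGUISTIC) with LIKE on national-character columns. A hedged workaround sketch that keeps the match case-insensitive without relying on linguistic LIKE:

select * from test1 where upper(k1) like upper(N'Ł%');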
I have created a procedure that sends e-mail using UTL_SMTP. The procedure has a part in which we add the attachments to the e-mail. The issue is that when I add an attachment containing multibyte characters, these characters are replaced with '?'.
Oracle version: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production, running on CentOS Linux release 6.0 (Final), kernel 2.6.32-71.29.1.el6.x86_64.
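A hedged sketch of the attachment part, assuming the '?' comes from WRITE_DATA converting through the client character set: declare the MIME part as UTF-8 and send the bytes yourself with WRITE_RAW_DATA (l_conn and l_attachment_text are hypothetical variables from the procedure):

utl_smtp.write_data(l_conn, 'Content-Type: text/plain; charset="utf-8"' || utl_tcp.crlf);
utl_smtp.write_data(l_conn, utl_tcp.crlf);
utl_smtp.write_raw_data(l_conn, utl_raw.cast_to_raw(convert(l_attachment_text, 'AL32UTF8')));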
I am having a hard time spooling a file and displaying special Brazilian characters, even though I can see them correctly in SQL Developer: LEOPOLDO COUTO DE MAGALHÃES JÚNIOR
Spool: LEOPOLDO COUTO DE MAGALH?ES JUNIOR
I've tried changing the NLS_LANG at the session level, but that cannot be done. I don't want to change the default language of my DB, but really need these characters to display correctly in a file.
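NLS_LANG is indeed a client environment setting rather than a session parameter, so a hedged alternative is to set it in the shell that launches SQL*Plus before spooling; the value and script name below are assumptions, and the character set should match what the file's consumer expects:

set NLS_LANG=BRAZILIAN PORTUGUESE_BRAZIL.WE8MSWIN1252
sqlplus user/password@db @spool_report.sql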
I have read and tried a lot, but I can't consume a UTF-8 web service correctly in an 11gR2 DB: German umlauts like ü are displayed/written as ü.

The service sends everything correctly (tested with soapUI 4.5.1).
The DB NLS parameters are:

SQL> select * from NLS_DATABASE_PARAMETERS;

PARAMETER                      VALUE
------------------------------ --------------------------
NLS_LANGUAGE                   GERMAN
NLS_TERRITORY                  GERMANY
[code]...
I consume the service with this script:
DECLARE
  l_http_request   UTL_HTTP.req;
  l_http_response  UTL_HTTP.resp;
  l_buffer_size    NUMBER (10) := 512;
  l_line_size      NUMBER (10) := 50;
  l_lines_count    NUMBER (10) := 20;
[code]....
What can I do to get the response correctly into NLS_CHARACTERSET WE8MSWIN1252? I think it must be possible to convert the response, but I haven't found the correct solution. I also don't understand why the body charset is always "ISO-8859-1" (commented in the procedure); setting utl_http.body_charset has no effect.
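An HTTP body with no explicit charset parameter defaults to ISO-8859-1, which would explain that observation. A minimal sketch, assuming the fix is to declare the real encoding on the response handle before reading, so READ_TEXT converts into the database charset (the URL is a placeholder):

DECLARE
  l_req  UTL_HTTP.req;
  l_resp UTL_HTTP.resp;
  l_line VARCHAR2(32767);
BEGIN
  l_req  := UTL_HTTP.begin_request('http://example.com/service', 'POST');
  -- ... set headers and write the SOAP request here ...
  l_resp := UTL_HTTP.get_response(l_req);
  UTL_HTTP.set_body_charset(l_resp, 'UTF-8');  -- the body really is UTF-8
  BEGIN
    LOOP
      UTL_HTTP.read_text(l_resp, l_line);
      DBMS_OUTPUT.put_line(l_line);
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.end_of_body THEN
      UTL_HTTP.end_response(l_resp);
  END;
END;
/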
I have Arabic data stored in an Oracle AL32UTF8 database in the two encodings below:

1 million rows as WE8MSWIN1252
0.5 million rows as AR8MSWIN1256

I would like to convert the 1 million WE8MSWIN1252 rows into AR8MSWIN1256. I could convert the data encoding from 1252 to 1256 using SQL Developer, but had no luck with the Oracle export/import utilities (both exp and expdp)…. I'm thinking maybe a certain locale is required for export/import to work.

Also, my company said SQL Developer is a free utility that may not be supported by Oracle, so I should use export and import for this. I need to convert only one table.
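If the situation is that AR8MSWIN1256 bytes were stored while the client claimed WE8MSWIN1252, a hedged in-place repair avoids export altogether by reversing the mislabeling; the table and column names are hypothetical, and this should be rehearsed on a copy of the table first:

update arabic_names
   set name = utl_i18n.raw_to_char(
                utl_i18n.string_to_raw(name, 'WE8MSWIN1252'),  -- recover the original bytes
                'AR8MSWIN1256');                               -- reinterpret them as Arabic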
I'm trying to insert a character from the extended ASCII character set. Specifically, there's a company that has an accented e (é) in its name. Right now, the company name doesn't have the e at all, accent or no accent. So I'm trying to do an update, something like:

update table1 set company_name='blahé' where company='blah';

It runs, but doesn't do the update. Even when I try to force an insert (instead of an update) I get nowhere; the accented e is simply dropped. So the basic question is: how do you insert extended-ASCII characters into Oracle?
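One hedged way to take the script file's encoding out of the equation is to specify the character by Unicode code point; if this version works, the problem was the client or file encoding (NLS_LANG) rather than the database:

update table1
   set company_name = 'blah' || unistr('\00E9')  -- U+00E9 = é
 where company='blah';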
cookiemonster wrote on Wed, 20 March 2013 22:01: Or if the user does anything to wipe the modified data - clear block, enter query, etc. - Forms will also raise the alert.
I've uploaded a new RTF template for Spanish, but when I log in with the Spanish credentials and run the program, it's not showing the changes, even though I've chosen the right language and territory code.
I have an Oracle DB whose charset is configured for the Korean language; the contents are English and Korean, and they work fine. If I import a txt file with Chinese characters into the DB, it doesn't show correctly when queried.

Is there any way I can solve this? I find there are many charset / NLS language options throughout the DB. Is there any alternative besides changing the whole DB charset? I have read the following responses from others:
1) Recreate the database with the correct character set.
2) Convert the database to AL32UTF8 (documentation link).
3) Change the database's national character set to AL32UTF8 and change your application to use NVARCHAR2()/NCHAR() datatypes instead of VARCHAR2() (documented at the same link as above).
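If the goal is only to load that text file correctly rather than to change the database, a hedged SQL*Loader sketch that declares the file's real encoding is below (ZHS16GBK is a guess, and the file, table, and column names are hypothetical); note this still only helps if the target columns are N-types or the database character set can represent Chinese, otherwise one of the options above is unavoidable:

LOAD DATA
CHARACTERSET ZHS16GBK
INFILE 'chinese.txt'
INTO TABLE cn_import
FIELDS TERMINATED BY ','
(col1, col2)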