SQL & PL/SQL :: UTF-8 - Japanese Chars Are Coming?
Aug 25, 2013
We have to get some data from U98 (saixdbU98) in UTF-8 format into an Excel sheet. We already have the queries ready for this, and they return data containing Japanese characters. But when we run the SELECT queries (through pbrun / Power Broker), the Japanese characters in the result are garbled; they come out as ????.
Let us know what process we have to follow to get the result in UTF-8 format in an Excel sheet. Here is the code:
set sqlblanklines on;
SET DEFINE OFF;
set linesize 1000
set long 1000000
set pages 50000
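The usual approach is to make the client character set UTF-8 via NLS_LANG before SQL*Plus runs, so the Japanese data is not converted to ? on the way out, and then spool to a file that Excel can open as UTF-8 (Excel sometimes also needs a UTF-8 BOM or the text-import wizard with UTF-8 selected). A minimal sketch, assuming a Unix shell; the connect string, spool file name and query are placeholders, not the original script:

# Hedged sketch: export a UTF-8 client character set, then spool comma-separated
# output so Excel can open it directly. Credentials, file name and query are placeholders.
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

sqlplus -S user/password@saixdbU98 <<'EOF'
set sqlblanklines on
SET DEFINE OFF
set linesize 1000
set long 1000000
set pages 50000
set colsep ','
spool japan_data.csv
-- your SELECT queries go here
spool off
EOF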
I have an XML file which contains Japanese characters; it is parsed by a UNIX script and the nawk utility, which write the data to a flat file separated by a delimiter. The Japanese characters look correct in the flat file.
I use SQL*Loader (within a Unix script) to import this data from the flat file into the Oracle database. If I view the data in the database through Toad, the Japanese characters display differently (not as in the XML or in the flat file).
But if I do an export of that table to a flat file through Toad, I can see the Japanese characters properly in the exported flat file.
(Note: I have set the env variables NLS_LANG=Japanese_Japan.JA16SJIS, LC_CTYPE="en_CA.UTF-8" in both the XML parser and the loader script.)
Why can't I see the Japanese characters while viewing through Toad?
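Issues like this are usually a mismatch between the database character set, the NLS_LANG used by the loading session, and the viewing client. A first hedged step is to confirm what the database itself can store and what bytes actually landed in the table; the table and column names in the second query are placeholders:

-- Check the database and national character sets; if the database character set is
-- not Unicode or a Japanese character set, JA16SJIS data loaded with a mismatched
-- NLS_LANG may already be stored converted or corrupted.
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

-- Inspect the raw bytes of a suspect value, independent of any client font or
-- conversion (your_table / your_column are placeholders).
SELECT your_column, DUMP(your_column, 1016) AS raw_bytes
FROM   your_table
WHERE  ROWNUM <= 5;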
I have a table containing data in different languages such as English, Japanese and Chinese. I need to retrieve only those rows which are in Japanese. What settings do I need to make? When I do a normal select, rows in languages other than English appear as junk data.
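The junk display is a client NLS/font issue, but for the filtering itself one hedged option (assuming the data is stored correctly in a Unicode, AL32UTF8 database) is to test for the Unicode blocks used by Japanese kana with REGEXP_LIKE and UNISTR. Note that kanji-only rows cannot be told apart from Chinese by code point alone, so this sketch keys on Hiragana/Katakana; table and column names are placeholders:

-- Hedged sketch: rows containing Hiragana or Katakana (U+3040 to U+30FF) are
-- treated as Japanese. Assumes a Unicode database and binary sorting for the range.
SELECT *
FROM   my_table
WHERE  REGEXP_LIKE(my_column,
         '[' || UNISTR('\3040') || '-' || UNISTR('\30FF') || ']');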
I have TOAD version 10.5.1.3 and I want to see the Japanese font. For that I tried changing View -> Toad Options -> Display -> Fonts to "Arial Unicode MS", selected Script as "Japanese" and clicked Apply and OK. But the Script option automatically changes back to "Western" (not "Japanese").
I have many different file names within my table and I want to remove the .TXT extension from each one. I want to try the SQL below, but being a newbie in Oracle I don't know how to express "Left" characters; "Left" is an invalid identifier.
Update TableName Set File_Name = Left(File_Name, Len(File_Name)-4) Where File_Name LIKE '%.TXT'
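Oracle has no LEFT or LEN functions; the usual equivalents are SUBSTR and LENGTH, or a regular expression. A hedged sketch of both forms, keeping the table and column names from the statement above:

-- Oracle equivalent of Left/Len: strip the last 4 characters (".TXT")
UPDATE TableName
SET    File_Name = SUBSTR(File_Name, 1, LENGTH(File_Name) - 4)
WHERE  File_Name LIKE '%.TXT';

-- Alternative: remove a trailing ".TXT" without the length arithmetic
UPDATE TableName
SET    File_Name = REGEXP_REPLACE(File_Name, '\.TXT$', '')
WHERE  File_Name LIKE '%.TXT';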
I'm using utl_encode.mimeheader_encode to encode the subject line for non-ASCII chars, but it is encoding only the first 75 chars. Is there any other way to encode the subject?
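UTL_ENCODE.MIMEHEADER_ENCODE produces a single RFC 2047 "encoded word", and encoded words are limited to 75 characters, so a long subject has to be split into chunks that are encoded separately and joined with folding whitespace. A hedged sketch under that assumption; encode_subject is a hypothetical helper and the chunk size is a conservative guess so each encoded word stays under the limit even for multibyte data:

-- Hedged sketch: encode a long subject as several RFC 2047 encoded words.
-- A character-set argument can be passed as the second parameter of
-- MIMEHEADER_ENCODE if the default is not what the mail needs.
CREATE OR REPLACE FUNCTION encode_subject (p_subject IN VARCHAR2)
  RETURN VARCHAR2
IS
  c_chunk  CONSTANT PLS_INTEGER := 16;   -- small chunks keep encoded words < 75 chars
  l_result VARCHAR2(4000);
  l_piece  VARCHAR2(200);
  l_pos    PLS_INTEGER := 1;
BEGIN
  WHILE l_pos <= LENGTH(p_subject) LOOP
    l_piece := SUBSTR(p_subject, l_pos, c_chunk);
    IF l_result IS NOT NULL THEN
      -- fold between encoded words: CRLF plus a leading space, as mail headers expect
      l_result := l_result || CHR(13) || CHR(10) || ' ';
    END IF;
    l_result := l_result || UTL_ENCODE.MIMEHEADER_ENCODE(l_piece);
    l_pos := l_pos + c_chunk;
  END LOOP;
  RETURN l_result;
END encode_subject;
/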
TRUNCATE
INTO TABLE NEW_DATA
FIELDS TERMINATED BY '|'
(INK_DATE date "YYYY-MM-DD", INV_ID, CUST_ID, AMOUNT)
My problem is that there are some rows (about 1%) where the columns INV_ID, CUST_ID, AMOUNT contain non-numeric characters, and they end up in the BAD file as errors.
Is there a way to make them end up in the discard file instead, so I don't end up with errors but discards? Or, even better, load them into another table looking like this:
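SQL*Loader's WHEN clause can only compare fields to literal strings, so it cannot test "is this numeric" directly. A common hedged workaround is to load everything into a staging table whose columns are all VARCHAR2 (so nothing is rejected) and then split the rows with SQL; NEW_DATA and its columns come from the control file above, while NEW_DATA_STAGE and NEW_DATA_REJECTS are assumed names:

-- Hedged sketch: move clean rows into NEW_DATA, the rest into a reject table.
INSERT INTO new_data (ink_date, inv_id, cust_id, amount)
SELECT TO_DATE(ink_date, 'YYYY-MM-DD'),
       TO_NUMBER(inv_id), TO_NUMBER(cust_id), TO_NUMBER(amount)
FROM   new_data_stage
WHERE  REGEXP_LIKE(inv_id,  '^\d+$')
AND    REGEXP_LIKE(cust_id, '^\d+$')
AND    REGEXP_LIKE(amount,  '^\d+(\.\d+)?$');

INSERT INTO new_data_rejects
SELECT *
FROM   new_data_stage
WHERE  NOT (    REGEXP_LIKE(inv_id,  '^\d+$')
            AND REGEXP_LIKE(cust_id, '^\d+$')
            AND REGEXP_LIKE(amount,  '^\d+(\.\d+)?$'));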
I tried to apply a SQL CASE statement in a SQL*Loader control file (inside double quotes). The load succeeds when the CASE or DECODE expression is within 258 chars, but when I add more cases it returns SQL*Loader-350: token longer than max.
LOAD DATA
INFILE 'F:Vouvou20110613_102_951454.unl'
BADFILE 'F:Vouvou.pad'
DISCARDFILE 'F:Vouvou.dic'
replace
INTO TABLE vou_test_2
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
[code].......
The above one succeeds. But when I edit the case statement as follows:
ACCOUNT_2001 "CASE WHEN:ACCOUNTTYPE1='2001'THEN :REWARDAMOUNT1 WHEN :ACCOUNTTYPE2='2001' THEN :REWARDAMOUNT2 WHEN :ACCOUNTTYPE3='2001' THEN :REWARDAMOUNT3 WHEN :ACCOUNTTYPE4='2001' THEN :REWARDAMOUNT4 WHEN :ACCOUNTTYPE5='2001' THEN :REWARDAMOUNT5 WHEN :ACCOUNTTYPE6='2001' THEN :REWARDAMOUNT6 WHEN :ACCOUNTTYPE7='2001' THEN :REWARDAMOUNT7 WHEN :ACCOUNTTYPE8='2001' THEN :REWARDAMOUNT8 WHEN :ACCOUNTTYPE9='2001' THEN :REWARDAMOUNT9 WHEN :ACCOUNTTYPE10='2001' THEN :REWARDAMOUNT10 END"
I get the error SQL*Loader-350: token longer than max allowable length of 258 chars.
Note: I can't modify the table structure to shorten the column names.
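Since the quoted expression counts as a single token and the column names can't be shortened, one hedged workaround is to move the derivation out of the control file: load the raw fields and compute ACCOUNT_2001 afterwards with a single UPDATE (or in a view or trigger). The sketch below assumes the ACCOUNTTYPEn / REWARDAMOUNTn values are loaded into columns of VOU_TEST_2 or a staging table:

-- Hedged sketch: derive ACCOUNT_2001 after the load instead of inside the control file.
UPDATE vou_test_2
SET    account_2001 = CASE
         WHEN accounttype1  = '2001' THEN rewardamount1
         WHEN accounttype2  = '2001' THEN rewardamount2
         WHEN accounttype3  = '2001' THEN rewardamount3
         WHEN accounttype4  = '2001' THEN rewardamount4
         WHEN accounttype5  = '2001' THEN rewardamount5
         WHEN accounttype6  = '2001' THEN rewardamount6
         WHEN accounttype7  = '2001' THEN rewardamount7
         WHEN accounttype8  = '2001' THEN rewardamount8
         WHEN accounttype9  = '2001' THEN rewardamount9
         WHEN accounttype10 = '2001' THEN rewardamount10
       END;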
I am using the following query to get the last 3 month-ends. If a month-end falls on a weekend (Sat/Sun), it should return the prior date instead.
Here COB is the table name, COB_DTE_ID_C is the date in numeric format (YYYYMMDD), COB_DAY_N is the day name and COB_MO_C is the month number.
I am not sure how to modify the query below, without lots of CASE WHEN statements, so that if I pass the month-end date (20110531) it gives me:
20110228 20110331 20110429
Instead it is giving me:
20110331 20110429 20110530
select * from (
  select DISTINCT MAX(COB_DTE_ID_C) OVER (PARTITION BY COB_MO_C)
  from   COB
  where  COB_DTE_ID_C < 20110531
  and    COB_DTE_ID_C > to_char(add_months(to_date(20110531,'YYYYMMDD'),-3),'YYYYMMDD')
  and    upper(COB_DAY_N) NOT IN ('SATURDAY','SUNDAY')
  ORDER BY 1
) where rownum < 4
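The unwanted 20110530 appears because the passed month (May) is still inside the date window, so its latest non-weekend date takes a slot; and 20110228 drops out because the lower bound add_months(..., -3) lands exactly on 20110228 and the > comparison excludes it. A hedged adjustment is to bound the window on whole months, from the first day of the month three months back up to (but excluding) the first day of the passed month:

-- Hedged sketch: consider only the three complete months before the passed date,
-- then take the latest non-weekend date per month as before.
select * from (
  select DISTINCT MAX(COB_DTE_ID_C) OVER (PARTITION BY COB_MO_C)
  from   COB
  where  COB_DTE_ID_C >= to_char(trunc(add_months(to_date(20110531,'YYYYMMDD'),-3),'MM'),'YYYYMMDD')
  and    COB_DTE_ID_C <  to_char(trunc(to_date(20110531,'YYYYMMDD'),'MM'),'YYYYMMDD')
  and    upper(COB_DAY_N) NOT IN ('SATURDAY','SUNDAY')
  ORDER BY 1
) where rownum < 4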
The thing is, I am generating a report in Excel through Forms and an Excel template in 10g. After generating the Excel through my code and macros (i.e. save Excel, close Excel and open Excel), once the generated Excel report is opened the focus does not come back to the form; it just stays busy. I tried form_exit, go_block and also a timer:
DECLARE timer_id Timer; one_minute NUMBER(5) := 90000; BEGIN timer_id := CREATE_TIMER('Motor_Timer', one_minute, NO_REPEAT); END;
and so on. When I use exit_form, the focus comes back but the Excel does not open again.
For a couple of employees created in our system, the workflow display name is coming with 'AMERICAN' appended at the end, which clearly looks like the language setting for the user. ORIG_SYSTEM for these users in WF_LOCAL_ROLES is WF_LOCAL_USERS.
There is no option called WF_LOCAL_USERS in the Synchronize WF Local Roles program.
I have a list item with two values, 0 and 1. However, when I click on the list item a scroll bar appears. My tester says that for two items she doesn't need a scroll bar.
I am using Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production version
I have the data in the following table:
drop table stud_fact;
create table stud_fact(stud_NM, LVL_CD, ST_DT_DIM_KEY, OVRNK) as
select 'ABG Sundal','H','20110630','175' from dual union all
select
I have a form which has a Number field. I set its Format Mask property to 9,99,99,99,99,999 so that I can format it comma-separated and restrict the user from entering decimal values. If the user enters any other format, for example a decimal value, it shows the message FRM-40209: Field must be of the form 9,99,99,99,99,999, which is fine. But when the user enters a decimal value and tries to save the form, it pops up this message several times.
I tried to catch this error in the ON-ERROR trigger and display a message, but that also pops up several times. I tried to raise form_trigger_failure as well; that is not working either. I want this error message to appear only once.
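One hedged way to reduce the repetition is to intercept FRM-40209 in the ON-ERROR trigger and only show your own message the first time, using a flag that you reset when the user returns to the item (for example in WHEN-NEW-ITEM-INSTANCE). This is a sketch, not a guaranteed fix, because Forms re-validates the item on every navigation and commit attempt; the global flag name is an assumption:

-- ON-ERROR trigger (hedged sketch)
DECLARE
  l_err NUMBER := ERROR_CODE;
BEGIN
  IF l_err = 40209 THEN
    DEFAULT_VALUE('N', 'GLOBAL.fmt_msg_shown');   -- create the flag if it does not exist
    IF :GLOBAL.fmt_msg_shown = 'N' THEN
      MESSAGE('Please enter a whole number (no decimals).');
      :GLOBAL.fmt_msg_shown := 'Y';
    END IF;
    RAISE FORM_TRIGGER_FAILURE;
  ELSE
    MESSAGE(ERROR_TYPE || '-' || TO_CHAR(l_err) || ': ' || ERROR_TEXT);
    RAISE FORM_TRIGGER_FAILURE;
  END IF;
END;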
Grid Version: 11.2.0.3, OS: Red Hat Enterprise Linux 5.6. Node 2 of our two-node RAC got rebooted. Upon reboot, CRS and the ASM instance came up, but the DB didn't. How can I check whether the DB is linked to CRS startup? How can I enable DB startup upon CRS startup?
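In 11.2 the database resource registered with Clusterware has a management policy: AUTOMATIC means it is restarted when CRS starts, MANUAL means it is not. A hedged sketch of the checks and the change; MYDB is a placeholder for the real db_unique_name:

# Check whether the database is registered with CRS and inspect its configuration
# (the management policy is shown in the output)
srvctl config database -d MYDB -a

# Set the policy so the database starts automatically with CRS
srvctl modify database -d MYDB -y AUTOMATIC

# Start it now on the rebooted node
srvctl start database -d MYDB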
We have a standalone database running on ASM. It is 11gR2 on a Linux 5 server. After the database bounce, the DB isn't coming up and shows the error below.
SQL> startup nomount
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/test/spfiletest.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA/test/spfiletest.ora
ORA-15056: additional error message
ORA-17503: ksfdopn:2 Failed to open file +DATA/test/spfiletest.ora
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-00450: background process 'ASMB' did not start
ORA-00443: background process "ASMB" did not start
ORA-06512: at line 4
I also checked the ASM disk groups and I can see that all of them are MOUNTED properly. In fact, I can also see the spfile physically present in the ASM disk group. It looks like the instance cannot identify the spfile to start up the DB, even though it is physically present in the ASM disk group. Find a snapshot below.
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N      512     4096   1048576  358400    329103   0                329103          0              N             DATA/
MOUNTED  EXTERN  N      512     4096   4194304  358368    358288   0                358288          0              N             FRA/
MOUNTED  EXTERN  N      512     4096   4194304  20480     18780    0                18780           0              N             REDO/
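When the disk groups look fine from the ASM side but the database instance reports ORA-15001 and ORA-00450, the database instance usually cannot talk to the ASM instance at all (wrong environment, the Oracle Restart stack not fully up, or an oracle binary missing the ASM group with role separation). Some hedged checks; TEST is a placeholder for the registered database name:

# Is the local Oracle Restart stack (including CSS and ASM) fully up?
crsctl stat res -t
srvctl status asm

# Start the database through srvctl so its dependency on ASM is honoured
srvctl start database -d TEST

# With role separation, the oracle binary in the DB home must carry the ASM group
# (e.g. asmadmin) or it cannot connect to ASM; check its group and setgid bits
ls -l $ORACLE_HOME/bin/oracle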
I take an export of one table (the export completes successfully without warnings). When I import it into the production database, the data in the table is not coming. I have pasted the table structure, the import command and the log file for the import.
CREATE TABLE t_id_rac_ra_header (
  ra_company     VARCHAR2(10) NOT NULL,
  ra_key         NUMBER NOT NULL,
  ra_doc_type    VARCHAR2(50) NOT NULL,
  ra_doc_number  VARCHAR2(25) NOT NULL,
  ra_doc_date    DATE DEFAULT SYSDATE NOT NULL,
  ra_reserve_key NUMBER,
I have a script which connects to Oracle and returns 2 date values. A few days back the database was down, and when the script executed it wrote garbage values to the spool file.
I added WHENEVER SQLERROR EXIT 1 before the connection and after the connection, but in both cases it still writes to the spool file.
I would like to exit with a value <> 0 if any connection issue happens (wrong user id or password, database down, that type of issue). No information should go to the spool file.
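One hedged pattern is to make SQL*Plus give up after a single failed logon (-L), let it return a non-zero exit status to the shell, and only open the spool inside the script body, which never runs if the logon fails. Credentials, connect string and file names are placeholders:

# run_report.sh (hedged sketch): -L makes SQL*Plus try the logon only once and exit
# with a failure status instead of re-prompting, so a down database or a bad password
# never reaches the spool step.
sqlplus -S -L scott/tiger@MYDB @get_dates.sql
rc=$?
if [ $rc -ne 0 ]; then
  echo "connection or SQL error (exit code $rc); no spool file written" >&2
  exit $rc
fi

-- get_dates.sql (hedged sketch): errors after logon also abort with a non-zero code
WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR EXIT 9
SPOOL dates.lst
SELECT SYSDATE, ADD_MONTHS(SYSDATE, 1) FROM dual;
SPOOL OFF
EXIT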
We have quite a number of sessions in database MES (production) coming from another machine.
From v$session, the program is oracle@WID27 (TNS V1-V3). This WID27 host contains quite a number of development databases. We have to trace which jobs are actually triggering this, as WID27 is not supposed to connect to production databases.
How can we tell whether the sessions come in over a database link or from the machine itself?
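v$session does not carry an explicit "came in over a database link" flag, but profiling what those sessions report can narrow it down: a session created by a database link from a database on WID27 typically shows the remote server process (program oracle@WID27, osuser of the Oracle software owner), whereas a job or script run directly on WID27 usually shows its own program and module. A starting query (a sketch, not a definitive test):

-- Sketch: profile the incoming WID27 sessions; osuser, module, action and
-- client_info often reveal the originating job or application.
SELECT sid, serial#, username, osuser, machine, program, module, action,
       client_info, logon_time
FROM   v$session
WHERE  program LIKE 'oracle@WID27%'
ORDER  BY logon_time;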
The customer is sending data from a legacy system (source) via a web service, which in turn calls a package on the Oracle server (target). This package simply inserts the data passed by the legacy system into a master staging table in the Oracle database. When they started this process in Sept 2011, 4 lakh records were inserted into the staging table. In Oct 11 it was 0 records, Nov 11 2 lakh records, Dec 11 1 lakh records, Jan 12 1 lakh records, Feb 12 73k records, Mar 12 0 records, Apr 12 52k records.
As we can see, the number of records inserted into the table has reduced over time. What should be the starting point here? Since the web service calls that package on the fly, how can I enable trace for that package? I cannot replicate this in Dev, as this process only works in PROD.
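Since the sessions are created by the web service on the fly, tracing a specific session is impractical, but DBMS_MONITOR can switch SQL tracing on for everything that connects with a given service/module combination. A hedged sketch; the service and module names are placeholders that would have to be read from v$session for the web-service sessions:

-- Hedged sketch: trace every session using this service/module, with waits and binds.
BEGIN
  DBMS_MONITOR.serv_mod_act_trace_enable(
    service_name => 'MY_SERVICE',   -- placeholder: see SERVICE_NAME in v$session
    module_name  => 'MY_MODULE',    -- placeholder: see MODULE in v$session
    waits        => TRUE,
    binds        => TRUE);
END;
/

-- Turn it off again once enough trace has been collected.
BEGIN
  DBMS_MONITOR.serv_mod_act_trace_disable(
    service_name => 'MY_SERVICE',
    module_name  => 'MY_MODULE');
END;
/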
I have developed one RDF in text output format. Some special characters are appearing in the text output of the RDF. Should I make any adjustments in the layout? How can I remove these special characters?
I am trying to update records in the target table based on the records coming in from the source. If the incoming record is present in the target table, I update it in the target; otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records. The target table is partitioned on the calendar key. I implement this whole logic using Informatica. Looking at the Informatica session log I find that the Informatica code is fine, but it is the update part that takes a long time (more than 5 days to update one million records). Find the TARGET TABLE query and the UPDATE query below.
TARGET TABLE:
CREATE TABLE OPERATIONS.DENIAL_REGRET_FACT (
  CALENDAR_KEY          INTEGER NOT NULL,
  DAY_TIME_KEY          INTEGER NOT NULL,
  SITE_KEY              NUMBER NOT NULL,
  RESERVATION_AGENT_KEY INTEGER NOT NULL,
  LOSS_CODE             VARCHAR2(30) NOT NULL,
  PROP_ID               VARCHAR2(5) NOT NULL,
[code].....
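For this "update if present, otherwise insert" pattern, a single set-based MERGE usually beats row-by-row updates by a wide margin. The sketch below assumes the source rows are available in a staging table (DENIAL_REGRET_STAGE is an assumed name) and that CALENDAR_KEY/DAY_TIME_KEY/SITE_KEY identify a row; replace the ON clause with the real business key, and keep the partitioning column (CALENDAR_KEY) in it so partition pruning can help:

-- Hedged sketch: set-based upsert instead of per-row updates; only a few columns shown.
MERGE INTO operations.denial_regret_fact tgt
USING operations.denial_regret_stage src
ON (    tgt.calendar_key = src.calendar_key
    AND tgt.day_time_key = src.day_time_key
    AND tgt.site_key     = src.site_key)
WHEN MATCHED THEN UPDATE SET
     tgt.loss_code = src.loss_code,
     tgt.prop_id   = src.prop_id
WHEN NOT MATCHED THEN INSERT
     (calendar_key, day_time_key, site_key, reservation_agent_key, loss_code, prop_id)
VALUES
     (src.calendar_key, src.day_time_key, src.site_key, src.reservation_agent_key,
      src.loss_code, src.prop_id);

Note that the columns in the ON clause must not also appear in the UPDATE SET list.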