I have a table with a BLOB column. In the BLOB column I have XML data (see below). I want to parse the XML and get the value of the node '<GetTest>'. How do I achieve that? Sample table:
create table SAMPLE_XML_TABLE ( ID NUMBER not null, XMLSTRING BLOB );

<?xml version="1.0" standalone="no"?>
<FromResponse xmlns="http://www.somewebsite.com/FromResponse">
<Code>0000</Code>
[code].........
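A minimal sketch of one way to do this with XMLTABLE, assuming the BLOB holds AL32UTF8-encoded XML and that <GetTest> is a direct child of <FromResponse> in the namespace shown above (the character set and the element location are assumptions):

SELECT x.get_test
FROM   sample_xml_table t,
       XMLTABLE(
         XMLNAMESPACES(DEFAULT 'http://www.somewebsite.com/FromResponse'),
         '/FromResponse'
         -- convert the BLOB to XMLTYPE; AL32UTF8 is an assumed character set
         PASSING XMLTYPE(t.xmlstring, NLS_CHARSET_ID('AL32UTF8'))
         COLUMNS get_test VARCHAR2(100) PATH 'GetTest'
       ) x;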
How do I extract the data from XML using the XSD file? (Files attached.)
Explanation: first check the EmailMessage tag from order_conf.xml against Email.xml (<xsd:element name="EmailMessage">); if it exists, go to the next node. EmailMessage (tag exists in the order XML file) -> next, <ns1:emailNotificationype>: this tag should fall under the EmailMessage tag (<xsd:element ref="emailNotificationype">) in Email.xml -> next, <ns1:orderNotification>: check this tag against <xsd:element name="orderNotification"> in Email.xml -> next, <ns1:templateFormatInfo>: it should fall under <xsd:element name="orderNotification"> in Email.xml -> next, <ns1:templateFormatInfo> should follow these tags: <xsd:element name="templateFormatInfo">, <xsd:element ref="templatecode"/>, <xsd:element ref="templateversion"/>.
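If the goal is to validate the incoming document against the registered schema before pulling values out, a minimal PL/SQL sketch could look like the following; the directory object XML_DIR, the file name, and the schema URL are assumptions, not taken from the attachments, and the XSD must already be registered with DBMS_XMLSCHEMA.REGISTERSCHEMA:

DECLARE
  doc XMLTYPE;
BEGIN
  -- load the instance document; XML_DIR is a hypothetical directory object
  doc := XMLTYPE(BFILENAME('XML_DIR', 'order_conf.xml'), NLS_CHARSET_ID('AL32UTF8'));
  -- associate it with a schema that was registered under this (assumed) URL
  doc := doc.createSchemaBasedXML('http://example.com/Email.xsd');
  -- raises an error if the document does not conform to the schema
  doc.schemaValidate();
  DBMS_OUTPUT.PUT_LINE('order_conf.xml is valid against the schema');
END;
/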
I have a requirement to extract the data from a table using the UTL_FILE utilities.
My problem is: say I have a table t1 with columns C1, C2, C3, C4, C5. This table t1 gets loaded every day. I need to pick up only the data that has changed or been inserted in the last load. How can I achieve this? There is no timestamp in this table.
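One common workaround when there is no timestamp or audit column is to keep a snapshot copy of the previous load and diff the two loads with MINUS; a minimal sketch (the snapshot table name t1_prev is made up):

-- one-time setup: an empty copy of t1 to hold the previous load
CREATE TABLE t1_prev AS SELECT * FROM t1 WHERE 1 = 0;

-- rows that are new or changed since the last snapshot
SELECT c1, c2, c3, c4, c5 FROM t1
MINUS
SELECT c1, c2, c3, c4, c5 FROM t1_prev;

-- after extracting, refresh the snapshot for the next run
TRUNCATE TABLE t1_prev;
INSERT INTO t1_prev SELECT * FROM t1;
COMMIT;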
I'm working on writing a shell script to parse a parameter file, but at the same time I want to be able to override the parameter file settings with other command-line settings. For instance, my par file has export/import settings for the username, password, schema, etc.
I want to run the same export/import with those settings but for a different schema. I want to be able to put schema=<different_than_par_file> after parfile=<parfile.par> and have the parfile be read and applied for everything except the different schema.
Right now I'm storing the cmd line and parsing it again looking for other parameters besides the parfile.
I have a column which holds the data in the below format.
Source Data :
>SNO_SDSDQ-8192-BN>SNO_54-99-24120-8192
>SNO_SDSDQ-8192-BN>SNO_54-99-24120-8192>SNO_54-90-16489-008G
>SNO_SDPMDB-008G-11>SNO_54-90-18008-008G>SNO_54-62-08791-008G>SNO_20-81-00327
>SNO_SDPMDB-008G-12>SNO_54-90-17830-008G>SNO_54-62-08598-008G>SNO_20-81-00327
[code]..
Problem Statement: Split the above data into individual components and create columns/aliases dynamically. If the column is present, place the data under the created column.
Example 1) : >SNO_SDSDQ-8192-BN>SNO_54-99-24120-8192
Result:
Column Names : SKU, Col_54-99
Data : SNO_SDSDQ-8192-BN, SNO_54-99-24120-8192
Example 2) >SNO_SDSDAA-002G-101-J>SNO_54-90-16002-002G>SNO_54-62-05781-002G>SNO_20-81-00135-5
Column Names : SKU, Col_54-90, Col_54_62, Col_20_81
Data : SNO_SDSDAA-002G-101-J, SNO_54-90-16002-002G, SNO_54-62-05781-002G, SNO_20-81-00135-5
The column name can be derived from components like SNO_54-62-05781-002G:
i.e. in SNO_54-62-05781-002G, stripping out "54-62" leaves SNO_-05781-002G, and "54-62" gives the column name.
The first column will always have data starting with SNO_SD%. This column is constant and will be named SKU.
I need this data to be placed in the same record/row, but under different columns as per the data set. Basically, can the data be split into multiple parts based on delimiters, with the columns created based on the unique data in the parts that form the column value?
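Creating the columns truly dynamically needs dynamic SQL or a pivot, but the splitting itself can be done with REGEXP_SUBSTR; a minimal sketch, assuming the source column is SRC_DATA in a table SRC_TAB (both names are made up):

SELECT src_data,
       REGEXP_SUBSTR(src_data, '[^>]+', 1, 1) AS sku,      -- first component, always SNO_SD%
       REGEXP_SUBSTR(src_data, '[^>]+', 1, 2) AS part_2,
       REGEXP_SUBSTR(src_data, '[^>]+', 1, 3) AS part_3,
       REGEXP_SUBSTR(src_data, '[^>]+', 1, 4) AS part_4,
       -- the "54-62" style key that would drive the dynamic column name
       REGEXP_SUBSTR(REGEXP_SUBSTR(src_data, '[^>]+', 1, 2), '[0-9]{2}-[0-9]{2}') AS col_key_2
FROM   src_tab;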
I have data in multiple Oracle tables. I have to create an extract flat file after applying some validation and business logic, and store it on a Unix server with the naming convention FF_RMS_SC_<<YYYYMMDDhhmm>>.txt. This job will be scheduled to run daily to create the flat file. I guess PL/SQL and Unix need to be used.
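A minimal PL/SQL sketch of the file-writing and naming part, assuming a directory object EXTRACT_DIR pointing at the Unix path and a made-up source query (both the directory name and the query are assumptions, not from the requirement):

DECLARE
  l_file  UTL_FILE.FILE_TYPE;
  l_name  VARCHAR2(100);
BEGIN
  -- build the file name: FF_RMS_SC_<YYYYMMDDhhmm>.txt
  l_name := 'FF_RMS_SC_' || TO_CHAR(SYSDATE, 'YYYYMMDDHH24MI') || '.txt';
  l_file := UTL_FILE.FOPEN('EXTRACT_DIR', l_name, 'W');

  FOR r IN (SELECT col1 || '|' || col2 AS line
            FROM   some_source_table)   -- placeholder for the real validation/business logic
  LOOP
    UTL_FILE.PUT_LINE(l_file, r.line);
  END LOOP;

  UTL_FILE.FCLOSE(l_file);
END;
/

The block could then be scheduled daily with DBMS_SCHEDULER, or called from a Unix cron job via SQL*Plus.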
A big table of more than 4 GB in a 10g DB needs to be extracted/exported into a text file; the column delimiter is "&|" and the row delimiter is "$#". I cannot do it from TOAD, as it hangs while extracting the big table.
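One lightweight alternative to TOAD is a SQL*Plus spool, which streams the rows straight to a file; a minimal sketch, assuming a table BIG_TABLE with columns C1, C2, C3 (names are placeholders for the real table):

SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET LINESIZE 32767
SET TRIMSPOOL ON
SET TERMOUT OFF
-- keep SQL*Plus from treating the literal "&|" as a substitution variable
SET DEFINE OFF

SPOOL /tmp/big_table_extract.txt

-- "&|" between columns, "$#" at the end of each row
SELECT c1 || '&|' || c2 || '&|' || c3 || '$#'
FROM   big_table;

SPOOL OFF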
Upgrading from 10.1.0.2 to 10.1.0.5. Enterprise Manager requires the 'newest' version of the Oracle JDBC driver. I downloaded what I believe to be the correct file (classes12.jar). I'm unclear what to do with it; my reading has pointed me in the following direction:
1) copy to c:\oracle\product\10.1.0\db_1\jre\1.4.1\bin 2) extract
here is the problem...tried:
1) just clicking on it (nothing)
2) c:\program files\java\jre1.6.0_03\bin\javaw -jar classes12.jar Error: Failed to load Main-Class manifest attribute from c:\oracle\product\10.1.0\db_1\jre\1.4.1\bin\classes12.jar
Is my location correct? I've been hunting everywhere and making no progress.
I have a Bash script that counts the rows of a CSV file, extracts the fields, and makes inserts in a SQL file. Then it logs into SQL*Plus and calls the insert file. The SQL file looks like this:
I rely on "WHENEVER SQLERROR EXIT" for things to go the right path. However sometimes because of the contents of the CVS files (which I can't control) some rows don't get inserted but SqlPlus doesn't see that as an error, doesn't exit and I end up with the wrong number of rows being informed in the second insert.Is there some kind of "if-then-else" construct in Sql? After all the inserts are made, do a "select count (*)" and compare that number to the one informed by the script. If they match, make the final insert and commit; else exit.
I'm a SAP consultant working in SQL on NT platforms. This is the first conversion from Oracle that I have done. My client has provided us with a "cold" backup of the Oracle database on a HD formatted in Unix; I have the partition mounted and I'm able to view the files. I have the ORADATA folder with all the .DBF files.
Q: How do I extract the data from the .DBF files? I need to export to something workable with SQL.
Original database was on Unix, I'm operating on Windows platform.
I'm having a problem writing an appropriate query for a report in my web application. I need it to extract data from three related tables:
CAR (PK CAR_ID INT NOT NULL, TYPE VARCHAR NOT NULL)
REPAIR_CENTER (PK REPAIR_CENTER_ID INT NOT NULL, NAME VARCHAR NOT NULL)
[code]...
I need the report to display only available cars. Available cars must have these characteristics:
1. If the CAR_REPAIR table is empty, display all entries from the CAR table.
2. If a car has multiple entries in the CAR_REPAIR table, display it only if its latest DATE_RETURN is lower than today's date (SYSDATE); otherwise don't display that car.
3. Don't display cars that are in the CAR_REPAIR table and have a DATE_RETURN value of NULL.
A query sketch for these rules follows below.
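A sketch under an assumed CAR_REPAIR structure of (CAR_ID, REPAIR_CENTER_ID, DATE_SENT, DATE_RETURN), since the actual columns are behind the [code]... placeholder:

-- assumed structure: CAR_REPAIR(CAR_ID, REPAIR_CENTER_ID, DATE_SENT, DATE_RETURN)
SELECT c.car_id, c.type
FROM   car c
WHERE  NOT EXISTS (                      -- rule 3: exclude cars still in repair (no return date)
         SELECT 1 FROM car_repair r
         WHERE  r.car_id = c.car_id
         AND    r.date_return IS NULL)
AND    NOT EXISTS (                      -- rule 2: exclude cars whose latest return date is not in the past
         SELECT 1 FROM car_repair r
         WHERE  r.car_id = c.car_id
         GROUP  BY r.car_id
         HAVING MAX(r.date_return) >= SYSDATE);
-- rule 1 is covered implicitly: with CAR_REPAIR empty, both NOT EXISTS are true for every car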
Is there a way I can extract the data from Oracle Express 6 (OLAP data) into Excel or any other format, so that it can be loaded into MS SQL or a normal Oracle database?
We are getting the following error when trying to extract data from ASM.
GGS ERROR 500 Oracle GoldenGate Capture for Oracle, ext_1.prm: Getting attributes for ASM file +DATA/testgg/onlinelog/group_1.257.742844671, SQL <BEGIN dbms_diskgroup.getfileattr('+DATA/testgg/onlinelog/group_1.257.742844671', :filetype, :filesize, :lblksize); END;>: (6550) ORA-06550: line 1, column 7: PLS-00201: identifier 'DBMS_DISKGROUP.GETFILEATTR' must be declared ORA-06550: line 1, column 7: PL/SQL: Statement ignored. Not able to establish initial position for begin time 2011-02-16 16:42:05.
I have a table which has two columns: a date on an hourly basis and a response time. I want to pull the previous day's data on an hourly basis with the corresponding response time. The data is loaded into the table every midnight.
E.g.: today's date is 23/10/2012; I want to pull data from 22/10/12 00 to 22/10/12 23.
The query below pulls the dates as required, but I am not able to pull the response time.
with a as (
  select min(trunc(lhour)) as mindate,
         max(trunc(lhour)) as maxdate
  from   AVG_HR
)
select to_char(maxdate + (level/25), 'dd/mm/yyyy hh24') as dates
from   a
connect by level <= (1)*24;
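One way to bring the response time along is to generate the 24 hours of the previous day and outer-join them back to the table; a minimal sketch, assuming the two columns are called LHOUR and RESP_TIME (the second name is a guess):

WITH hours AS (
  -- 24 hourly timestamps for yesterday: 00:00 through 23:00
  SELECT TRUNC(SYSDATE - 1) + (LEVEL - 1) / 24 AS lhour
  FROM   dual
  CONNECT BY LEVEL <= 24
)
SELECT TO_CHAR(h.lhour, 'dd/mm/yyyy hh24') AS dates,
       a.resp_time
FROM   hours h
LEFT   JOIN avg_hr a
       ON a.lhour = h.lhour           -- assumes LHOUR is stored on the exact hour
ORDER  BY h.lhour;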
select t1.c1 as gr1, t2.c1 as gr2, t1.c2
from   test_data t1, test_data t2
where  t1.c1 <> t2.c1
and    t1.c2 = t2.c2
and    (select count(*) from test_data t3 where t3.c1 = t1.c1) =
       (select count(*) from test_data t4 where t4.c1 = t2.c1)
order  by 1 asc, 2 asc
but I don't find a way to re-filter to group the data as expected. The idea is to find the subsets and show each set of data together with the values in column c1.
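One way to compare whole sets rather than row pairs is to collapse each c1 group's c2 values into a single ordered string and then group identical strings together; a minimal sketch, assuming LISTAGG (11gR2) is available:

WITH sets AS (
  SELECT c1,
         LISTAGG(c2, ',') WITHIN GROUP (ORDER BY c2) AS c2_set   -- one string per c1 group
  FROM   test_data
  GROUP  BY c1
)
SELECT c2_set,
       LISTAGG(c1, ',') WITHIN GROUP (ORDER BY c1) AS matching_c1_values
FROM   sets
GROUP  BY c2_set
HAVING COUNT(*) > 1;   -- keep only c2 sets shared by more than one c1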
Is it possible for Access to extract data from an Oracle database and upload it directly?
Currently we have a business process where data is extracted by scheduled queries (30+) to Excel spreadsheets, then manually edited to remove heading lines and imported into an Access database. I see an opportunity to automate a time-consuming manual activity by having the Access DB extract the data and upload it directly.