I'm an SAP consultant working in SQL on NT platforms, and this is the first conversion from Oracle that I have done. My client has provided us with a "cold" backup of the Oracle database on a hard drive formatted in Unix; I have the partition mounted and I'm able to view the files. I have the ORDATA folder with all the .DBF files.
Q: How do I extract the data from the .DBF files? I need to export to something workable with SQL.
The original database was on Unix; I'm operating on a Windows platform.
I have two small regexp tasks, where we need to extract the numbers from a text value, multiply them, and get the result.
Input field (varchar2): 5x a day x 10 days
Expected output: 5 * 10 = 50

SELECT regexp_replace('5x a day x 10 days', '[^[:digit:]]') FROM dual;
The code extracts the numbers, but doesn't multiply them and give the result. I have one more scenario as well.
Input field (varchar2): take 2 tablets (800 mg) by oral route every 4 hours while awake for 10 days
Expected output: 2 * 800 * 4 * 6 * 10 = 384000

SELECT regexp_replace('take 2 tablets (800 mg) by oral route every 4 hours while awake for 10 days', '[^[:digit:]]') FROM dual;
For the above, when a number is in hours, we need to convert it into a per-day figure by multiplying by the required factor (every 4 hours -> 24/4 = 6 times a day, which is where the 6 above comes from).
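A minimal sketch of the multiplication step, assuming a PL/SQL helper (the function name multiply_numbers and the digit-run parsing are my own, not from the post); it multiplies every run of digits it finds, and the hours-to-days factor would still have to be detected and applied separately:

CREATE OR REPLACE FUNCTION multiply_numbers (p_text IN VARCHAR2)
  RETURN NUMBER
IS
  l_result NUMBER := 1;
  l_num    VARCHAR2(40);
  l_occ    PLS_INTEGER := 1;
BEGIN
  LOOP
    -- grab the l_occ-th run of digits from the text
    l_num := REGEXP_SUBSTR(p_text, '[[:digit:]]+', 1, l_occ);
    EXIT WHEN l_num IS NULL;
    l_result := l_result * TO_NUMBER(l_num);
    l_occ := l_occ + 1;
  END LOOP;
  RETURN l_result;
END multiply_numbers;
/

SELECT multiply_numbers('5x a day x 10 days') FROM dual;   -- returns 50

Note that text with no digits at all would return 1, so a guard may be wanted in real use.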
I need to write a query based on a bunch of user-supplied IDs. The IDs will be pasted as plain text, one per row, by the end user in a memo field in the reporting environment, and I need to do something like this:
SELECT PHONE_NUMBER FROM TELEPHONE WHERE ID IN('MEMO_ID1', 'MEMO_ID2',.....)
The reporting environment does not provide any tools to automatically convert the plain text into IDs.
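One hedged sketch, assuming the memo text reaches the query as a bind variable (here :memo) with one ID per line: split the text on newlines and feed the pieces to the IN clause.

SELECT phone_number
FROM   telephone
WHERE  id IN (
         SELECT TRIM(REGEXP_SUBSTR(:memo, '[^' || CHR(10) || ']+', 1, LEVEL))
         FROM   dual
         CONNECT BY REGEXP_SUBSTR(:memo, '[^' || CHR(10) || ']+', 1, LEVEL) IS NOT NULL
       );

TRIM handles stray spaces; if the pasted text has Windows line endings, CHR(13) would need to be stripped as well.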
I am only able to extract 4000 characters from the CLOB column "DESCRIPTION". How do I get more characters, or the maximum for that column, with the same query concept?
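The 4000-character ceiling is the SQL VARCHAR2 limit (pre-12c), which is why the query stops there. One hedged workaround is to read the CLOB in 4000-character slices with DBMS_LOB.SUBSTR (the table name below is a placeholder):

SELECT DBMS_LOB.SUBSTR(description, 4000, 1)    AS part1,   -- chars 1-4000
       DBMS_LOB.SUBSTR(description, 4000, 4001) AS part2    -- chars 4001-8000
FROM   my_table;

In PL/SQL the same call can return up to 32767 characters at a time, so a small loop can pull the whole column if the slices are unwieldy.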
Is it possible for Access to extract data from an Oracle database and upload it directly?
Currently we have a business process where data is extracted by scheduled queries (30+) to Excel spreadsheets, then manually edited to remove heading lines and imported into an Access database. I see an opportunity to automate a time-consuming manual activity by having the Access database extract the data and upload it directly.
In our application, we allow users to upload data using an Excel sheet in the UI. We use a PHP script in the UI, and SQL*Loader to load the data from the Excel sheet into temp_table.
The temp_table has a primary key.
My question is: is there any way to put some batch ID on every upload into that table automatically, so that we can easily extract the data by batch ID? We are using Oracle 11g.
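One hedged approach (all object names here are mine): a sequence that advances once per load session, cached in a package variable and stamped onto each row by a trigger. This assumes temp_table has a batch_id column and that the load is conventional-path, since direct-path SQL*Loader bypasses triggers.

CREATE SEQUENCE batch_seq;

CREATE OR REPLACE PACKAGE batch_pkg AS
  FUNCTION current_batch RETURN NUMBER;
END batch_pkg;
/
CREATE OR REPLACE PACKAGE BODY batch_pkg AS
  g_batch_id NUMBER;   -- session-lived: one value per sqlldr run

  FUNCTION current_batch RETURN NUMBER IS
  BEGIN
    IF g_batch_id IS NULL THEN
      SELECT batch_seq.NEXTVAL INTO g_batch_id FROM dual;
    END IF;
    RETURN g_batch_id;
  END current_batch;
END batch_pkg;
/
CREATE OR REPLACE TRIGGER temp_table_batch_trg
  BEFORE INSERT ON temp_table
  FOR EACH ROW
BEGIN
  :NEW.batch_id := batch_pkg.current_batch;
END;
/

Because each sqlldr invocation is a single database session, every row in one upload receives the same batch_id, and the next upload gets the next sequence value.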
I have set up a cross-platform (Microsoft Windows IA (32-bit) -> Linux x86 64-bit) Data Guard and it worked fine. Then I did a switchover (which again worked) and found out the data is not getting replicated at all. I checked the data files visible from the new primary database and found out they are in the Windows format, as below:
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSTEM01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSAUX01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\UNDOTBS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\USERS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\RMAN\RMAN_TS01.DBF
and physically they were created at '/home/app/oracle/product/11.2.0/db_1/dbs/' and as
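For what it's worth, the usual knob for this in a cross-platform standby is the name-convert pair of initialization parameters; a hedged sketch (the Linux target path is illustrative), set in the spfile and picked up at the next restart:

ALTER SYSTEM SET db_file_name_convert =
  'D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\', '/u01/app/oracle/oradata/mfs/'
  SCOPE = SPFILE;
ALTER SYSTEM SET log_file_name_convert =
  'D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\', '/u01/app/oracle/oradata/mfs/'
  SCOPE = SPFILE;

Files already created under dbs/ would still need to be moved and renamed with ALTER DATABASE RENAME FILE.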
I have a rather complicated process to import text files into my DB. I'm given thousands of files every day, comma-separated, with 80 fields each. With a bash script, I take the 45 fields I need and then split each file into x number of files, grouping the rows by three fields. Then I use SQL*Loader to insert them into the DB.
The problem is that now I must insert into two tables, and SQL*Loader's "WHEN" clause doesn't allow the use of > and <.
To make things a little clearer, take this text file (already split, grouped, and ready to be inserted):

...
1,1,135,1900,0,12,114,2011/08/25 17:19:00,135,...
1,1,135,1900,0,13,119,2011/08/25 17:19:00,136,...
1,1,135,1900,0,14,117,2011/08/25 17:19:00,137,...
1,1,135,1900,0,15,113,2011/08/25 17:19:00,138,...
1,1,135,1900,0,16,119,2011/08/25 17:19:00,139,...
...

When field 6 is higher than or equal to 14, it must go to table a. When field 6 is lower than 14, it must go to table b. I can't use external tables, as I'm on a different server.
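One hedged workaround, since the WHEN clause only supports equality tests: load everything into a single staging table with SQL*Loader, then route the rows with a multi-table insert (all table and column names below are illustrative):

INSERT FIRST
  WHEN field6 >= 14 THEN
    INTO table_a (col1, col2, col3, col4, col5, col6, col7, col8, col9)
    VALUES (f1, f2, f3, f4, f5, field6, f7, f8, f9)
  ELSE
    INTO table_b (col1, col2, col3, col4, col5, col6, col7, col8, col9)
    VALUES (f1, f2, f3, f4, f5, field6, f7, f8, f9)
SELECT f1, f2, f3, f4, f5, field6, f7, f8, f9
FROM   staging_table;

Alternatively, since a bash script already rewrites the files, it could split each file on field 6 into two files and feed two simple control files.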
I am migrating data from a Solid database to Oracle, using flat files to do that:
1. I download the data to flat files from Solid.
2. I move the files to the Oracle server.
3. I upload the data to Oracle.
Now I have done 90% of the database, but I have found some tables that have description columns, and in these descriptions the users have pressed Enter, so when I try to upload the data to Oracle, SQL*Loader cannot recognize these embedded line-break characters.
Example:
'25','0.','5.','0.','0.','0.','0.','0.','0.','0.','0.','0.','0.','',''
'26','0.','2.','0.','0.','0.','0.','3.','0.','0.','0.','0.','0.','',''
'27','0.','1.','0.','0.','0.','0.','0.','0.','0.','0.','0.','0.','',''
'28','0.','1.','0.','0.','0.','0.','0.','0.','0.','0.','0.','0.','',''
'29','0.','38.','0.','0.','0.','0.','0.','0.','0.','0.','0.','0.','',''
'30','0.','13.','0.','0.','0.','0.','0.','6.','0.','6.','0.','0.','|SE RECHAZA B20CS50SNW ^M ^M SE RECHAZAN CINCO PZAS ^M DOS MOD. HSC15I41EH,DOS MOD. HSK15I41EH |Agregó: 06/06/2009 12:22:50 |','DEV. A PROV.'
'31','0.','50.','0.','0.','0.','0.','0.','0.','0.','0.','0.','0.','',''
'32','0.','9.','0.','0.','0.','0.','0.','0.','0.','0.','0.','0.','',''
'33','0.','2.','0.','0.','0.','0.','0.','0.','0.','0.','0.','0.','',''
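A hedged sketch of one common fix: give each record an explicit terminator when exporting from Solid (here a pipe before the newline, purely illustrative) and tell SQL*Loader to treat that string as the end-of-record marker, so the ^M / newline characters inside a description no longer end a record. Table and column names are placeholders:

LOAD DATA
INFILE 'descriptions.dat' "str '|\n'"
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY "'"
( id,
  qty,
  description CHAR(4000)   -- sized for multi-line text
)

This assumes the export step can be changed to append the terminator; if not, the bash preprocessing could add it instead.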
I am on Oracle 11.2.0.3 on Linux and have implemented Oracle Text. I created Oracle Text indexes with the default settings. However, in an Oracle white paper I read that the default setting may not be right. Here is the excerpt from the white paper by Roger Ford: URL.... (Part of this white paper below.) Index Memory.
"As mentioned above, cached $I entries are flushed to disk each time the indexing memory is exhausted. The default index memory at installation is a mere 12MB, which is very low. Users can specify up to 50MB at index creation time, but this is still pretty low. This would be done by a CREATE INDEX statement something like:

CREATE INDEX myindex ON mytable(mycol) INDEXTYPE IS ctxsys.context PARAMETERS ('index memory 50M');

To allow index memory settings above 50MB, the CTXSYS user must first increase the value of the MAX_INDEX_MEMORY parameter, like this:

begin ctx_adm.set_parameter('max_index_memory', '500M'); end;

The setting for index memory should never be so high as to cause paging, as this will have a serious effect on indexing speed. On smaller dedicated systems, it is sometimes advantageous to temporarily decrease the amount of memory consumed by the Oracle SGA (for example by decreasing DB_CACHE_SIZE and/or SHARED_POOL_SIZE) during the index creation process.
Once the index has been created, the SGA size can be increased again to improve query performance." (End of the white paper excerpt.) My questions are:
1) Applying this procedure (ctx_adm.set_parameter) required me to log in as the CTXSYS user. Is that right? Or can it be avoided and done from the application schema? This user CTXSYS is locked by default and I had to unlock it. Is that OK to do in production?
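For reference, a minimal sketch of the pattern often used when a locked maintenance account has to run a one-off command, assuming a DBA session (whether your site policy permits this in production is a separate question):

-- as a DBA:
ALTER USER ctxsys ACCOUNT UNLOCK;

-- then, connected as CTXSYS, the one-off setting:
BEGIN
  ctx_adm.set_parameter('max_index_memory', '500M');
END;
/

-- and relock the account afterwards:
ALTER USER ctxsys ACCOUNT LOCK;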
2) What value should I use for max_index_memory? Should it be 500MB? My SGA is 2GB in Dev/QA and 3GB in production. Also, at index creation, what value should I set for the index memory parameter? I had left it at the default, but how should I change it now? Should it be 50MB, as shown in the example above?
3) The white paper also refers to rebuilding an index at some interval, like once a month: ALTER INDEX DR$index_name$X REBUILD ONLINE; We are on Oracle 11g, and the white paper was written in 2003.
I'm new to Oracle Text. I want to implement search for unique IDs, like Google search: when the user starts typing 123, it needs to bring back anything starting with 123 and show the entries the way Google shows suggestions. When I add the digit 4, making it 1234, it has to bring back numbers starting with 1234.
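A hedged sketch of the simplest reading of this (table and column names are mine): for pure leading-prefix matching, an ordinary B-tree index on the ID column plus a LIKE 'prefix%' lookup is often enough, with no Oracle Text index at all:

SELECT unique_id
FROM  (SELECT unique_id
       FROM   id_table
       WHERE  unique_id LIKE :typed_so_far || '%'   -- e.g. '123', then '1234'
       ORDER  BY unique_id)
WHERE  ROWNUM <= 10;                                -- cap the suggestion list

Oracle Text becomes useful when the match can start mid-string or spans words; for a type-ahead on IDs, the B-tree range scan is the cheaper tool.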
I'm having a problem writing an appropriate query for a report in my web application. I need it to extract data from three related tables:
CAR(
  PK CAR_ID INT NOT NULL,
  TYPE VARCHAR NOT NULL)
REPAIR_CENTER(
  PK REPAIR_CENTER_ID INT NOT NULL,
  NAME VARCHAR NOT NULL)
...
I need the report to display only available cars. Available cars must have these characteristics:
1. If the CAR_REPAIR table is empty, display all entries from the CAR table.
2. If a car has multiple entries in the CAR_REPAIR table, consider only the latest DATE_RETURN, and display the car only if that date is lower than today's date (SYSDATE); otherwise don't display that car.
3. Don't display cars that are in the CAR_REPAIR table and have a DATE_RETURN value of NULL.
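A hedged sketch of one way to combine the three rules, assuming CAR_REPAIR has a CAR_ID foreign key alongside DATE_RETURN (its real definition is cut off above):

SELECT c.car_id, c.type
FROM   car c
LEFT JOIN (
         SELECT car_id,
                MAX(date_return) AS last_return,
                COUNT(CASE WHEN date_return IS NULL THEN 1 END) AS open_repairs
         FROM   car_repair
         GROUP  BY car_id
       ) r ON r.car_id = c.car_id
WHERE  r.car_id IS NULL                                   -- rule 1: never repaired
   OR (r.open_repairs = 0 AND r.last_return < SYSDATE);   -- rules 2 and 3

The aggregate subquery collapses each car's repair history to one row, so rule 3 (any NULL return date) and rule 2 (latest return date in the past) can be tested together.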
Is there a way I can extract the data from Oracle Express 6 (OLAP data) into Excel or any other format, so that it can be loaded into MS SQL or a normal Oracle database?
We are getting the following error when we try to extract data from ASM:
GGS ERROR 500 Oracle GoldenGate Capture for Oracle, ext_1.prm: Getting attributes for ASM file +DATA/testgg/onlinelog/group_1.257.742844671, SQL <BEGIN dbms_diskgroup.getfileattr('+DATA/testgg/onlinelog/group_1.257.742844671', :filetype, :filesize, :lblksize); END;>: (6550)
ORA-06550: line 1, column 7:
PLS-00201: identifier 'DBMS_DISKGROUP.GETFILEATTR' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
Not able to establish initial position for begin time 2011-02-16 16:42:05.
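For context, DBMS_DISKGROUP exists only inside an ASM instance, so this PLS-00201 usually means the extract is issuing the call against the database instance instead of ASM. A hedged sketch of the parameter-file line normally involved (the TNS alias and password are illustrative; the alias must point at the ASM instance):

TRANLOGOPTIONS ASMUSER sys@ASM_TNS_ALIAS, ASMPASSWORD asm_password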
How do I extract the data from XML using the XSD file? (Files attached.)
Explanation: first check the EmailMessage tag from order_conf.xml against Email.xml (<xsd:element name="EmailMessage">); if it exists, go to the next node.
-> EmailMessage (this tag exists in the order XML file)
-> next: <ns1:emailNotificationype> should follow under the EmailMessage tag (<xsd:element ref="emailNotificationype">) in Email.xml
-> next: <ns1:orderNotification>: check this tag against <xsd:element name="orderNotification"> in Email.xml
-> next: <ns1:templateFormatInfo>: it should follow under <xsd:element name="orderNotification"> in Email.xml
-> next: <ns1:templateFormatInfo> should follow these tags: <xsd:element name="templateFormatInfo">, <xsd:element ref="templatecode"/>, <xsd:element ref="templateversion"/>
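If the end goal is pulling the values out of the order XML inside Oracle, a hedged sketch with XMLTABLE (the namespace URI is a placeholder and the element nesting is inferred from the description above, including the post's own spelling "emailNotificationype"):

SELECT t.templatecode, t.templateversion
FROM   XMLTABLE(
         XMLNAMESPACES('http://example.com/ns1' AS "ns1"),
         '/ns1:EmailMessage/ns1:emailNotificationype/ns1:orderNotification/ns1:templateFormatInfo'
         PASSING XMLTYPE(:order_conf_xml)
         COLUMNS templatecode    VARCHAR2(64) PATH 'ns1:templatecode',
                 templateversion VARCHAR2(64) PATH 'ns1:templateversion'
       ) t;

Validating the document against the XSD itself would instead use XMLTYPE's schema facilities, which is a separate step from extraction.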
I have a table which has two columns: a date on an hourly basis and a response time. I want to pull the previous day's data hour by hour with the corresponding response time. The data is loaded into the table every midnight.
e.g.: Today's date is 23/10/2012; I want to pull data from 22/10/12 00 to 22/10/12 23.
The query below is pulling the dates as required, but I am not able to pull the response time.
WITH a AS (
  SELECT MIN(TRUNC(lhour)) AS mindate,
         MAX(TRUNC(lhour)) AS maxdate
  FROM   avg_hr
)
SELECT TO_CHAR(maxdate + (LEVEL/25), 'dd/mm/yyyy hh24') AS dates
FROM   a
CONNECT BY LEVEL <= 24;
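A hedged alternative that keeps the response time by filtering the table itself instead of generating the hours (assuming the response-time column is named response_time; the post doesn't give its exact name):

SELECT TO_CHAR(lhour, 'dd/mm/yyyy hh24') AS dates,
       response_time
FROM   avg_hr
WHERE  lhour >= TRUNC(SYSDATE) - 1   -- yesterday 00:00
AND    lhour <  TRUNC(SYSDATE)       -- up to, not including, today 00:00
ORDER  BY lhour;

Because the rows already exist for every hour of the previous day, there is no need to synthesize the hour list with CONNECT BY and then join back.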
I have a requirement to extract the data from a table using the UTL_FILE utilities.
My problem is: say I have a table t1 with columns c1, c2, c3, c4, c5. This table t1 gets loaded every day. I need to pick up only the data which has changed or been inserted in the last load. How can I achieve this? There is no timestamp in this table.
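With no timestamp, one hedged approach is to keep a snapshot copy of the table from the previous load and diff with MINUS. The sketch below (t1_snapshot and the directory object EXTRACT_DIR are hypothetical) writes the delta with UTL_FILE and then refreshes the snapshot:

DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('EXTRACT_DIR', 'delta.csv', 'w');
  -- rows present now but absent from the previous snapshot
  FOR r IN (SELECT c1, c2, c3, c4, c5 FROM t1
            MINUS
            SELECT c1, c2, c3, c4, c5 FROM t1_snapshot)
  LOOP
    UTL_FILE.PUT_LINE(f, r.c1 || ',' || r.c2 || ',' || r.c3 || ',' || r.c4 || ',' || r.c5);
  END LOOP;
  UTL_FILE.FCLOSE(f);
  -- refresh the snapshot for the next run
  EXECUTE IMMEDIATE 'TRUNCATE TABLE t1_snapshot';
  INSERT INTO t1_snapshot SELECT * FROM t1;
  COMMIT;
END;
/

Note this direction of MINUS catches inserts and changed rows but not deletions; diffing the other way as well would be needed if deletes matter.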
SELECT t1.c1 AS gr1,
       t2.c1 AS gr2,
       t1.c2
FROM   test_data t1,
       test_data t2
WHERE  t1.c1 <> t2.c1
AND    t1.c2 = t2.c2
AND    (SELECT COUNT(*) FROM test_data t3 WHERE t3.c1 = t1.c1) =
       (SELECT COUNT(*) FROM test_data t4 WHERE t4.c1 = t2.c1)
ORDER  BY 1 ASC, 2 ASC;
but I don't find a way to re-filter to group the data as expected. The idea is to find the subsets and show, for each set of data, the values of column c1 that share it.
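A hedged sketch of a different angle on that grouping (assuming 11gR2 or later for LISTAGG): build each c1's ordered set of c2 values, then group together the c1 values whose sets are identical:

SELECT c2_set,
       LISTAGG(c1, ',') WITHIN GROUP (ORDER BY c1) AS matching_c1s
FROM  (SELECT c1,
              LISTAGG(c2, ',') WITHIN GROUP (ORDER BY c2) AS c2_set
       FROM   test_data
       GROUP  BY c1)
GROUP  BY c2_set
HAVING COUNT(*) > 1;   -- keep only sets shared by more than one c1

This replaces the self-join plus count comparison with a direct set-equality test: two c1 values land in the same output row exactly when their full c2 lists match.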