My requirement is that while sending a data file from Oracle to the mainframe, the first 3 bytes of the header row should contain low-values and the trailer should contain high-values.
How can I pass Oracle values to the mainframe as high-values and low-values?
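A minimal sketch, assuming the file is transferred in binary so no character-set conversion touches the bytes: COBOL LOW-VALUES are bytes of x'00' and HIGH-VALUES are bytes of x'FF', which Oracle can emit with CHR() in a single-byte database character set (the literal header/trailer text here is made up):

-- header row: first 3 bytes are low-values (x'00')
SELECT CHR(0) || CHR(0) || CHR(0) || 'HEADER-REST' FROM dual;
-- trailer row: first 3 bytes are high-values (x'FF')
SELECT CHR(255) || CHR(255) || CHR(255) || 'TRAILER-REST' FROM dual;

If the database character set is multi-byte, UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('000000')) and HEXTORAW('FFFFFF') are a safer way to produce the exact bytes.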
I have a table with zip codes and their plus-four values. For example, zip code 10000 has corresponding plus-four values of 001, 002, 003, and 008, 009, 010. The issue is exactly that: a zip code can have sequential plus-four values, then skip several potential plus-four values, and then start again. I would like to assign a low plus-four value and a high plus-four value to each zip code, keeping in mind that the plus-four values are not always sequential. The result would look similar to this:
zip    plus4 low  plus4 high
10000  001        003
10000  008        010
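This is a classic gaps-and-islands problem. A sketch using the Tabibitosan method, assuming a table ZIP_PLUS4(zip, plus4) with plus4 stored as a zero-padded string: subtracting a row number from the numeric plus4 value yields a constant within each consecutive run, so grouping on that constant isolates each range:

SELECT zip,
       MIN(plus4) AS plus4_low,
       MAX(plus4) AS plus4_high
FROM  (SELECT zip, plus4,
              TO_NUMBER(plus4)
                - ROW_NUMBER() OVER (PARTITION BY zip ORDER BY TO_NUMBER(plus4)) AS grp
       FROM   zip_plus4)
GROUP BY zip, grp
ORDER BY zip, plus4_low;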
I declare a cursor on a table with 8000 records. When I fetch from the cursor, the message ORA-03113: end-of-file on communication channel appears when the fetch reaches the 6500th element. What is the problem here? All the data are fine, with no null fields.
Another problem:
How can I reuse a cursor? I declare a cursor on a table where code_part = 300, fetch all elements until the end of the cursor, and then close the cursor. Then I declare the same cursor again for the same table, this time where code_part = 359, but it does not work: when I try to fetch from the cursor again, the COBOL program shows me the last record from the first code. How can I restart the cursor, or delete it, or free the cursor position?
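A sketch of one way around this, shown in PL/SQL terms (the same idea applies to an embedded-SQL cursor declared with a host variable): parameterize the cursor instead of hard-coding the literal, and re-OPEN it with the new value; each OPEN resets the result set and the fetch position. Table and column names here are made up:

DECLARE
  CURSOR c_parts (p_code NUMBER) IS
    SELECT part_no, part_name FROM parts WHERE code_part = p_code;
  r_part c_parts%ROWTYPE;
BEGIN
  OPEN c_parts(300);              -- first pass
  LOOP
    FETCH c_parts INTO r_part;
    EXIT WHEN c_parts%NOTFOUND;
  END LOOP;
  CLOSE c_parts;

  OPEN c_parts(359);              -- re-opening resets the cursor position
  LOOP
    FETCH c_parts INTO r_part;
    EXIT WHEN c_parts%NOTFOUND;
  END LOOP;
  CLOSE c_parts;
END;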
I have a priority column (possible values are 1 or 0) in a table.
I need to get 70% high (1) and 30% low (0), and the maximum I can fetch per select is 50 records.
Eg 1: if I have 60 in total, of which 20 are high and 40 low, then 70% of 20 = 14, and the remainder should be taken from low, i.e. 36 from low, so the total will be 50 transactions.
Eg 2: if I have 60 in total, of which 40 are high and 20 low, then 70% of 40 = 28, and the remainder should be taken from low, i.e. 22 from low.
Eg 3: if I don't have any high, then the total should be picked from low, and vice versa.
I have the query below, but it has a problem when there are no low-priority rows.
SELECT id, pri FROM temp tbl
WHERE pri = '1' AND ROWNUM < ((70/100)*50)+1
UNION ALL
SELECT * FROM temp
WHERE pri = '0'
AND ROWNUM < 50 - (SELECT COUNT(*) FROM temp
                   WHERE pri = '1' AND ROWNUM < ((70/100)*50)+1)
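A sketch that handles all three cases, following the examples (the high quota is 70% of the available high rows, low rows fill up to 50, and leftover high rows are used if low runs short); the table name TEMP_TBL and ordering by ID are assumptions:

WITH ranked AS (
  SELECT id, pri,
         ROW_NUMBER() OVER (PARTITION BY pri ORDER BY id) AS rn,
         COUNT(*)     OVER (PARTITION BY pri)             AS cnt
  FROM   temp_tbl
)
SELECT id, pri
FROM  (SELECT id, pri,
              CASE
                WHEN pri = '1' AND rn <= FLOOR(cnt * 0.7) THEN 1  -- high quota first
                WHEN pri = '0'                            THEN 2  -- fill from low
                ELSE 3                                            -- leftover high if low runs short
              END AS pick_order,
              rn
       FROM   ranked
       ORDER  BY pick_order, rn)
WHERE ROWNUM <= 50;

Because every row gets a pick_order, the query degrades gracefully: with no high rows it returns only low rows, and vice versa.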
I have a setup of a two-node (prod-db1, prod-db2) clustered database, 11gR2 on Windows 2008 R2 Server. Everything is working fine in this setup.
My question is: is there a way to make Enterprise Manager Database Control run and be available on both nodes independently? What I see is that even on node 2 (prod-db2), the EM DB Control URL is https://prod-db1:1158/em, which means the agent is running on node 1 (prod-db1) only.
So how can I make EM DB Control also run separately on prod-db2? My idea is to make EM DB Control highly available (in case the prod-db1 machine goes down).
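DB Control placement in a cluster is managed with the emca utility; a sketch along these lines nominates a node to run the console (the instance SIDs are made up, and the exact flags should be verified against emca -help for your release):

emca -reconfig dbcontrol -cluster -EM_NODE prod-db2 -EM_SID_LIST orcl1,orcl2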
I am in the very early planning stages of a project whose goal is to identify separate organizations which may in fact be the same organization.
Our first implementation of this task was a process designed to look for a few thousand organizations in a pool of a few hundred thousand organizations. To accomplish this we made heavy use of Oracle Text indexes, as well as a custom index type we created which utilized n-grams. This approach worked quite well for on-demand editing of the organizations, where a user might log in and say "in addition to what we already know about organization A, we also know x, y, and z; does that change anything?", and it worked acceptably well for the bulk processing we did on our "known" information once a week, running for a couple of hours on the weekend.
We have now been tasked with reworking this initial implementation, except now we want to look at a set consisting of several million organizations for potential matches existing within the set. As in our initial implementation, we will be breaking what we know about organizations into groupings, so we aren't comparing a phone number to an email address, and normalizing the data as much as we can, so we ignore things like case and punctuation. Even after all this, we are still talking about looking for similar values in a group which might number in the tens of millions (some types of data will have more than one value per organization).
My initial thought on the problem is to use n-grams, though not in the way we did in the past. The basic idea is to break each search value into all the substrings it is made of and look for other values which have a high number of those substrings in common.
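For instance, a minimal sketch of trigram generation in plain SQL (the value and column name are hypothetical); the similarity score between two values is then the number of trigrams they share:

-- break a normalized value into its 3-character substrings
SELECT SUBSTR(org_name, LEVEL, 3) AS trigram
FROM  (SELECT 'ACMECORP' AS org_name FROM dual)
CONNECT BY LEVEL <= LENGTH(org_name) - 2;

Candidate pairs are then values sharing at least k trigrams, which a self-join on a trigram table with HAVING COUNT(*) >= k can find without comparing every value to every other value.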
I am not sure SQL & PL/SQL is the best place for this question, but I could not think of a better one.
I am facing a performance issue where the query cost is very low compared to the CPU cost, and as a result CPU usage always graphs high. I am also attaching the gv$sql and gv$sql_plan data for this query.
This is the query:
SELECT PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID,
       SUM(PREVIOUSHOURS) AS PREVIOUSHOURS,
       MIN(STARTDATE) AS STARTDATE,
       MAX(STARTDATE) AS ENDDATE
FROM   PROJECTTIMELOGSSTAGE PTLS, PROJECTTIMESHEETITEM PTSI, TIMESHEET TS
WHERE  PTLS.PROJECTID = :B2
AND    TS.TIMESHEETID = PTSI.TIMESHEETID
AND    TS.USERID = :B1
AND    PTSI.TIMESHEETID = PTLS.TIMESHEETID
AND    PTSI.ITEMTYPE = PTLS.ITEMTYPE
AND    PTSI.ITEMID = PTLS.ITEMID
AND    (PTSI.ISPWFITEM = 'N' OR PTSI.ISPWFITEM IS NULL)
AND    PTLS.ITEMTYPE NOT IN ('OtherTsk','NewTsk','Loc','Glb')
AND    (PTLS.ITEMTYPE, PTLS.ITEMID) IN
       (SELECT ITEMTYPE, ITEMID
        FROM   PROJECTTIMELOGSSTAGE PTLS1
        WHERE  PTLS1.PROJECTID = :B2
        AND    PTLS1.TIMESHEETID = :B3)
GROUP BY PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID
I am trying to compare the ranges of low pair and high pair: if they are within the range, then source_conn_id should remain the same; otherwise it should be updated to NULL, which I have written in the ELSE block. How can I implement the IF block, and what should I write in it so that source_conn_id remains the same?
SQL> CREATE OR REPLACE PROCEDURE fp_complements_src(p_id VARCHAR2, ftr_con_id VARCHAR2)
AS
BEGIN
  FOR i IN (SELECT source_conn_id, low_pair, high_pair
            FROM comp_temp1
            WHERE source_conn_id = ftr_con_id)
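A heavily hedged sketch of a completed version: the post doesn't show how the reference range is derived, so this assumes p_id identifies a reference row in COMP_TEMP1 whose LOW_PAIR/HIGH_PAIR define the range to test against. The key point is that the "keep it the same" branch needs no action at all, so a bare NULL statement is enough (or invert the condition and keep only the UPDATE):

CREATE OR REPLACE PROCEDURE fp_complements_src(p_id IN VARCHAR2, ftr_con_id IN VARCHAR2)
AS
  v_low  comp_temp1.low_pair%TYPE;
  v_high comp_temp1.high_pair%TYPE;
BEGIN
  -- assumption: p_id's row supplies the reference range
  SELECT low_pair, high_pair INTO v_low, v_high
  FROM comp_temp1 WHERE source_conn_id = p_id;

  FOR i IN (SELECT source_conn_id, low_pair, high_pair
            FROM comp_temp1
            WHERE source_conn_id = ftr_con_id)
  LOOP
    IF i.low_pair >= v_low AND i.high_pair <= v_high THEN
      NULL;  -- within the range: leave source_conn_id as it is
    ELSE
      UPDATE comp_temp1
         SET source_conn_id = NULL
       WHERE source_conn_id = ftr_con_id
         AND low_pair  = i.low_pair
         AND high_pair = i.high_pair;
    END IF;
  END LOOP;
END fp_complements_src;
/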
We had a massive jump in cluster waits, specifically gc buffer busy acquire, during an RMAN backup. We traced the waits to a few hot blocks in a table that may well need rebuilding in terms of ITLs and PCTFREE (although I thought ASSM would manage PCTFREE and PCTUSED...).
What happens during an RMAN backup that might cause huge cluster waits on hot tables? Is there some crazy redo issue going on, or maybe flashback?
Either way, the spike is there and we can pinpoint the activity on the database, but we just don't understand why RMAN would cause the issue. We have just found that LARGE_POOL_SIZE has not been set; in fact there is no SGA_TARGET either! Could this have an effect on RMAN and cluster waits?
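For what it's worth, a sketch of checking and setting the large pool (RMAN allocates its I/O buffers from the large pool when I/O slaves are in use, and from the shared pool otherwise; the 512M figure is an arbitrary example, not a recommendation):

SHOW PARAMETER large_pool_size

ALTER SYSTEM SET large_pool_size = 512M SCOPE=BOTH SID='*';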
I am facing a very strange issue with one of our Oracle queries. The query usually completes in a minute or two. The execution plan of the query is good, and it works perfectly most of the time, as expected. The query fetches about 1000-2000 records each day.
But on a given day, the query takes about 30-40 minutes to execute completely. Upon checking the load on the DB server, there are no other processes running that could impact the run time of this query. Moreover, the record counts fetched are almost the same as on other days. There is no pattern to when this phenomenon occurs; it happens once in a while.
The configuration is Oracle 10g in a RAC environment on Linux.
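If AWR data is available (this requires the Diagnostics Pack license), a sketch for checking whether the plan or the work done changes on the slow days; &sql_id is a placeholder for the query's SQL_ID:

SELECT snap_id,
       plan_hash_value,
       executions_delta,
       buffer_gets_delta,
       ROUND(elapsed_time_delta / 1e6) AS elapsed_secs
FROM   dba_hist_sqlstat
WHERE  sql_id = '&sql_id'
ORDER  BY snap_id;

A different PLAN_HASH_VALUE on the slow days would point to plan instability (e.g. bind peeking); the same plan with a jump in BUFFER_GETS_DELTA would point to a data or caching effect instead.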
We have a large table of equally sized data blobs in our Oracle system, and we'd like to select the whole table into memory at once. The corresponding tablespace is stored on fast SSD disks and is managed by ASM. However, the achievable select speed (reading data into memory) of Oracle is not satisfactory. When we store the data on the SSD using custom methods (e.g. in SQLite DB files) and load them into memory with multithreading (8 threads), the speed is more than 15 times that of Oracle.
Is there any way to optimize Oracle and ASM to speed up full-table selects in our case? We tried FULL table scan and PARALLEL hints, and DB_FILE_MULTIBLOCK_READ_COUNT too, but with no success.
FYI: each data blob is about 4.2 kB and DB_BLOCK_SIZE is 8 kB. The ASM segment is configured for auto-allocate. The table is partitioned by HASH. Our Oracle system is not RAC.
When I was analyzing a high CPU utilization issue, I saw that most of the top PIDs were INACTIVE in the database, yet each was utilizing more than 4% CPU. How can a session utilize CPU without doing any work in the database?
Is there any way to tune the following query, which is using a lot of CPU?

select description, time_stamp, user_id
from bhi_tracking
where description like 'Multilateral:%'

The explain plan for this query is:
Bhi_tracking is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then re-insert the rows into it, to enhance performance?
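Since the predicate is a leading-prefix LIKE, a plain B-tree index on the column can support it with an index range scan; a sketch (the index names are made up):

CREATE INDEX bhi_tracking_desc_ix ON bhi_tracking (description);

-- or include the selected columns so the query never touches the table:
CREATE INDEX bhi_tracking_cov_ix ON bhi_tracking (description, time_stamp, user_id);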
I am using Oracle 11g on Windows 2003 Server, and in Task Manager, tnslsnr.exe is consuming approximately 1 GB of RAM. After restarting the listener it came down to a few MB (20-30), but again after a few days it rises and consumes a lot of memory.
Redo generation is very high. How do I find out the reason? The database is kept under a 2-node cluster. I checked the alert log trace and log writer trace files and have pasted the content below:
--alert log trace from node 1 (node 2 also has the same type of message). The archive destination disk group, TXCOM_BACKUP_01, has enough space (80 GB).
Mon Jan 7 00:49:10 2013
Thread 1 advanced to log sequence 448546 (LGWR switch)
  Current log# 1 seq# 448546 mem# 0: +TXCOM_DATA_01/txcom/onlinelog/group_1.274.785770579
  Current log# 1 seq# 448546 mem# 1: +TXCOM_DATA_01/txcom/onlinelog/group_1.302.802265189
Mon Jan 7 00:49:10 2013
[code]...
In the alert log, I can see the archive destination disk group (TXCOM_BACKUP_01) getting DISMOUNTED and then MOUNTED again during every archive file generation:
Mon Jan 7 00:49:20 2013
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
The archive destination parameter on both nodes is not configured. It should read the disk group name (+TXCOM_BACKUP_01) and a corresponding size limit. Should I configure this? A sketch of the two usual options is below.
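A sketch of pointing archiving at the disk group explicitly; SCOPE and SID='*' assume an spfile shared by both instances, and the 80G simply mirrors the space mentioned above, so adjust as needed:

ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+TXCOM_BACKUP_01' SCOPE=BOTH SID='*';

-- or use the fast recovery area instead (the size must be set first):
ALTER SYSTEM SET db_recovery_file_dest_size = 80G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest = '+TXCOM_BACKUP_01' SCOPE=BOTH SID='*';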
SQL> show parameter db_recovery
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer 0
[code]...
Should I bring the database to the MOUNT stage and set log_archive_max_processes to a higher count? The current value is 2 (the default).
I have deleted a lot of records in a table. Would Oracle be able to insert into the empty blocks generated by the deletion without bringing the high water mark down?
I have 15 million records in a CSV that I want to load through SQL*Loader. Is SQL*Loader the right option for loading a high volume of data? I loaded 250,000 (2.5 lakh) records and it took 4 minutes.
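For volumes like this, a direct-path load usually helps considerably; a sketch with hypothetical file, table, and column names:

-- load.ctl
OPTIONS (DIRECT=TRUE, ERRORS=1000)
LOAD DATA
INFILE 'data.csv'
APPEND
INTO TABLE target_tbl
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)

invoked as:

sqlldr userid=user/pass control=load.ctl log=load.log

Direct path writes formatted blocks above the high water mark and bypasses the buffer cache, which is typically much faster than a conventional-path load for millions of rows.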
I have two tables with the same columns (15 of them). I am trying to find the difference between the two tables using the MINUS operator and then insert it into a stage table using the code below.
The issue is that table1 has 50 million records and table2 is empty,
so the first time we execute this, the v_collection1 and v_collection2 collections will hold 50 million records in memory. I think this is not good, because holding them in memory will eat memory and resources during sorting and other activities.
After fetching the records into the collections we insert them into the stage table and then COMMIT, and I think that won't be good either, because committing 50 million rows will generate a large amount of redo.
Below is a snippet of my code:
DECLARE
  TYPE lst_collection1 IS TABLE OF table1.col1%TYPE INDEX BY BINARY_INTEGER;
  TYPE lst_collection2 IS [code].......
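A set-based alternative sketch that avoids the collections entirely: a single direct-path insert of the MINUS result keeps the rows out of PL/SQL memory, and if the stage table is set to NOLOGGING the APPEND hint also minimizes redo (standard caveats about recoverability in ARCHIVELOG mode apply):

INSERT /*+ APPEND */ INTO stage_table
SELECT * FROM table1
MINUS
SELECT * FROM table2;

COMMIT;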
I am having performance issues with a query in a production environment that I cannot replicate in our test environment. Our test environment is an import of production. Version information:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for Solaris: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production

When I run the query in test, I get the following results using TKPROF [code]....
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS

This performance problem started a few weeks ago, and the problem seems to be the high number of consistent reads during the fetch phase. The DBA tried restarting the instance and gathering fresh stats on the tables, but that has not made a difference. Now he wants to export the tables, drop the schema, and re-import.
I am trying to discern the difference between SHRINK and MOVE and their impact on the high water mark of a table.
My understanding is that MOVE in effect rewrites every row of a table (hence why it can deal with row chaining), whereas SHRINK basically moves existing rows 'down' the table into any available free space. This is why MOVE takes a table lock whereas SHRINK takes row locks. What I am trying to discern is: do MOVE and SHRINK affect the high water mark, and do both deallocate space and give it back to the tablespace's free space? I believe MOVE does reduce the HWM and gives the freed space back to the tablespace. I am not so sure about SHRINK.
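For reference, a sketch of both operations (table and index names hypothetical). SHRINK SPACE without the COMPACT keyword both compacts the segment and lowers the HWM, returning space to the tablespace, while SHRINK SPACE COMPACT only does the row movement and leaves the HWM where it is:

-- MOVE rebuilds the segment (resets the HWM) but invalidates indexes
ALTER TABLE t MOVE;
ALTER INDEX t_pk REBUILD;

-- SHRINK works online; requires ASSM and row movement
ALTER TABLE t ENABLE ROW MOVEMENT;
ALTER TABLE t SHRINK SPACE COMPACT;  -- step 1: move rows, HWM unchanged
ALTER TABLE t SHRINK SPACE;          -- step 2: lower the HWM, release space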
I have a RAC with 48 cores on Solaris. I check DB Console when application performance is very slow and everyone complains, and I see that the main wait is CPU, also on the AWR report. However, when I check the server CPU I see about 80% idle! So how can I make Oracle use more CPU power instead of waiting for it? I don't think parallelism is an option here, because I can't change the application code.
I have one tablespace called U01. This tablespace contains 31 data files. Due to the high water mark I was unable to shrink most of the datafiles. Since my database is running an on-air application, they will not give me downtime to move the tables. Is there any way to fix the high water mark without a downtime window? Almost 700+ GB of space is unused, and I need to reuse it ASAP because we are running out of space in the ASM disk group.
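A sketch for locating the HWM of each datafile, i.e. its highest allocated block, to see how far each file could be resized (this only reports; it changes nothing):

SELECT f.file_name,
       ROUND(f.bytes / 1024 / 1024)                         AS size_mb,
       ROUND(MAX(e.block_id + e.blocks - 1)
             * ts.block_size / 1024 / 1024)                 AS hwm_mb
FROM   dba_data_files  f
JOIN   dba_extents     e  ON e.file_id = f.file_id
JOIN   dba_tablespaces ts ON ts.tablespace_name = f.tablespace_name
WHERE  f.tablespace_name = 'U01'
GROUP  BY f.file_name, f.bytes, ts.block_size;

Each file can then be shrunk online, with no downtime, using ALTER DATABASE DATAFILE '...' RESIZE down to just above its HWM; files whose HWM sits near the end cannot be shrunk until the segments at the top are moved or shrunk.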
I have created 5 materialized views with the REFRESH FORCE option.
The refresh is done every 10 minutes. My destination database is MySQL. The DB link is created in Oracle 10g, and the materialized views are created using the DB link.
Since the materialized view refresh runs every 15 minutes, my redo generation has gone for a toss: the system is generating 1 redo file per minute, and the size of each redo log is 500 MB. Since the materialized views are built on MySQL tables, the system does a complete refresh every time, which causes more and more redo to be generated.
Is there any setting, option, or solution to reduce this high redo generation?
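One commonly used option (a sketch; the MV name is hypothetical) is a non-atomic complete refresh, which truncates the MV and reloads it with a direct-path insert instead of the default delete-plus-insert, generating far less redo and undo:

BEGIN
  DBMS_MVIEW.REFRESH(list           => 'MY_MV',
                     method         => 'C',      -- complete refresh
                     atomic_refresh => FALSE);   -- truncate + direct-path insert
END;
/

The trade-off is that with atomic_refresh => FALSE the MV is briefly empty while the refresh runs, so queries against it during that window see no rows.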