I have a schema in which one table is not joined with any other tables.
The information in that table can be obtained manually by running one query and then using the result in another query. Is there a way of getting the information from that table in a single step?
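To make it concrete, a minimal sketch of the two-step pattern collapsed into one statement (LOOKUP_TAB and MAIN_TAB are made-up names standing in for my real tables):

-- the standalone table is read in a subquery instead of a separate manual query
SELECT m.*
  FROM main_tab m
 WHERE m.ref_code IN (SELECT l.ref_code
                        FROM lookup_tab l
                       WHERE l.status = 'ACTIVE');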
Exception:
ERROR at line 1:
ORA-22285: non-existent directory or file for FILEOPEN operation
ORA-06512: at "SYS.DBMS_LOB", line 716
ORA-06512: at "usr.LOAD_CLOB_FROM_XML_FILE", line 39
ORA-06512: at line 1
I don't want to create the directory as such ('/usr/home/oraj/log') because it already exists and all the log files sit there.
CREATE OR REPLACE PROCEDURE Load_CLOB_From_XML_File IS
  dest_clob  CLOB;
  src_clob   BFILE  := BFILENAME('/usr/home/oraj/log/', 'abc.log');
  dst_offset NUMBER := 1;
  src_offset NUMBER := 1;
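For comparison, a minimal sketch of the directory-object approach that DBMS_LOB expects; LOG_DIR is an assumed alias name, and creating it does not create anything new on disk, it only points Oracle at the existing '/usr/home/oraj/log' path:

CREATE OR REPLACE DIRECTORY log_dir AS '/usr/home/oraj/log';
GRANT READ ON DIRECTORY log_dir TO usr;

DECLARE
  dest_clob  CLOB;
  src_bfile  BFILE  := BFILENAME('LOG_DIR', 'abc.log');  -- directory object name, not a filesystem path
  dst_offset NUMBER := 1;
  src_offset NUMBER := 1;
  lang_ctx   NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
  warning    NUMBER;
BEGIN
  DBMS_LOB.CREATETEMPORARY(dest_clob, TRUE);
  DBMS_LOB.FILEOPEN(src_bfile, DBMS_LOB.FILE_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(dest_clob, src_bfile, DBMS_LOB.LOBMAXSIZE,
                            dst_offset, src_offset,
                            DBMS_LOB.DEFAULT_CSID, lang_ctx, warning);
  DBMS_LOB.FILECLOSE(src_bfile);
END;
/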
When I write a dump from an external table, I can access the records from that dump. But when I try to access other dumps (created through expdp), it gives an error. The logic I am following is mentioned below:
CREATE OR REPLACE DIRECTORY "DIR_GMS" AS 'D:\Gopal_works\test_env_files';
GRANT READ ON DIRECTORY dir_gms TO gopal;
GRANT WRITE ON DIRECTORY dir_gms TO gopal;
New point: taking an export through expdp:
expdp hr/hr tables=EMPLOYEES directory=DIR_GMS dumpfile=HR_EMP.dmp logfile=expdpEMP.log
Then I created an EXTERNAL TABLE to access it.
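A sketch of the kind of external table definition I mean (the column list here is only the few EMPLOYEES columns I care about; the real DDL may differ):

CREATE TABLE emp_ext (
  employee_id NUMBER,
  first_name  VARCHAR2(20),
  last_name   VARCHAR2(25)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dir_gms
  LOCATION ('HR_EMP.dmp')
);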
I have a table which has 2 columns. The first column has a userId and the other contains XML data as a link; on clicking that link, a new file opens containing the data in XML format.
I have customized my sqlprompt. I put the code for that in a script p.sql.
Now I log into SQL*Plus from multiple directories, as I have different scripts in different directories. How do I access p.sql when it is not in the current directory?
Also, I wanted to know if I could change directories from the SQL prompt.
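A sketch of what I have in mind, assuming SQL*Plus on a Unix-like shell and that p.sql lives in /home/me/scripts (the path is illustrative):

$ export SQLPATH=/home/me/scripts    # SQL*Plus searches SQLPATH for @scripts
$ cd /any/other/directory
$ sqlplus scott/tiger
SQL> @p
SQL> host pwd

Here @p is resolved through SQLPATH even though the current directory is different, and HOST lets me run OS commands; a "host cd", though, only changes directory in a child shell, not for the SQL*Plus session itself. If the prompt setup lived in a login.sql inside SQLPATH, SQL*Plus would run it automatically at startup.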
I have installed Oracle 11g R2 on Oracle Linux and Oracle Instant Client on Windows 7. I am trying to access the server from the Oracle client but I am getting the following error:
ORA-12560: TNS: protocol adapter error
I have set the below TNSNAMES.ORA file on the client machine and the LISTENER.ORA file on the server:
Pinging between the client and server works fine. I have turned off the firewall on both the client (Windows firewall) and the server (iptables) but got the same problem!
I have also set the ORACLE_HOME, TNS_ADMIN and PATH in the Windows environment variables.
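The files themselves are not reproduced here; for reference, the general shape of the entries I am working with (hostnames, ports, and service names below are placeholders, not my real values):

# tnsnames.ora on the Windows client
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )

# listener.ora on the Linux server
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = linux-server)(PORT = 1521))
    )
  )

From the client I connect with: sqlplus system@ORCL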
We are accessing data from the server ADM.WORLD by using a DBLINK. We got the following error:
PL/SQL: ORA-04052: error occurred when looking up remote object sysadm.PS_HP_INC_ELIG_VW@ADM.WORLD
ORA-00604: error occurred at recursive SQL level 1
ORA-28000: the account is locked
ORA-02063: preceding line from ADM
For that, we checked the account on the server ADM.WORLD and it was showing as locked. After that, we successfully accessed the object sysadm.PS_HP_INC_ELIG_VW@ADM.WORLD.
The next day, the account was locked again. Why does the account keep locking so frequently?
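Roughly the checks involved, run on ADM.WORLD as a DBA (FAILED_LOGIN_ATTEMPTS is the standard profile limit behind ORA-28000 locks caused by failed logins):

SELECT username, account_status, lock_date, profile
  FROM dba_users
 WHERE username = 'SYSADM';

SELECT profile, resource_name, limit
  FROM dba_profiles
 WHERE resource_name = 'FAILED_LOGIN_ATTEMPTS';

ALTER USER sysadm ACCOUNT UNLOCK;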
I am very new to APEX_COLLECTION. I have a problem accessing an APEX_COLLECTION that I created. Below is the PL/SQL code I have written:
declare
  l_query varchar2(200);
  cname   varchar2(300);
Begin
  APEX_COLLECTION.DELETE_COLLECTION(p_collection_name => 'ACTION_NAMES');
  l_query := 'select name from test_table where id in (''n406'', ''n409'', ''d080'', ''o4505'', ''a1593'')';
[code]........
It is throwing the following error on execution:
ORA-01422: exact fetch returns more than requested number of rows
I want to display the names collected using the APEX_COLLECTION package and also use them in further processing within the PL/SQL code block.
Apex info:
Application Express 4.1.0.00.32
DB details - Oracle Database 11g Enterprise Edition Release 11.2.0.1.0
Web server architecture - APEX listener
Browser(s) and version(s) used - Chrome version 24 / Firefox version 3.6 and version 18
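For context, a minimal sketch of the pattern I think I should be following; the loop over APEX_COLLECTIONS stands in for the part I cut out of my code above, and it assumes the block runs inside an APEX session:

DECLARE
  l_query VARCHAR2(2000);
BEGIN
  IF apex_collection.collection_exists(p_collection_name => 'ACTION_NAMES') THEN
    apex_collection.delete_collection(p_collection_name => 'ACTION_NAMES');
  END IF;

  l_query := 'select name from test_table where id in (''n406'',''n409'',''d080'',''o4505'',''a1593'')';

  apex_collection.create_collection_from_query(
    p_collection_name => 'ACTION_NAMES',
    p_query           => l_query);

  -- read every member row rather than a single SELECT INTO (which raises ORA-01422)
  FOR r IN (SELECT c001 AS name
              FROM apex_collections
             WHERE collection_name = 'ACTION_NAMES')
  LOOP
    htp.p(r.name);
  END LOOP;
END;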
I am on 11.2.0.3 Enterprise Edition. We are using the new "Composite Domain Index" feature for a domain index on a very large table (>250,000,000 rows), and it really works well with mixed queries. We added two number columns using FILTER BY. We have lots of DML on this table, therefore we execute synchronize and optimize once a week. The sync behaves pretty normally, but "optimize_index" takes a very, very long time to complete. I have switched on logging for the optimize process. The $I table takes some time but finishes normally; however, the optimization of the $S table (the table created for the CDI feature) has been running for over 12 hours now and is far from being finished. From the logfile, I can see that it optimizes 1000 rows every 20 minutes. Here is the output of the logfile:
Oracle Text, 11.2.0.3.0
14:33:05 06/26/12 begin logging
14:33:05 06/26/12 event
14:33:05 06/26/12 process $N for optimize: SEQDEV.GEN_GES_DESCRIPTION_CTX_I
14:33:16 06/26/12
14:33:16 06/26/12
[code]....
I haven't found a recommendation from Oracle not to use "optimize_index" for Domain Indexes with CDI. But in my case, it would be much faster just to drop and recreate the Domain Index in question.
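For reference, the weekly maintenance we run looks roughly like this (the index name is the one from the log; the FULL optimization level shown here is an assumption about our job):

BEGIN
  ctx_ddl.sync_index(idx_name => 'SEQDEV.GEN_GES_DESCRIPTION_CTX_I');
  ctx_ddl.optimize_index(idx_name => 'SEQDEV.GEN_GES_DESCRIPTION_CTX_I',
                         optlevel => ctx_ddl.optlevel_full);
END;
/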
I have 3 Oracle 10.2.0.5 Standard Edition databases running on the Windows platform on 3 different servers. OEM is configured for all 3 databases and we are able to access these OEM consoles from their respective servers.
As per my knowledge, I should be able to access the OEM consoles of all 3 databases from my local machine, but I am facing a problem accessing them from my local machine.
What changes need to be done so that I can access the OEM consoles of all 3 databases from one single local machine rather than checking them by logging into their respective servers?
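What I do on each server today, roughly (host names and the 1158 port are examples; the real port for each instance is listed in that server's portlist.ini):

  set ORACLE_SID=ORCL1
  emctl status dbconsole     (prints the Database Control URL for that instance)

Then, from the browser on that same server, I open the printed URL, e.g. https://dbserver1:1158/em. What I want is to open those same three URLs from my local machine instead.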
The database is running in archivelog mode and we have a standby with maximum performance. There is no RMAN backup. We have noticed block corruption while accessing some tables. Now I would like to know: are the corrupted blocks also replicated to the physical standby? Is there a way to recover the data from these corrupted blocks without shutting down the database?
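A sketch of the kind of checks involved on our side (the datafile path, block size, and file/block numbers are examples, not our real ones):

dbv FILE=/u01/oradata/prod/users01.dbf BLOCKSIZE=8192

-- map a corrupted block reported by ORA-01578 (file#, block#) to its segment
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = 7
   AND 12345 BETWEEN block_id AND block_id + blocks - 1;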
I have installed Oracle 10g on one system and Oracle Developer on another machine, meaning I have different machines for the DB server and the application server. It all works excellently inside the company premises, but if I want to access my Oracle DB and application server from outside the company, it gives me a problem. How do I access the application (Forms and Reports) remotely from outside the company, against the same DB?
We have been upgrading our servers to Server2008 and are getting..
[ORA-3134: Connections to this server version are no longer supported.]
..using the drivers we used to use on XP and Server2003 to access a legacy Oracle7 DB. Connections to this DB are needed for typical CRUD functionality by multiple applications, some written in Classic ASP and some in C# .NET 3.5 & 4.0. I have tried ODBC drivers (System.Data.Odbc) and also ODP (Oracle.DataAccess.Client) to no avail.
Is there any existing driver solution to make this connection without having to resort to a custom HLI interface?
I would think we aren't the only ones needing to access Oracle7 from Server2008.
We have a scenario where Oracle Database R1 is installed on a Windows system and the client is on Linux. Both systems are on the same network. We can access Windows-to-Windows using a TNS entry.
I am unable to connect in this scenario where the database is on Windows and the client is on Linux.
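The kind of connection I am attempting from the Linux client, as a sketch (host, port, and service name are placeholders):

# EZCONNECT, no tnsnames.ora entry needed on the client
sqlplus scott/tiger@//winhost.example.com:1521/orcl

# or through an alias defined in the Linux client's tnsnames.ora
sqlplus scott/tiger@ORCL_WIN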
I have a host system running Windows 8, with the latest edition of VirtualBox installed. The guest OS is the Oracle Linux that came with the pre-built Developer VMs from Oracle. The VM is set to the HOST ONLY network setting. Now, the thing is that I want to access the Oracle database in the guest machine from my Windows host machine. The guest OS shows the IP address as 192.168.56.101. I can ping that IP address from the host machine, but I cannot connect SQL Developer to the Oracle instance in the guest OS from the host machine; it keeps saying the network adapter cannot connect. Please do not say to install Oracle on Windows itself; I actually want to do the same on my MacBook Pro, as that is my primary machine.
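What I am trying from the host, and the check I run on the guest, as a sketch (the credentials and the service name orcl are placeholders; the pre-built VM may register a different service):

# on the guest: confirm the listener is up and note the services it registers
lsnrctl status

# from the host: the equivalent of the SQL Developer connection
#   hostname 192.168.56.101, port 1521, service orcl
sqlplus system/oracle@//192.168.56.101:1521/orcl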
In a trigger (on update of a table t1) that I am trying to write, I am doing an insert into t2, accessing the ':new' values of the update on t1.
But in my insert statement, I have to get one of the column values from another table. How can I write my insert statement so that it inserts the values contained in the ':new' pseudo-columns together with a value selected from another table? Below is my insert statement in the trigger: -------
IF (:old.GROUP_YELLOW <> :new.GROUP_YELLOW) THEN
  INSERT INTO TEST.W_THRESHOLD_LOG
    (THRESHOLD_LOG_WID, CHANGE_DATE, MEASURE_TYPE_WID, MEASURE_NAME,
     CUSTOMER_WID, CUSTOMER_NAME, USER_ID, CHANGED_ITEM, PREV_VALUE, NEW_VALUE)
  VALUES
    (TEST.W_THRESHOLD_LOG_SEQ.NEXTVAL, SYSDATE, :new.MEASURE_TYPE_WID, 'Rolling Stabilty',
     :new.CUSTOMER_WID, 'Customer1', 'User1', 'GROUP_YELLOW',
     :old.GROUP_YELLOW, :new.GROUP_YELLOW);
END IF;
-------
In the above code, the hardcoded value 'Customer1' needs to be picked from another table, i.e.
SELECT NAME FROM W_CUSTOMER_DIM WHERE CUSTOMER_WID = THRESHOLD.CUSTOMER_WID
How can I rewrite my query to get the above value from the select into my insert statement?
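A sketch of the rewrite I have in mind, folding the lookup into the insert with INSERT ... SELECT and correlating on :new.CUSTOMER_WID (table and column names as above):

IF (:old.GROUP_YELLOW <> :new.GROUP_YELLOW) THEN
  INSERT INTO TEST.W_THRESHOLD_LOG
    (THRESHOLD_LOG_WID, CHANGE_DATE, MEASURE_TYPE_WID, MEASURE_NAME,
     CUSTOMER_WID, CUSTOMER_NAME, USER_ID, CHANGED_ITEM, PREV_VALUE, NEW_VALUE)
  SELECT TEST.W_THRESHOLD_LOG_SEQ.NEXTVAL, SYSDATE, :new.MEASURE_TYPE_WID, 'Rolling Stabilty',
         :new.CUSTOMER_WID, d.NAME, 'User1', 'GROUP_YELLOW',
         :old.GROUP_YELLOW, :new.GROUP_YELLOW
    FROM W_CUSTOMER_DIM d
   WHERE d.CUSTOMER_WID = :new.CUSTOMER_WID;
END IF;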
I have a huge table (about 60 GB) partitioned by range. The index on this table is a global index created on 4 columns together. I have a query which is running very slowly. The explain plan shows the use of this global index. The explain plan does not show Pstart and Pstop because the index is global.
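For reference, how I am looking at the plan (the statement shown is only a placeholder for my real query):

EXPLAIN PLAN FOR
  SELECT *
    FROM my_part_table
   WHERE c1 = :b1 AND c2 = :b2 AND c3 = :b3 AND c4 = :b4;

SELECT * FROM TABLE(dbms_xplan.display);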
I am facing the error "ORA-01502: index or partition of such index is in unusable state" while loading text data using SQL*Loader with the direct path option (direct=y, rows=10000). The table has a composite non-unique index. If I query dba_indexes for the affected index, it shows the index status as VALID. There was no maintenance done on the affected table or index. I have tried loading the same data using the conventional path but didn't find any issues with it.
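The checks I am running (the index name is a placeholder); in case the index turns out to be partitioned, dba_indexes alone would not show an unusable partition:

SELECT index_name, status
  FROM dba_indexes
 WHERE index_name = 'MY_COMPOSITE_IDX';

SELECT index_name, partition_name, status
  FROM dba_ind_partitions
 WHERE index_name = 'MY_COMPOSITE_IDX'
   AND status <> 'USABLE';

SELECT index_name, partition_name, subpartition_name, status
  FROM dba_ind_subpartitions
 WHERE index_name = 'MY_COMPOSITE_IDX'
   AND status <> 'USABLE';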
where @var is user-supplied input at runtime... We had an index on a.c2. The CBO would use this index to generate an optimised query plan. We found some records from table "b" were being dropped due to the inner join, so we made a change to the join. It is now like:
a.c1(+)=b.c1 and nvl(a.c2,@var)=@var
This query is no longer using the index; instead it is doing a full table scan, causing the query to slow down. I have tried creating an index on nvl(a.c2,'31-dec-9999').
But the CBO won't use it. Is there any way to create an index on this column so that the full table scan can be avoided?
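The function-based index I tried, roughly (table and column names abbreviated as in the predicate above; '31-dec-9999' is written as an ANSI date literal here):

CREATE INDEX a_c2_fbi
    ON a (NVL(c2, DATE '9999-12-31'));

As far as I understand, the CBO only considers such an index when the query text contains the identical expression, i.e. a predicate of the form NVL(a.c2, DATE '9999-12-31') = @var rather than NVL(a.c2, @var) = @var, which is why I am not sure an index alone can avoid the full scan.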
We have occurrences of "enq: TX - index contention" waits in the database. Using the SQL ID, we have identified the INSERT statement and the table they are inserting into.
This table has almost 25 different indexes, some of which are unique as well. I am wondering how to identify the actual index causing the issue, out of these 25 indexes.
Is there any way to pinpoint the name of the index which is causing the contention? My plan is, once the index is identified, to check the extents, INITRANS, and other attributes of this index to fix it.
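A sketch of the lookup I have in mind, mapping the waiting sessions' current object to an index name via ASH (this assumes the Diagnostics Pack is licensed; the one-hour window is an example):

SELECT o.owner, o.object_name, o.object_type, COUNT(*) AS waits
  FROM v$active_session_history ash
  JOIN dba_objects o
    ON o.object_id = ash.current_obj#
 WHERE ash.event = 'enq: TX - index contention'
   AND ash.sample_time > SYSDATE - 1/24
 GROUP BY o.owner, o.object_name, o.object_type
 ORDER BY waits DESC;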