I'm in a bit of a pickle with some SQL syntax, and while my Oracle-fu is weak, an associate with SQL skills is also stuck. I am developing a database for my department that is backed by Oracle Enterprise 11g; the front end is ChemAxon (basically, chemistry database software). The short of it is that I have one large table with every compound from different groups. Each row has a flag (column PI) that indicates which group it belongs to (let us say 'Smith' and 'Jones' for example). The software can apply row-level filtering, which will basically only show the rows that a particular username is allowed to see.
I have a FLAG_TABLE table, which contains two columns: USERNAME and FLAG. An example setup is this:
USERNAME FLAG
------------- ----------
smith_minion Smith
jones_minion Jones
jones_minion Smith
The software automatically applies a SQL filter that begins with:
SELECT DISTINCT CHEMAXON.STRUCTURES."ID" FROM CHEMAXON.STRUCTURES WHERE
I can set up filtering to work dandy, such that when smith_minion logs in, he can only see rows with the Smith flag (or jones_minion can see both Jones and Smith) by using the filter:
STRUCTURES.PI IS NULL or STRUCTURES.PI IN(SELECT ALL FLAG from FLAG_TABLE where USERNAME = '__IJC_USERNAME__')
('__IJC_USERNAME__' is how the third party software passes the logged in username into SQL)
But we have a new problem: there also exists a master user (chemaxon) who needs to see every row no matter what the flag. The row filtering is applied no matter who logs on, so we need to set up the SQL filter to basically say "If chemaxon, then select all rows, otherwise, select rows based on the username". This is proving a problem as the select statement must be prefaced with SELECT DISTINCT CHEMAXON.STRUCTURES."ID" FROM CHEMAXON.STRUCTURES WHERE.
I've tried using DECODE in a few capacities, but I am always thwarted. My last attempt was:
STRUCTURES.PI IS NULL or STRUCTURES.PI IN (DECODE('__IJC_USERNAME__','chemaxon',(!='something'),(SELECT ALL FLAG from FLAG_TABLE where USERNAME = __IJC_USERNAME__));
But this throws an ORA-00936 missing expression error, with a * under the != portion (I test it by replacing '__IJC_USERNAME__' with 'CHEMAXON').
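For what it's worth, a sketch that avoids DECODE entirely is to add the master-user check as a separate OR branch (assuming the master account is literally named 'chemaxon'):
-------------------------------------------
STRUCTURES.PI IS NULL
OR '__IJC_USERNAME__' = 'chemaxon'
OR STRUCTURES.PI IN (SELECT FLAG FROM FLAG_TABLE WHERE USERNAME = '__IJC_USERNAME__')
-------------------------------------------
When chemaxon logs in, the second predicate is true for every row, so all rows pass; for anyone else, the IN subquery does the filtering as before.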
I have a 3rd party tool that runs SQL queries. I need to see what queries the tool is running and capture them. I enabled tracing but that's not working: the tool doesn't hold a persistent connection; it connects only when it has to run a query and then disconnects.
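One sketch that works around the connect-run-disconnect pattern is a database-level logon trigger that switches tracing on just for that tool's sessions (the module name 'MyTool' is an assumption; check V$SESSION.PROGRAM/MODULE for the real value):
-------------------------------------------
CREATE OR REPLACE TRIGGER trace_tool_logon
AFTER LOGON ON DATABASE
BEGIN
  -- enable extended SQL trace only for sessions from the tool (name is hypothetical)
  IF SYS_CONTEXT('USERENV', 'MODULE') LIKE '%MyTool%' THEN
    DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
  END IF;
END;
/
-------------------------------------------
Each short-lived session then leaves its own trace file in USER_DUMP_DEST, no matter how briefly it was connected.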
I have a third-party reporting system in which I cannot avoid the time selection on the screen. So whether a user selects date and time or date alone, I should get the value back in date format only, i.e. (DD-MM-YYYY).
From the third-party tool I tried TO_CHAR(P_date, 'DD-MM-YYYY'), but I get the error ORA-01841: (full) year must be between -4713 and +9999, and not be 0.
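ORA-01841 usually means the format mask doesn't match the incoming value, typically because P_date arrives as a string rather than a DATE. A sketch, assuming the tool passes something like '25-01-2013 14:30:00':
-------------------------------------------
SELECT TO_CHAR(
         TO_DATE('25-01-2013 14:30:00', 'DD-MM-YYYY HH24:MI:SS'),
         'DD-MM-YYYY') AS date_only
FROM dual;
-------------------------------------------
If P_date is already a DATE, TO_CHAR(P_date, 'DD-MM-YYYY') alone should work; the error points to a string being implicitly converted with the wrong mask.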
Upgrading one of the 9i databases to 11g that supports a 3rd party software - ***the vendor provided over-simplified documentation*** and recommends moving from 9i to 10g before going to 11g. A few changes from 9i to 10g:
1) db_block_size 2) character sets etc.
Anyway, I created the database DBUPGTEST on 10.2.0.1 (ultimately moving to 11gR2, so no point patching to 10.2.0.5, is there?) with all the parameter changes. At this point, these are the two DBs in play:
Current production db: Oracle 9i - PROD => db_block_size 2048 (2K)
Current target db: Oracle 10g - DBUPGTEST => db_block_size 8192 (8K)
Steps, according to vendor notes / documentation: 1) create db, 2) exp full from 9i, 3) imp full to 10g.
Problems: 1) the import ended with "completed unsuccessfully". 2) User accounts whose default tablespace is USERS are imported (USERS had already been created during DB creation), but user accounts (schema accounts) with a different default tablespace are not. Looking at imp.log, it seems to be complaining about the db_block_size during tablespace creation, which explains why those schema accounts are not imported: the tablespace was never created.
My questions: 1) How do I import to 10g? Can I create all the tablespaces in 10g first, then import? Will it crap out because they already exist, or will it import the objects in the schemas? 2) How do I refresh data from PROD? Remember this is 9i and most of the expdp functionality is not available. And I cannot re-exp and re-imp because there are steps (SQL to run) after moving to 10g to fix some software upgrade table mappings. If I re-exp from 9i and re-imp to 10g, won't I have to re-run all those steps before the apps will run?
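For question 1, the usual workaround (a sketch; the tablespace name, datafile path, and size are assumptions) is to pre-create the tablespaces on the 10g side so imp no longer needs to run the failing CREATE TABLESPACE statements:
-------------------------------------------
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/DBUPGTEST/app_data01.dbf' SIZE 500M
  AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;
-------------------------------------------
Then run imp with IGNORE=Y so the "already exists" errors on the tablespaces don't stop the import of the schema objects into them.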
Can we execute more than one insert statement at a time (e.g. 10) in the database and give a single commit at the end of the insert statements, or must we give a commit one by one after each insert statement?
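For illustration, a minimal sketch of the single-commit variant (the table name t is an assumption):
-------------------------------------------
BEGIN
  INSERT INTO t (id) VALUES (1);
  INSERT INTO t (id) VALUES (2);
  INSERT INTO t (id) VALUES (3);
  -- ... as many inserts as needed
  COMMIT;   -- all rows become permanent together
END;
/
-------------------------------------------
Until the COMMIT, none of the inserts is visible to other sessions, and a ROLLBACK would undo all of them as one unit.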
There is a requirement in my database to restrict users from directly running queries against the database from third-party tools such as PL/SQL Developer and Toad.
There is a utility, the SQL*Plus PRODUCT_USER_PROFILE table, through which this can be done, but the restriction only applies if you run the query through SQL*Plus. What if I want to restrict statements (say, SELECT and INSERT) for a user running queries directly through PL/SQL Developer?
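A common, though easily spoofed, alternative is a logon trigger that rejects connections from the blocked tools (a sketch; program/module names vary by tool version):
-------------------------------------------
CREATE OR REPLACE TRIGGER block_dev_tools
AFTER LOGON ON DATABASE
DECLARE
  v_module VARCHAR2(64) := UPPER(NVL(SYS_CONTEXT('USERENV', 'MODULE'), ' '));
BEGIN
  IF v_module LIKE '%TOAD%' OR v_module LIKE '%PLSQLDEV%' THEN
    RAISE_APPLICATION_ERROR(-20001, 'Connections from this tool are not permitted');
  END IF;
END;
/
-------------------------------------------
Note this does not stop users with the ADMINISTER DATABASE TRIGGER privilege, and a client can fake its module name, so treat it as a deterrent rather than real security.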
I know importing a *.CSV table is easy using a few clicks in the GUI, but I want to know how to import the existing *.CSV files and create new tables from them using SQL statements.
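One pure-SQL route is an external table over the CSV file, followed by CREATE TABLE AS SELECT (a sketch; the directory path, file name, and columns are assumptions):
-------------------------------------------
-- requires CREATE ANY DIRECTORY privilege; the path is on the database server
CREATE OR REPLACE DIRECTORY csv_dir AS '/data/csv';

CREATE TABLE emp_ext (
  emp_id   NUMBER,
  emp_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv')
);

-- materialize the CSV contents as a normal table
CREATE TABLE emp AS SELECT * FROM emp_ext;
-------------------------------------------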
I have a column in a database that contains both numerical and char data. I would like to be able to do two different things (two different queries)
1. divide the numerical data in the column by 10 and leave the char data alone (just return it)
2. detect the numerical data in the column and treat it as a different value so I can run averages & counts on it while disregarding the char data
I'm not at all sure how to do number 2. I thought a CASE statement would work for number 1, but then I realized CASE doesn't like different datatypes:
select case when '1234' = 'checked' then 'checked' when '1234' = 'gen.nograde' then 'gen.nograde' when '1234' = null then null else '1234'/10 end as "GRADE" from dual
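A sketch that keeps every CASE branch the same datatype and uses a regular expression to detect the numeric rows (the table and column names grades/grade are assumptions; the real query would use the column instead of the literal '1234'):
-------------------------------------------
-- 1) divide numeric values by 10, pass char values through (all branches VARCHAR2)
SELECT CASE
         WHEN REGEXP_LIKE(grade, '^[0-9]+(\.[0-9]+)?$')
           THEN TO_CHAR(TO_NUMBER(grade) / 10)
         ELSE grade
       END AS grade_out
FROM grades;

-- 2) aggregate only the numeric values; char rows become NULL and are ignored
SELECT AVG(CASE WHEN REGEXP_LIKE(grade, '^[0-9]+(\.[0-9]+)?$')
                THEN TO_NUMBER(grade) / 10 END) AS avg_grade,
       COUNT(CASE WHEN REGEXP_LIKE(grade, '^[0-9]+(\.[0-9]+)?$')
                  THEN 1 END) AS numeric_rows
FROM grades;
-------------------------------------------
Both AVG and COUNT skip NULLs, which is what makes the CASE-inside-aggregate pattern disregard the char rows.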
From the database server, I need to monitor the details of the SQL statements that are currently running on client machines.
I tried the V_$SQLTEXT view, where I can only see the SQL statements, hash value, address, and SQL_id, but I'm not able to get the username or the name of the client machine.
How do I find out these details? Which data dictionary view do I need to use?
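Joining V$SESSION to V$SQL supplies the missing username and machine (a sketch; it requires SELECT privilege on the V$ views):
-------------------------------------------
SELECT s.username,
       s.machine,        -- client machine name
       s.program,
       q.sql_id,
       q.sql_text
FROM   v$session s
       JOIN v$sql q ON q.sql_id = s.sql_id
                   AND q.child_number = s.sql_child_number
WHERE  s.status = 'ACTIVE'
AND    s.username IS NOT NULL;   -- skip background processes
-------------------------------------------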
I have a question regarding the SQL statements embedded behind the Self Service Pages in Oracle Applications.
Taking Oracle E-Business Suite as an example, is there a way to check what SQL statement is hard-coded or embedded in a particular Self Service Page?
My teacher taught the lesson on DML statements and told us how MERGE works, but he did not give us any query for it. Please provide a query for MERGE and, if possible, explain it too. I am using Oracle 10g SQL*Plus.
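A sketch of MERGE using two assumed tables, employees and bonuses (the classic upsert pattern):
-------------------------------------------
MERGE INTO bonuses b
USING (SELECT employee_id, salary
       FROM employees
       WHERE department_id = 80) e
ON (b.employee_id = e.employee_id)
WHEN MATCHED THEN
  UPDATE SET b.bonus = e.salary * 0.10     -- row exists: update it
WHEN NOT MATCHED THEN
  INSERT (b.employee_id, b.bonus)          -- row missing: insert it
  VALUES (e.employee_id, e.salary * 0.05);
-------------------------------------------
For each source row, Oracle updates the matching target row if the ON condition finds one and inserts a new row otherwise, so one statement replaces separate UPDATE and INSERT passes.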
DDL statements automatically COMMIT the user transactions in which they appear. For example:
-------------------------------------------
create table mytable01 (i integer);
insert into mytable01 select 1 from dual;
create table mytable02 (i integer);
-------------------------------------------
After all three statements are executed, the row inserted into mytable01 is committed, because the second CREATE TABLE implicitly committed it.
In the Oracle DB server SQL guide we read:
"DDL commands, such as TRUNCATE, will fail if there is any DML command active on the table. A transaction will block the DDL command until the DML command is terminated with a COMMIT or a ROLLBACK."
But I executed the following without any problem:
-------------------------------------------
create table mytable (i integer);
insert into mytable select 1 from dual;
commit;
update mytable set i = 2;
alter table mytable add (j integer);
-------------------------------------------
So where's the truth? Are DDL statements blocked when they refer to an object with active DML on it, or not?
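For what it's worth, the quoted blocking only shows up across sessions; in the single-session test above, the ALTER TABLE first implicitly commits the session's own pending UPDATE, so nothing blocks. A sketch of the cross-session case:
-------------------------------------------
-- Session 1: leave a transaction open
update mytable set i = 3;     -- no commit

-- Session 2: the DDL cannot get the exclusive table lock
truncate table mytable;       -- raises ORA-00054: resource busy
-------------------------------------------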
I have been out of work for 2+ years. Am about to start a job next week doing Oracle back end, Forms, and Reports development among other things. I was asked if I could take a look at 3 report requirements and give an estimate on how long it would take to correct errors in these reports. All I have is a user requirement document stating what the report is currently doing and what it should be doing, a partial screen print of an Oracle Form showing correct data, and a sample report page showing incorrect data.
I am finding it rather difficult to give an estimate without seeing tables, relations, code, etc. Is it me, or does this seem nearly impossible? I do not have access to their system yet, so I cannot view the database, run select statements, run the report, etc. All I have are the documents I listed above.
Recently we upgraded from 11.1 to 11.2. But after the upgrade, SQL statements that ran fine in 11.1 were running for hours in 11.2. Statistics are collected 100%...
I'm trying to connect the following concepts/information I've been collecting. This is not my field, but I'm interested in filling some of my conceptual/technical gaps.
From a JDBC perspective, one of the benefits Prepared (and thus Callable) statements have over regular ones is that the statement is "compiled"(*) once and then reused (a performance gain).
(*) for SQL statements: building of the parse tree and execution plan
In which way can this notion be extrapolated to the invocation of Oracle stored procedures through CallableStatements? (After clearing my doubts, I may end up concluding that the only relevant feature of CallableStatements is their capacity to deal with stored procedure invocations.)
Regarding a procedure's precompiled execution plan, what I've gathered is: SQL compilation implies execution-plan generation; PL/SQL compilation implies P-code generation; and SQL statements (from PL/SQL code) are treated no differently by Oracle than SQL from Java or C/C++. These SQLs will be parsed and execution plans created for them. ... When the PL code executes the SQL statement, only then does the SQL engine receive the SQL, parse it, and create an execution plan for it.
Therefore, even when the stored procedure call can be parsed and cached in the SGA (through the OracleConnection.prepareCall("proc") invocation), the SQL statements inside it won't be effectively compiled until they are executed, right? And going deeper, will those SQL statements be cached for reuse in future invocations of the containing stored procedure? Is this a characteristic of regular stored procedure execution in Oracle, or is it due to the CallableStatement "origin"?
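One way to observe this from the database side (a sketch; the table name my_table and the WHERE clause text are assumptions) is to run the procedure repeatedly and watch the cursor statistics in V$SQL:
-------------------------------------------
SELECT sql_text, parse_calls, executions, loads
FROM   v$sql
WHERE  sql_text LIKE '%FROM my_table%';   -- a statement inside the procedure
-------------------------------------------
If LOADS stays at 1 while EXECUTIONS climbs across invocations, the statement's cursor (and its plan) is being reused from the shared pool rather than re-compiled.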
It started out pretty simple: I had to update about 40 contacts in the database, so I would run 40 separate update statements. The task has jumped to about 300 contacts, and I don't want to run 300 update statements. I would like to run this all at once. For example:
update contact set name = 'Name1' where row_id = 'row_id1';
update contact set name = 'Name2' where row_id = 'row_id2';
update contact set name = 'Name3' where row_id = 'row_id3';
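One sketch that collapses this into a single statement is to stage the pairs in a helper table (contact_updates is an assumed name) and drive one UPDATE from it:
-------------------------------------------
-- staging table holding the 300 (row_id, new name) pairs
create table contact_updates (row_id varchar2(30), new_name varchar2(100));

update contact c
set    c.name = (select u.new_name
                 from   contact_updates u
                 where  u.row_id = c.row_id)
where  c.row_id in (select row_id from contact_updates);
-------------------------------------------
Alternatively, keep the 300 statements but run them as one script with a single COMMIT at the end.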
I'm attempting to audit unsuccessful Select statements in order to trap a problem we're experiencing with our application. I have set the AUDIT_TRAIL initialization parameter to DB_EXTENDED, and bounced our database.
I've issued the AUDIT SELECT ANY TABLE WHENEVER NOT SUCCESSFUL command, and when I issue a SELECT statement as an application user, nothing appears in SYS.AUD$ even though the application has issued a select statement which returned no rows.
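For reference, DBA_AUDIT_TRAIL is the formatted view over SYS.AUD$, and with AUDIT_TRAIL=DB_EXTENDED the statement text is captured too (a sketch for checking failed statements):
-------------------------------------------
SELECT username, timestamp, returncode, sql_text
FROM   dba_audit_trail
WHERE  returncode <> 0      -- only audited statements that raised an error
ORDER  BY timestamp DESC;
-------------------------------------------
Note that WHENEVER NOT SUCCESSFUL records statements that fail with an Oracle error (e.g. insufficient privileges); a SELECT that merely returns zero rows completes successfully and is not recorded.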
We are doing select statements. I have 3 tables that I need to get information out of, and I believe I need to use a join, but everything I put into Oracle gives me an error. I'm doing the selects for a pharmacy and have a customer table, a drug table, and a prescriptions table.
I need to write a select statement that shows which customers are taking which drugs and how many mg they take.
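A sketch of the three-table join (all table and column names are assumptions about the schema):
-------------------------------------------
SELECT c.customer_name,
       d.drug_name,
       p.dosage_mg
FROM   customer c
       JOIN prescriptions p ON p.customer_id = c.customer_id
       JOIN drug d          ON d.drug_id = p.drug_id
ORDER  BY c.customer_name;
-------------------------------------------
The prescriptions table links the other two, so it supplies both foreign keys and the dosage.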
1. I want to analyze the buffer cache hit ratio. This is what I did:
DECLARE
  bufcac NUMBER(10, 2);
BEGIN
[Code]....
2. I would like to analyze the PGA and determine what percentage of the maximum allocated PGA is being used. I tried the code below but can't find the percentage.
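For item 2, V$PGASTAT exposes both the target and the current allocation, so the percentage can be computed directly (a sketch; statistic names as documented for 10g/11g):
-------------------------------------------
SELECT ROUND(100 * alloc.value / target.value, 2) AS pga_pct_used
FROM   (SELECT value FROM v$pgastat
        WHERE name = 'total PGA allocated') alloc,
       (SELECT value FROM v$pgastat
        WHERE name = 'aggregate PGA target parameter') target;
-------------------------------------------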
One of the procedures that I am working on is failing with an "ORA-0000: normal, successful completion" error.
The procedure has several update and delete statements and has logging enabled after each step. The problem with that, again, is that each time the log table gets updated, we lose the history of up to what point the procedure ran successfully. I have this issue only in the production environment and am unable to simulate it in the dev environment, which limits my options for troubleshooting the procedure code. I was using SQLERRM in the code.
Is there a way I can identify the bad records or the record causing this issue? I am very new to PL/SQL and do not know how to proceed with this. How do you debug this sort of issue (where one procedure internally invokes another one, which again invokes another, etc.)?
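One change to the logging that usually helps here (a sketch; log_error is an assumed autonomous-transaction logging procedure) is to record the full error stack and backtrace instead of bare SQLERRM, so the log pinpoints which nested call failed:
-------------------------------------------
EXCEPTION
  WHEN OTHERS THEN
    -- FORMAT_ERROR_BACKTRACE reports the line and the chain of calls that raised
    log_error(DBMS_UTILITY.FORMAT_ERROR_STACK ||
              ' at: ' || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
    RAISE;   -- re-raise so the real error is not masked
-------------------------------------------
ORA-0000 typically means SQLCODE/SQLERRM was read when no error was active (for example, after a later statement in the handler succeeded), so capturing the stack at the moment of failure matters.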