We use APEX and my professor wants us to import a table model into APEX. Every time I try to import my table it says "your export file is not supported". I asked my professor what to do and he said, "You should generate a DDL file from Designer, then use that to upload into APEX and then run it."
Can I disable tracing just for deadlock events on 11gR2? One of our applications provokes several deadlocks. I've warned the development team several times, but it's taking time to fix. Meanwhile, these deadlocks generate trace files of considerable size (from 300M to 3G). The Deadlock Graph in them is very useful, but the PROCESS STATE section is far too long, and until the developers fix the problem these files are just filling up my user_dump_dest. Can I disable them, or at least disable the PROCESS STATE section in them?
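One commonly suggested approach is event 10027, which controls how much diagnostic data Oracle writes to the deadlock trace; at a low level it is reported to keep the deadlock graph but drop the process state dump. Treat the level semantics as an assumption and verify on a test instance of your exact release first.

-- Assumption: at level 1, event 10027 limits the ORA-00060 trace to the deadlock graph.
-- Verify on a non-production 11gR2 instance before relying on it.
ALTER SYSTEM SET EVENTS '10027 trace name context forever, level 1';

-- To return to the default behaviour:
ALTER SYSTEM SET EVENTS '10027 trace name context off';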
I have read in books that flashback uses undo data to create the flashback data, or to flash the database back to a time in the past. Then what is the role of archived log files in a flashback operation? Why is it mandatory to turn on archiving before turning on flashback? Also, if you remove the latest archived log files, you can NOT flash the data back to a time in the past (Oracle complains about missing archive files).
But the import is still running without showing any count of rows imported.
I already created the tablespace the table was in before it was dropped, but when I check the space in that tablespace, it is not being consumed either. One error I got previously while performing this task is:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "CDR"."SYS_IMPORT_TABLE_03" successfully loaded/unloaded
Starting "CDR"."SYS_IMPORT_TABLE_03":  cdr/********@tsiindia directory=TEST_DIR dumpfile=CAT_IN_DATA_042012.DMP tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log
[code]....
I checked streams_pool_size and it showed zero, so I set it to 48M, and after that:
SQL> show parameter streams_pool_size

NAME               TYPE         VALUE
------------------ ------------ -----
streams_pool_size  big integer  48M
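If the import appears to hang, a quick way to check whether the job is actually progressing is to query the standard Data Pump and long-operation views (a sketch; filter on your own job name and session):

-- Current Data Pump jobs and their state
SELECT owner_name, job_name, operation, state
FROM   dba_datapump_jobs;

-- Progress of long-running operations (table loads show up here)
SELECT sid, opname, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar <> totalwork;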
I am trying to migrate a table to a new table that has the field sequence changed and also has a new field added. My main question is whether it is possible to have Data Pump add values for the new field in the target table. For example:
- original table has fields a, b, d, c
- new table has fields b, c, d, a, e
I want to load the new table and also populate field e. In this case field e is a year field, so it should be loaded with '2012'. Does Data Pump have the ability to do this? Is reorganizing the fields going to cause me any problems? We are on Oracle version 11.2.0.3.
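Data Pump matches columns by name, so the changed column order by itself should not be a problem; the open question is column e, which does not exist in the dump. A sketch of one approach (table names are assumptions): get the dump's rows into a table with the original structure (the original table itself, or a staging copy imported with impdp), then populate the new table with plain SQL, supplying '2012' for e.

-- Assumption: old_table holds the source rows (the original table, or a
-- staging table loaded from the dump). Column e gets the constant year.
INSERT INTO new_table (a, b, c, d, e)
SELECT a, b, c, d, '2012'
FROM   old_table;
COMMIT;

Alternatively, if e is declared with DEFAULT '2012' on the target table, an import with TABLE_EXISTS_ACTION=APPEND may populate it automatically, but verify that behaviour on your release before relying on it.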
I have created an amortization SELECT statement that amortizes loans based upon a table of loan information. The problem is this: when I run an ITERATION on the numbers, the SELECT statement returns a value, but the ITERATION seems to be reading zero. This leads to a bunch of zeroes in the resulting table.
P  C_USER1  C_USER9    C_USER11  C_USER10  C_B1   C_B2    C_BUDGET   C_USER12
0  Jones    131828.81  3.348     72        0      0       131828.81  2023.55
0  Jones    131828.81  3.348     1         367.8  -367.8  132196.61  2023.55
As you can see, in column C_B2, despite referencing column C_USER12, which should contain 2023.55, the query returns the mirror opposite of C_B1, which leads me to believe the system is reading a ZERO in that column even though it displays 2023.55.
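Without the full query it is hard to say, but one common cause in MODEL/ITERATE calculations is that a rule references a cell that is null or does not exist at the time the rule fires; with IGNORE NAV such a reference is read as 0 even though the displayed column shows a value. A minimal sketch of the effect (table and column names are made up):

-- Hypothetical single-loan row; the rule deliberately references a
-- non-existent cell (loan_id 0), which IGNORE NAV reads as 0.
WITH loans AS (SELECT 1 AS loan_id, 131828.81 AS balance, 2023.55 AS payment FROM dual)
SELECT loan_id, balance, payment, new_balance
FROM   loans
MODEL
  IGNORE NAV
  DIMENSION BY (loan_id)
  MEASURES (balance, payment, 0 AS new_balance)
  RULES ITERATE (1)
  ( new_balance[ANY] = balance[CV()] - payment[CV() - 1] );
-- new_balance comes back as 131828.81, as if payment were zero,
-- even though payment displays 2023.55 in the output.

If that matches your situation, check that each rule references cells of the current row (CV()) or rows that actually exist in the dimension.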
I have one report consisting of two user parameters, FROMDATE and TODATE, and two queries in the data model.
The 1st query is:
SELECT WONO,MCV_DATE,QTY FROM MCSHOP1 WHERE MCV_DATE BETWEEN :FROMDATE AND :TODATE;
It created the two user parameters, i.e. FROMDATE and TODATE.
And the 2nd query is like this:
SELECT MCVN FROM MCSHOP1 WHERE WONO=:WONO OR WONO LIKE 'RW%'||:WONO;
I don't know how to set up the 2nd query in the data model, because the WONO will come from the 1st query and there is the LIKE condition. I tried a formula column, but it returns more than one row.
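One way this is usually handled in Reports (a sketch of the idea; verify against your Reports version): make the second query a detail query of the first with a query-to-query data link, so it executes once per master row and can return multiple MCVN rows, which a formula column cannot. With such a link the child query can reference the parent's WONO column as a bind reference:

-- Child (detail) query in the data model, linked to the first query;
-- :WONO here refers to the WONO column of the parent query via the data link.
SELECT MCVN
FROM   MCSHOP1
WHERE  WONO = :WONO
   OR  WONO LIKE 'RW%' || :WONO;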
URL....Topic: The Execution Model for Triggers and Integrity Constraint Checking
Oracle uses the following execution model to maintain the proper firing sequence of multiple triggers and constraint checking:
1. Run all BEFORE statement triggers that apply to the statement.
2. Loop for each row affected by the SQL statement.
   a. Run all BEFORE row triggers that apply to the statement.
   b. Lock and change the row, and perform integrity constraint checking. (The lock is not released until the transaction is committed.)
   c. Run all AFTER row triggers that apply to the statement.
3. Complete deferred integrity constraint checking.
4. Run all AFTER statement triggers that apply to the statement.
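A small sketch to observe this ordering yourself (table and trigger names are made up; with two rows in the table, the output prints BEFORE statement, then a BEFORE row/AFTER row pair per row, then AFTER statement):

CREATE TABLE trg_demo (id NUMBER, val NUMBER);

CREATE OR REPLACE TRIGGER trg_demo_bs BEFORE UPDATE ON trg_demo
BEGIN
  DBMS_OUTPUT.PUT_LINE('BEFORE statement');
END;
/
CREATE OR REPLACE TRIGGER trg_demo_br BEFORE UPDATE ON trg_demo FOR EACH ROW
BEGIN
  DBMS_OUTPUT.PUT_LINE('BEFORE row ' || :OLD.id);
END;
/
CREATE OR REPLACE TRIGGER trg_demo_ar AFTER UPDATE ON trg_demo FOR EACH ROW
BEGIN
  DBMS_OUTPUT.PUT_LINE('AFTER row ' || :OLD.id);
END;
/
CREATE OR REPLACE TRIGGER trg_demo_as AFTER UPDATE ON trg_demo
BEGIN
  DBMS_OUTPUT.PUT_LINE('AFTER statement');
END;
/
INSERT INTO trg_demo VALUES (1, 10);
INSERT INTO trg_demo VALUES (2, 20);

SET SERVEROUTPUT ON
UPDATE trg_demo SET val = val + 1;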
We use APEX 4.2 with APEX Listener. Recently we patched APEX to 4.2.2, and now when we try to create a form based on a table, after selecting the schema and table we get the following error: "You do not have access to the schema that you are importing. Import failed". I've seen some posts regarding this error, but nothing that applies to my current situation. The database grants are in place.
I created a new form for Oracle Apps. At first, when I ran the form from the application, all the fields' backgrounds were black, so I changed the background in the Property Palette to white and the foreground to black.
Now it displays fine, but when I close this form and open another, those fields are blacked out there. What should I do? I know the problem comes from the new form.
We would like to use Oracle Secure Backup with a SAS interface on an IBM autoloader model TS-2900. Would it be possible to configure Oracle Secure Backup using SAS?
I am trying to use the model clause to get a comma-separated single row out of multiple rows. My scenario is like this:
SQL> desc test1
 Name      Null?    Type
 --------- -------- ---------
 ID                 NUMBER
 VALUE              CHAR(6)
SQL> select * from test1 order by id;
        ID VALUE
---------- ------
         1 Value1
         2 Value2
         3 Value3
         4 Value4
         5 Value4
         6
         7 value5
         8
The query that I have is:

SQL> with t as
  ( select distinct substr(value,2) value
    from test1
    model
    ignore nav
    dimension by (id)
    measures (cast(value as varchar2(100)) value)
    rules
    ( value[any] order by id = value[cv()-1] || ',' || value[cv()] )
  )
  select max(value) oneline
  from t;
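If the end goal is just one comma-separated row, and the database is 11gR2 or later, LISTAGG is usually a simpler alternative to the MODEL approach and skips the NULL values by itself (a sketch against the same test1 table):

SELECT LISTAGG(value, ',') WITHIN GROUP (ORDER BY id) AS oneline
FROM   test1;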
We have many APEX applications in APEX 3.0 running on a 10.2.0.4 database that desperately needs to be upgraded. As a test, I've set up a clean 11gR2 database and copied the production APEX database into it via Data Pump. I set up APEX Listener, as I don't have any OAS sitting around and the EPG doesn't seem to be supported for APEX 3.0; the Listener documentation doesn't say one way or the other.
When trying to log in, I get the login page, but it tries to reference files such as apex_get_3_1.js, while I only have files such as htmldb_get.js in my images directory in production. I noticed it is looking for what appear to be 3.1 files instead of 3.0 files, which concerns me.
The APEX listener appears to be more than just a Java PL/SQL gateway.
Is there a minimum version of APEX the APEX Listener supports?
For applications and TimesTen databases on the same server, we can use direct mode to get the best performance. But I want to know how big the JVM heap size should be for my Java application when enabling direct mode.
Does my application need more heap memory in direct mode than with other local communication protocols, such as Unix domain sockets or IPC? Supposing my TimesTen database takes 12GB of memory from the OS, does that mean I need to specify the same size for the JVM heap (-Xmx12G)?
I want to import a table from my old dump file. The same table is already there in the development box, but a few more columns were added to it during testing, so those columns are not present in the dump.
TABLE_EXISTS_ACTION=TRUNCATE

The new table:

SQL> desc "TESTINVENTORY"."TTRANSACTION"
 Name             Null?    Type
 ---------------- -------- ----------
 TRANSACTIONID    NOT NULL CHAR(26)
 BRANCHCODE       NOT NULL CHAR(3)
 EXTERNALSYSTEM   NOT NULL CHAR(3)
 EXTRACTSYSTEM    NOT NULL CHAR(3)
 OWNERBRANCHCODE  NOT NULL CHAR(3)
 TRADEREFERENCE   NOT NULL CHAR(20)
[code]...
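Since Data Pump matches columns by name, the rows from the dump should leave the extra development-only columns at their declared default or NULL (verify with a small test, as behaviour can depend on the access method). It is worth confirming up front which columns were added and whether any of them are NOT NULL without a default, because those would make the load fail. A quick dictionary check (a sketch; use all_tab_columns if you lack the DBA views):

-- Columns of the target table with their NULL-ability and defaults,
-- to spot the columns added during testing that the dump does not contain.
SELECT column_name, nullable, data_default
FROM   dba_tab_columns
WHERE  owner = 'TESTINVENTORY'
AND    table_name = 'TTRANSACTION'
ORDER  BY column_id;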
I'm trying to add an edmx file to my project for the first time. I want to choose the Oracle provider ODP.NET but cannot find Oracle in the data source list. I have Oracle 11g installed, with ODP and ODT installed, and I can access the database from the solution as well. I saw Oracle listed under data sources when I tried to connect the solution to the database through Server Explorer. The solution is connected to the Oracle database through ODP.
Export and import of data in Oracle Forms... I have created 2 buttons, one for export; its trigger is like this:
declare
  alrt        number;
  v_directory varchar2(200) := 'c:\backup';  -- change this if C: is not the drive Windows is installed on
  path        varchar2(100) := 'back_up' || to_char(sysdate, 'dd_mm_yyyy-hh24_mi_ss');
  v_exp       varchar2(200) := 'exp hamada/hamada2013@orcl file=' || v_directory || '\' || path || '.dmp';
[code]....
This code works: it exports not only the data but also the creation of the table. For example, I do the export and everything is fine so far, and I find the .dmp in the backup folder. But when I delete all the data from my app and try to import this .dmp, it shows me an error telling me that the table PHONE is already created. Can I export just the data of PHONE and not the table creation, or how can I import just the data from this .dmp?
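A sketch of the matching import command, built the same way as the export string above (same connection details are assumed). With the classic imp utility, IGNORE=Y skips the "table already exists" error and loads the rows into the existing table, which is effectively importing just the data:

-- Counterpart to v_exp above; IGNORE=Y makes imp skip the CREATE TABLE step
-- and load the rows into the existing PHONE table.
v_imp varchar2(200) := 'imp hamada/hamada2013@orcl file=' || v_directory || '\' || path || '.dmp ignore=y';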
I'm trying to import data in CSV format that is stored in a CLOB column of a single record in the database. I want to import the data contained in this CLOB into a table. I am having limited success using JH_UTIL. Here's the script that I am running (which works):
set serveroutput on;
declare
  v_lines  jh_util.stringlist_t;
  v_values jh_util.stringlist_t;
begin
  for rec in (select 1 id, ac.clob_content csv
[code].......
The problem is that when the file gets too big, I get the following error:
Error report:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 6
06502. 00000 - "PL/SQL: numeric or value error%s"
*Cause:
*Action:
I assume this happens because the file size is too big. Is there any way to process larger "files" (CLOB data)?
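Without knowing JH_UTIL internally, a likely cause is that the whole CLOB (or one large piece of it) is being assigned to a VARCHAR2, which is limited to 32767 bytes in PL/SQL. A sketch of a line-at-a-time approach using only DBMS_LOB, so each piece stays under that limit (the table name and newline handling are assumptions about your data):

declare
  v_csv  clob;
  v_pos  pls_integer := 1;
  v_next pls_integer;
  v_len  pls_integer;
  v_line varchar2(32767);
begin
  select ac.clob_content
  into   v_csv
  from   attached_csv ac          -- hypothetical table holding the CLOB
  where  rownum = 1;

  v_len := dbms_lob.getlength(v_csv);
  while v_pos <= v_len loop
    v_next := dbms_lob.instr(v_csv, chr(10), v_pos);  -- next newline
    if v_next = 0 then
      v_next := v_len + 1;                            -- last line without trailing newline
    end if;
    v_line := dbms_lob.substr(v_csv, v_next - v_pos, v_pos);
    -- parse v_line here (split on ',') and insert into the target table
    v_pos := v_next + 1;
  end loop;
end;
/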
I have an Oracle 11.2.0.3 database on HP-UX with one table of around 5 lakh (500,000) rows. I want to copy this table to another Oracle 11.2.0.3 database running on a different OS, Windows Server 2008 R2.
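For a table of that size, one straightforward option (a sketch; the link name, TNS alias, credentials, and table name are assumptions) is a database link from the Windows target to the HP-UX source and a single SQL statement to pull the rows across; the different operating systems don't matter for SQL over a link. Data Pump with NETWORK_LINK would work just as well.

-- On the target (Windows) database: link to the HP-UX source...
CREATE DATABASE LINK src_hpux
  CONNECT TO app_user IDENTIFIED BY app_password
  USING 'SRC_TNS_ALIAS';

-- ...then copy the table in one statement.
CREATE TABLE my_table AS
  SELECT * FROM my_table@src_hpux;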
I want to load data into a table from a simple text file, using a VB.NET application that connects to an Oracle 10g, SQL Server, or MySQL database, depending on the parameters.
When I connect to a SQL Server database I use the SQL command "BULK INSERT CODPOSTAL2 FROM file.txt WITH (DATAFILETYPE = 'char', FIELDTERMINATOR = ';', ROWTERMINATOR = '\n')" and it works fine.
With a MySQL database I use "LOAD DATA INFILE file.txt INTO TABLE CODPOSTAL2 FIELDS TERMINATED BY ';'" and it also works.
My problem is with Oracle. I tried the same approach as with MySQL, but it gave an error, "wrong" or "unknown command". I also tried it in SQL*Plus, but it doesn't seem to recognize the LOAD command.
One more thing: I can't use SQL*Loader; it has to work this way.
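Oracle has no LOAD DATA INFILE statement, so the closest SQL-only equivalent, when the text file is visible to the database server, is an external table: it is defined once with plain SQL and can then be queried or copied by the VB.NET application like any other table. A sketch (directory path, file name, and column layout are assumptions; the ORACLE_LOADER access driver runs inside the database, so the sqlldr client utility is not involved):

-- One-time setup: a directory object pointing at the folder holding the file.
CREATE OR REPLACE DIRECTORY load_dir AS '/data/loads';

-- External table describing the semicolon-separated file.
CREATE TABLE codpostal2_ext (
  cod_postal  VARCHAR2(10),
  localidade  VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ';'
  )
  LOCATION ('file.txt')
);

-- Load the real table with ordinary SQL from the application.
INSERT INTO codpostal2 SELECT * FROM codpostal2_ext;
COMMIT;

If the file only exists on the client machine, the remaining option is to read it in VB.NET and issue batched parameterized INSERT statements through ODP.NET.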
I have two columns in Excel that I need to import into an Oracle table, but the problem is that one column is of type date, and I want the same date format to be maintained in the table too.
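One point that usually resolves this, regardless of the import tool: an Oracle DATE column stores no format at all, only the date value. The format only exists when converting to or from text, so the Excel appearance is preserved by using the same format mask on the way in and on the way out (a sketch with assumed table and column names):

-- Loading a text value that looks like the Excel cell, e.g. 25/12/2012:
INSERT INTO my_table (some_date)
VALUES (TO_DATE('25/12/2012', 'DD/MM/YYYY'));

-- Displaying it later in the same format:
SELECT TO_CHAR(some_date, 'DD/MM/YYYY') AS some_date
FROM   my_table;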
We have a table in an Oracle 9i database with around 14 million records, and we would like to import that table into a 10g database with a similar structure. We have exported the table from the 9i database and would like to import it into the same schema name in the 10g database but with a different table name, as we already have a table with the same name in that schema. Is it possible to import a table with a different table name?
We have a workaround: import the table into another schema in the 10g database and then push the data into our main table, but we want to know whether the above requirement is possible.
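For reference, a sketch of the final step of that workaround (schema and table names are assumptions). Since the dump comes from the classic 9i exp utility it is imported with imp into a staging schema, and the rows are then appended to the differently named table in the main schema:

-- After importing the 9i dump into a staging schema (e.g. STAGE_USER),
-- push the rows into the main schema's table under its new name.
INSERT /*+ APPEND */ INTO main_schema.big_table_copy
SELECT * FROM stage_user.big_table;
COMMIT;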
I have a table as below. This table is not partitioned.
create table t1 ( d1 date, n1 number not null );
[Code]....
I took an export dump of the above table, and after that I renamed table t1 to t1_old. Then I recreated the table as below, with a default value on the d1 column.
create table t1 ( d1 date default to_date('01/01/1100','DD/MM/YYYY','NLS_CALENDAR=GREGORIAN'), n1 number not null
[Code]....
But the problem here is that the data import is taking much more time than I expected. I can't afford a MAXVALUE partition here, as my DBA team mentioned that if you add a MAXVALUE partition, adding further partitions at a later stage is difficult on this table.
What can I apply in this scenario to make the import faster? I am using Oracle version 10.2.0.1.0.
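One generic lever that often helps (a sketch, assuming the indexes on t1 can be built after the load rather than maintained during it; the index name and column are made up): import the data into the bare table first, then create the indexes in one pass, which is usually much faster than row-by-row index maintenance during the import.

-- Build indexes after the data load instead of before it.
-- PARALLEL/NOLOGGING speed up the build; reset them afterwards if needed.
CREATE INDEX t1_d1_idx ON t1 (d1)
  NOLOGGING
  PARALLEL 4;

ALTER INDEX t1_d1_idx NOPARALLEL LOGGING;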
1) What is the harm, apart from those listed below, if I'm not using the index even though it exists?
Let's say I have an index on the salary column, and I'm using "select * from employees;".
Harms:
A) It takes more CPU resources to maintain the index (for example, compressing the bitmap format of the index value) on insertion.
B) I hope there is no extra effort needed when updating the indexed column's value.
2) Even if I'm not using the index, the overhead above will still apply to the indexed column as normal(?). Then how can we say an unused index is causing a performance issue?
3) If you say an index has positives and negatives, then playing with the index (adding and removing it as frequently as we need) is also not a solution, am I right? So overall we have to accept the negatives of the index.
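One middle ground worth mentioning (a sketch; it needs 11g or later, and the index name is an assumption): instead of repeatedly dropping and recreating an index to judge whether it earns its maintenance cost, make it invisible, so the optimizer stops using it while it is still maintained, and then decide whether to drop it for good.

-- Hide the index from the optimizer without dropping it (11g+).
ALTER INDEX emp_salary_idx INVISIBLE;

-- Observe the workload; if nothing regresses, drop the index,
-- otherwise bring it back:
ALTER INDEX emp_salary_idx VISIBLE;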