Extract Index DDL From Database?
Apr 27, 2011
Is there a way I can extract the index DDL from my database?
How do I extract the DDL of a user's constraints in the database?
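One common approach for both of the above is DBMS_METADATA (available since 9i). A minimal sketch; the owner, index and constraint names below are placeholders:

SET LONG 100000 PAGESIZE 0

-- DDL for a single index (owner and index name are placeholders)
SELECT DBMS_METADATA.GET_DDL('INDEX', 'EMP_IDX1', 'SCOTT') FROM dual;

-- DDL for a user's constraints (referential constraints use object type 'REF_CONSTRAINT')
SELECT DBMS_METADATA.GET_DDL('CONSTRAINT', constraint_name, owner)
FROM   dba_constraints
WHERE  owner = 'SCOTT'
AND    constraint_type IN ('P', 'U', 'C');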
I am implementing GoldenGate 11g R2 for a 12c database, but I am getting the error below. My question: why does GoldenGate need this specific package ... since this is homogeneous & heterogeneous?
/u01/12c_database_software/goldengate/dirtmp.
2013-08-30 05:28:44 INFO OGG-01513 Oracle GoldenGate Capture for Oracle, ext1.prm: Positioning to Sequence 66, RBA 25067536, SCN 0.0.
2013-08-30 05:28:44 ERROR OGG-01028 Oracle GoldenGate Capture for Oracle, ext1.prm: ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_INTERNAL_CLKM' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored.
2013-08-30 05:28:44 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, ext1.prm: PROCESS ABENDING.
Database details
----------------
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM' and object_type in ('PACKAGE');
no rows selected
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM';
no rows selected
11:53:51 SQL>select * from v$version
11:53:55 2 /
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Solaris: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
All, I have an infuriating problem and I'm hoping for some advice. Well, actually it is a number of problems, but I'll use one example as a microcosm. I'm going to use this as the example:
------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | 912K(100)| |
| 1 | HASH JOIN | | 3412K| 940M| 826M| 912K (1)| 03:02:28 |
| 2 | HASH JOIN | | 3412K| 787M| 699M| 748K (1)| 02:29:39 |
| 3 | HASH JOIN | | 3610K| 657M| 557M| 560K (1)| 01:52:07 |
[code]...
Now then, this query runs perfectly and as expected (direct reads/DB scattered reads) until it hits step 13 in the plan (FTS of T6). At step 13 it basically enters into a (rather fatal) process of reading almost nothing but undo exclusively.
Now then, I wonder why this might be. Of course the natural reaction is to say "the data is changing, it's read consistency (optional rookie insult)". Now, I find it hard to believe that the volume of data could be so extensively changed, keeping in mind it's a good-sized table and it's doing a full access.
I checked the DBA_HIST_SEG_STAT (DB_BLOCK_CHANGES_DELTA) view for the period; the run-up, i.e. explain plan steps 2-12, took approx 20 minutes. In this window, less than 1% of the blocks in table T6 were changed (approx 3-4k out of a 500k block table). Nearly every access of this table is reading undo, yet since this query started, 99% of the table is unchanged. But the query is behaving as if everything it reads has altered since it began.
To be clear, I understand read consistency etc. and expect it. What I DON'T expect is virtually the ENTIRE table scan operation routing via undo when I'm accessing the full table, yet hardly any blocks are altered after I start running.
Note this query/application seems to fetch the rows in blocks of 1k; it's almost as if it's re-executing for each fetch, but I wouldn't have expected that.
In summary:
>Query starts
>hits final table
>almost exclusive undo reads
>under 1% of blocks changed in the read table since the query began
Is there any other way anyone knows of that might cause a session to read into undo beyond a session changing data after the query executed?
How do I pick / extract the Java class files from the database? We have not maintained the latest code in the Oracle application server where the Java class code resides.
All the Java classes are available only in the database, so we need to pick the latest Java class code from the production environment. In TOAD we tried, and all the class objects are listed on the left side, but we are unable to extract the code. So how can we take the latest code (Java classes) from the production database as a backup?
I want to extract data from my local database table to a text file using PL/SQL.
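A minimal PL/SQL sketch using UTL_FILE; the directory object, file name, table and columns (emp) are placeholders and would need to match your own schema:

CREATE OR REPLACE DIRECTORY exp_dir AS '/tmp';   -- needs the CREATE ANY DIRECTORY privilege

DECLARE
   l_file UTL_FILE.FILE_TYPE;
BEGIN
   -- open the output file in the directory object created above
   l_file := UTL_FILE.FOPEN('EXP_DIR', 'emp.txt', 'W');
   FOR r IN (SELECT empno, ename, sal FROM emp) LOOP
      UTL_FILE.PUT_LINE(l_file, r.empno || ',' || r.ename || ',' || r.sal);
   END LOOP;
   UTL_FILE.FCLOSE(l_file);
END;
/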
I am getting the below error when I run utlrp in my database.
ERROR at line 1:
ORA-08102: index key not found, obj# 423571, file 6, block 113416 (2)
ORA-06512: at "SYS.UTL_RECOMP", line 760
ORA-06512: at line 4
I have run dbverify on File_id=6 but it did not return any corrupt pages or blocks.
I am on 11.2.0.3 Enterprise Edition. We are using the new feature "Composite Domain Index" for a domain index on a very large table (>250,000,000 rows). It really works with mixed queries. We added two number columns using FILTER BY. We have lots of DML on this table, therefore we execute synchronize and optimize once a week. The synch behaves pretty normally, but "optimize_index" takes a very, very long time to complete. I have switched on 'logging' for the optimize process. The $I table takes some time but finishes normally. But the optimization of the $S table (the table created for the CDI feature) has been running for over 12 hours now and is far from finished. From the logfile, I can see that it optimizes 1000 rows every 20 minutes. Here is the output of the logfile:
Oracle Text, 11.2.0.3.0
14:33:05 06/26/12 begin logging
14:33:05 06/26/12 event
14:33:05 06/26/12 process $N for optimize: SEQDEV.GEN_GES_DESCRIPTION_CTX_I
14:33:16 06/26/12
14:33:16 06/26/12
[code]....
I haven't found a recommendation from Oracle not to use "optimize_index" for domain indexes with CDI, but in my case it would be much faster just to drop and recreate the domain index in question.
Our database size is 100GB. I removed a few records from a table and rebuilt the index. The size of the index reduced considerably after the rebuild, but now I see our database size has increased to 115GB. I know an online rebuild creates a second index, which is removed after the build is finished, so why the increase in database size? Is there a way to make that space show up as free again?
select sum(bytes) from dba_segments where owner='abc' and segment_name='abc_index_1';
8GB
ALTER INDEX abc_index_1 REBUILD ONLINE ;
select sum(bytes) from dba_segments where owner='abc' and segment_name='abc_index_1';
2GB
We have a requirement from the customer to start using data and index compression in our 11g database. Is this available in Oracle 10g/11g without any additional cost? We are not sure if this will work with our application, so we will have to test it in-house. Is it possible to compress the existing table data/index to test it out?
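As a hedged note: basic table compression and index key compression are included in Enterprise Edition, while 11g OLTP table compression requires the separately licensed Advanced Compression option, so the license position should be checked before testing. A sketch for compressing existing segments on a test copy (table and index names are placeholders):

ALTER TABLE my_tab MOVE COMPRESS;              -- rebuilds the table with basic compression
-- ALTER TABLE my_tab MOVE COMPRESS FOR OLTP;  -- 11g OLTP compression (Advanced Compression option)

-- a MOVE marks the table's indexes UNUSABLE, so rebuild them (optionally key-compressed)
ALTER INDEX my_tab_idx1 REBUILD COMPRESS;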
How do I verify whether rebuilding an index is required in the database?
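One commonly quoted check (a sketch, and the criteria are debatable; Oracle generally advises rebuilding only when there is a demonstrated need) is to validate the structure and look at INDEX_STATS. Note that VALIDATE STRUCTURE locks the underlying table while it runs; the index name is a placeholder:

ANALYZE INDEX scott.emp_idx1 VALIDATE STRUCTURE;

SELECT name,
       height,
       lf_rows,
       del_lf_rows,
       ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 2) AS pct_deleted
FROM   index_stats;   -- holds only the last index analyzed in this session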
I have a huge table (about 60 GB) partitioned by range. The index on this table is a global index created on 4 columns together. I have a query which is running very slowly. The explain plan shows the use of this global index. The explain plan does not show Pstart and Pstop because the index is global.
I have a global index and I want to convert it to a local index. Is there a way to recreate it as a local index without dropping the global index?
I can create a local index first and then drop the global index. But is there a way to create it without dropping the global index, i.e. just convert it?
I am facing the error "ORA-01502: index or partition of such index is in unusable state" while loading text data using
SQL*Loader with the direct path option (direct=Y, rows=10000). The table has a composite non-unique index. If I query dba_indexes for the affected index, it shows the index status as VALID. There was no maintenance done on the affected table or index. I have tried loading the same data using the conventional path and didn't find any issues there.
I have a query which had a join:
a.c1=b.c1 and a.c2=@var
where @var is user-supplied input at runtime. We had an index on a.c2, and the CBO would use this index to generate an optimised query plan. We found some records from table "b" were being dropped due to the inner join, so we changed the join. It'd be like:
a.c1(+)=b.c1 and nvl(a.c2,@var)=@var
This query is no longer using the index; instead it's doing a full table scan, causing the query to slow down. I have tried creating an index on nvl(a.c2,'31-dec-9999'),
but the CBO won't use it. Is there any way to create an index on this column so that the full table scan can be avoided?
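A hedged sketch of a function-based index along the lines the post already tried; the index name is a placeholder, and the key point is that the query predicate has to contain the identical expression:

-- assuming a.c2 is a VARCHAR2 column; for a DATE column use an explicit
-- TO_DATE(...) with a format mask so the indexed expression stays deterministic
CREATE INDEX a_c2_fbi ON a (NVL(c2, '31-dec-9999'));

-- The CBO considers this index only when the predicate uses the identical
-- expression, e.g. NVL(a.c2, '31-dec-9999') = :var.
-- With NVL(a.c2, :var) = :var the expression differs, so the index is ignored.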
How can I force an index if the query is not using the index on the table?
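A sketch using an optimizer hint; the INDEX hint forces the named index when that plan is legal, although fixing statistics or the predicate is usually preferable to hinting (table alias and index name are placeholders):

SELECT /*+ INDEX(e emp_name_idx) */ ename, sal
FROM   emp e
WHERE  ename = 'SMITH';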
We have occurrences of "enq: TX - index contention" in the database. Using the SQL ID, we have identified the INSERT statement and the table into which they are inserting.
This table has almost 25 different indexes, some of which are unique as well. I am wondering how to identify the actual index causing the issue, out of these 25 indexes.
Is there any way to pinpoint the name of the index which is causing the lock? My plan is, once the index is identified, to check the extents, INITRANS and other attributes of that index and fix them.
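One hedged way to narrow it down (ASH requires the Diagnostics Pack license, and CURRENT_OBJ# is not always populated, but when it is it usually points at the segment involved) is to map the object recorded against the wait event back to DBA_OBJECTS:

SELECT o.owner, o.object_name, o.object_type, COUNT(*) AS samples
FROM   v$active_session_history ash
JOIN   dba_objects o ON o.object_id = ash.current_obj#
WHERE  ash.event = 'enq: TX - index contention'
GROUP  BY o.owner, o.object_name, o.object_type
ORDER  BY samples DESC;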
Can we create a non-clustered index on a clustered index?
Can anyone give an explanation of the difference between an index and a clustered index?
It would be great if I could get an explanation of how memory allocation and execution take place.
What is the difference between index rebuild and index rebuild online?
I have a requirement where the data between [] or ][ has to be extracted from a string.
Here is my situation :
INPUT:
[abc] [def]-[ghi][jlk]
OUTPUT:
row_num field_name
1 abc
2 <blank_space>
3 def
4 -
5 ghi
6 null
7 jkl
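A partial sketch for the bracketed pieces only (it does not by itself produce the in-between tokens such as '-' or the blank), using the occurrence and sub-expression arguments of REGEXP_SUBSTR available in 11g:

WITH t AS (SELECT '[abc] [def]-[ghi][jlk]' AS str FROM dual)
SELECT LEVEL AS row_num,
       REGEXP_SUBSTR(str, '\[([^]]*)\]', 1, LEVEL, NULL, 1) AS field_name
FROM   t
CONNECT BY LEVEL <= REGEXP_COUNT(str, '\[[^]]*\]');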
Is there any option available in DBMS_METADATA.GET_DDL such that I can extract the script (user creation + grants) only for a particular schema?
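A sketch using GET_DDL together with GET_GRANTED_DDL (both part of DBMS_METADATA); 'SCOTT' is a placeholder schema, and a GET_GRANTED_DDL call raises ORA-31608 if the user has no grants of that type:

SET LONG 200000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('USER', 'SCOTT') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT',   'SCOTT') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', 'SCOTT') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', 'SCOTT') FROM dual;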
How do I extract the DDL of a DBMS_JOB in SQL*Plus?
I have to extract a CSV file by running a SQL file.
SQL>@d: estEndItem_Vio_Item_Material_Violations.sql;
This works at the SQL prompt. I now have to do the same via the scheduler, for which I want to embed it in a procedure.
create or replace procedure test_csv
as
begin
@d: estEndItem_Vio_Item_Material_Violations.sql;
end;
How can I run the SQL file from within a procedure?
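SQL*Plus client commands such as @file cannot run inside PL/SQL, so the block above will not compile. One hedged alternative for the scheduling requirement is an external DBMS_SCHEDULER job that calls a shell wrapper which itself runs sqlplus with the script; the job name, schedule and paths below are placeholders:

BEGIN
   DBMS_SCHEDULER.CREATE_JOB(
      job_name        => 'EXTRACT_CSV_JOB',
      job_type        => 'EXECUTABLE',
      job_action      => '/u01/app/oracle/scripts/run_extract.sh',  -- wrapper calling: sqlplus user/pwd @the_script.sql
      repeat_interval => 'FREQ=DAILY;BYHOUR=6',
      enabled         => TRUE);
END;
/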
I'm stuck on one scenario.
I have the following table:
------------------------------- Create table
drop table age_rate;
CREATE TABLE age_rate(
  age_0_4   NUMBER(4),
  age_5_20  NUMBER(4),
  age_21_34 NUMBER(4),
  age_35_44 NUMBER(4)
);
------------------------------- Insertion
INSERT INTO age_rate
SELECT 45, 50, 60, 90 FROM dual UNION ALL
SELECT 45, 50, 60, 88 FROM dual UNION ALL
SELECT 40, 50, 60, 90 FROM dual UNION ALL
SELECT  5, 50, 60, 88 FROM dual;
-------------------------------Query on table
SELECT * FROM age_rate;

Query output:

AGE_0_4  AGE_5_20  AGE_21_34  AGE_35_44
     45        50         60         90
     45        50         60         88
     40        50         60         90
      5        50         60         88

Required output (RATE, MIN_AGE, MAX_AGE):

-- The rates below are for age band 0_4
45   0   4
45   0   4
40   0   4
 5   0   4
-- The rates below are for age band 5_20
50   5  20
50   5  20
50   5  20
50   5  20
-- The rates below are for age band 21_34
60  21  34
60  21  34
60  21  34
60  21  34
-- The rates below are for age band 35_44
90  35  44
88  35  44
90  35  44
88  35  44
Rules: all the data is in rows, so each column of a row should become a separate row, and two columns, Min_age and Max_age, should be added automatically, with their values derived from the column name. For example, if the column name is like age_0_4, then put 0 in min_age and 4 in max_age; in other words, the values for Min_age and Max_age are extracted from the column name. I don't know if this is possible or not.
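A sketch using the 11g UNPIVOT clause, deriving min_age and max_age from the generated column-name literal with REGEXP_SUBSTR (column names as created above):

SELECT rate,
       TO_NUMBER(REGEXP_SUBSTR(band, '[0-9]+', 1, 1)) AS min_age,
       TO_NUMBER(REGEXP_SUBSTR(band, '[0-9]+', 1, 2)) AS max_age
FROM   age_rate
UNPIVOT (rate FOR band IN (age_0_4, age_5_20, age_21_34, age_35_44))
ORDER  BY min_age;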
I am using Oracle database version 11.2.1 and would like to extract the level change and the level start date where reason_code is 'PROMO', split by ID.
The test script is below:
create table test(
  id          number,
  start_date  date,
  reason_code varchar2(10),
  "LEVEL"     number   -- LEVEL is an Oracle reserved word, so it is quoted here
);
insert into test values(001, '01-JAN-13', 'PROMO', 2);
[code]....
The expected output would be:
Fields - ID, old_level, old_level_start_date, new_level, new_level_start date
e.g.
001, 2, '08-MAR-13' , 3, '05-MAY-13'
002, 4, '13-APR-13', 5, '02-MAY-13'
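A sketch using LAG to pair each 'PROMO' row with the previous level per id (it assumes the quoted "LEVEL" column from the script above; whether the first row per id should be dropped is a judgment call):

SELECT id, old_level, old_level_start_date, new_level, new_level_start_date
FROM  (SELECT id,
              LAG("LEVEL")    OVER (PARTITION BY id ORDER BY start_date) AS old_level,
              LAG(start_date) OVER (PARTITION BY id ORDER BY start_date) AS old_level_start_date,
              "LEVEL"         AS new_level,
              start_date      AS new_level_start_date,
              reason_code
       FROM   test)
WHERE reason_code = 'PROMO'
AND   old_level IS NOT NULL;   -- skip the first row per id, which has no prior level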
Here I face a problem: the numbers must be followed by a DOT ".". This is not correct; if the statement only contains numbers without a DOT, they are not extracted. This is the query:
SELECT REGEXP_SUBSTR ( 'hello to 8898989898989 jkjk nnnm mnj'
, '([0-9]+.[0-9]*)' || -- Starts with digit(s) (may or may not have digits after .)
'|' || -- or
'(.[0-9]+)' -- starts with decimal point
) AS result
FROM dual
;
but that means I have to add a "." after the numbers, as follows:
SELECT REGEXP_SUBSTR ( 'hello to 8898989898989 jkjk nnnm mnj'
, '([0-9]+.[0-9]*)' || -- Starts with digit(s) (may or may not have digits after .)
'|' || -- or
'(.[0-9]+)' -- starts with decimal point
) AS result
FROM dual
;
but this is not right
I want to extract numbers without a DOT as well.
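A sketch that makes the decimal part optional by escaping the dot and marking the fractional group with '?', so plain integers match too:

SELECT REGEXP_SUBSTR('hello to 8898989898989 jkjk nnnm mnj',
                     '([0-9]+(\.[0-9]+)?)|(\.[0-9]+)') AS result
FROM   dual;
-- returns 8898989898989 here; with '123.45' in the text it would return 123.45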
I have a table:
create table employee_function
(
id_employee number,
id_function number
);
with employees and their functions.
I want to extract all employees who have both functions (e.g. id_function = 1 and id_function = 2).
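A sketch that keeps only the employees holding both functions, using the table definition above (column name corrected to id_employee):

SELECT id_employee
FROM   employee_function
WHERE  id_function IN (1, 2)
GROUP  BY id_employee
HAVING COUNT(DISTINCT id_function) = 2;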
How do I extract the hour part from a date?
suppose my date is like
29-mar-2004 09:20:34
I want to get only the hour from the above date.
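Two common sketches: TO_CHAR with the HH24 format mask, or EXTRACT after casting to TIMESTAMP (the TO_DATE format mask assumes an English date language setting):

SELECT TO_CHAR(TO_DATE('29-mar-2004 09:20:34', 'dd-mon-yyyy hh24:mi:ss'),
               'HH24') AS hour_part
FROM   dual;   -- returns '09'

SELECT EXTRACT(HOUR FROM CAST(
         TO_DATE('29-mar-2004 09:20:34', 'dd-mon-yyyy hh24:mi:ss') AS TIMESTAMP))
       AS hour_part
FROM   dual;   -- returns 9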
I'm a SAP consultant working in SQL on NT platforms. This is the first conversion from Oracle that I have done. My client has provided us with a "cold" backup of the Oracle database on an HD formatted in Unix. I have the partition mounted and I'm able to view the files. I have the ORDATA folder with all the .DBF files.
Q: How do I extract the data from the .DBF files? I need to export to something workable with SQL.
The original database was on Unix; I'm operating on a Windows platform.