Server Administration :: How To Apply Composite Key On A Table
Jul 21, 2011
How can I apply a composite key on a table?
I want to upgrade 10.2.0.1 to 10.2.0.5.
Can I apply 10.2.0.5 directly, or do I need to apply the intermediate patch sets one by one?
Can we apply the 11.1.0.7 patch set on a 10.2.0.3 database server? In other words, are patch sets cumulative, or do I have to install the patch set specific to my release? The issue is that I am getting the error below from RMAN: RMAN-03009: failure of delete command on ORA_DISK_1 channel ORA-19633: control file record is out of sync with recovery catalog
I have an employee table with a primary key and a self-referencing foreign key, as shown here:
create table employee (
  id            number       not null,   -- datatypes are assumed; the original post omitted them
  name          varchar2(50) not null,
  department    varchar2(30) not null,
  supervisor_id number       not null,
  constraint constraint_1 primary key (id),
  constraint constraint_2 foreign key (supervisor_id) references employee (id)
);
Now if I make the primary key composite, as shown below:
create table employee (
  id            number       not null,   -- datatypes assumed, as above
  name          varchar2(50) not null,
  department    varchar2(30) not null,
  supervisor_id number       not null,
  constraint constraint_1 primary key (id, name),
  constraint constraint_2 foreign key (supervisor_id) references employee (id)
);
Oracle throws the following error:
ORA-02270: no matching unique or primary key for this column-list
How can this error be fixed without changing the composite primary key?
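A foreign key must reference a primary key or unique constraint whose column list matches exactly, so once the PK becomes (id, name), a reference to (id) alone no longer has a matching key. A minimal sketch that keeps the composite primary key and simply adds a unique constraint on id for the self-referencing foreign key to point at (the unique constraint name is illustrative):

create table employee (
  id            number       not null,
  name          varchar2(50) not null,
  department    varchar2(30) not null,
  supervisor_id number       not null,
  constraint constraint_1 primary key (id, name),
  constraint employee_id_uk unique (id),
  constraint constraint_2 foreign key (supervisor_id) references employee (id)
);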
I have created a non-unique index lk_fein on lookup_fein(code, map_id, trash). When I check the explain plan, the query does a full table scan on lookup_fein. If I force it to use the index with a hint, it does use it, and the cost also decreases.
SQL> SELECT WORK_FEIN,
2 NON_FEIN ,
3 FI_FEIN ,
4 MFEIN ,
5 TOTAL_FEIN ,
[code]...
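Since the posted query is truncated, the following is only a sketch of forcing the index for comparison; the predicate is an assumption:

SELECT /*+ INDEX(lookup_fein lk_fein) */
       work_fein, non_fein, fi_fein, mfein, total_fein
FROM   lookup_fein
WHERE  code = :code;

If the optimizer still prefers the full scan without the hint, it is worth checking that statistics on the table and index are current and that the leading index column (code) is actually selective for the predicate used.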
We need to apply the 11.2.0.2 database server patch, and this is a 12i E-Business Suite environment. Do we need to take any precautions (other than stopping the apps) on the APPS side when we are applying the patch on the database?
I want to apply patch 13 to an 11.2.0.3 database on Windows Server 2008 R2. I checked the prerequisites and found that OPatch should be 11.2.0.1.9 or later; my OPatch is 11.2.0.1.7. I downloaded patch # 11846294 and unzipped it under the OPatch directory. One of the steps asks me to run the following command:
$ORACLE_HOME/OPatch/oplan/oplan generateApplySteps <bundle patch location>
but then it says oplan is not recognized. I have set the ORACLE_HOME and PATH variables.
My PATH is C:\Oracle\grid_home\bin;C:\Oracle\dbhome_1\bin;C:\Oracle\grid\bin;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0;C:\Program Files\ArcGIS\ArcSDE\ora11gexe\bin;C:\Program Files\XIV\host_attach\bin;C:\Oracle\dbhome_1\oplan
Note that my database version is 11.2.0.3 with ASM
DECLARE @MainTable TABLE (UniqueID INTEGER, Category VARCHAR(200), WeekDate DATETIME, VALUE INTEGER)
INSERT INTO @MainTable VALUES(123, 'Shirts', '10/07/2011', 5000)
INSERT INTO @MainTable VALUES(123, 'Shirts', '10/14/2011', 8000)
INSERT INTO @MainTable VALUES(124, 'Pants', '10/07/2011', 4000)
INSERT INTO @MainTable VALUES(125, 'Shorts', '10/14/2011', 8000)
INSERT INTO @MainTable VALUES(126, 'Shoes', '10/21/2011', 9000);
--select * from @MainTable;
[code]...
The query works with all the CTEs up to the last SELECT statement. Our Oracle version does not support the OUTER APPLY clause; how should the last piece be written to make it work in Oracle?
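Since the posted query is cut off, the following is only a hedged sketch with illustrative names based on the @MainTable columns. Oracle 12c and later accept OUTER APPLY (and LATERAL) directly; on earlier releases, an OUTER APPLY that returns, say, the latest related row per key can usually be rewritten as a left outer join to a ranked inline view:

SELECT m.uniqueid, m.category, x.value AS latest_value
FROM   main_table m
LEFT OUTER JOIN (
         SELECT t.uniqueid, t.value,
                ROW_NUMBER() OVER (PARTITION BY t.uniqueid
                                   ORDER BY t.weekdate DESC) AS rn
         FROM   main_table t
       ) x
  ON   x.uniqueid = m.uniqueid AND x.rn = 1;

A correlated scalar subquery in the SELECT list is another common substitute when the APPLY returns a single column.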
I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces with different block sizes.
So far the only option I have is to export the data and then import it back.
Is there any other way to move a partitioned table between tablespaces of different block sizes?
We deleted millions of records from a table.
1. Is it necessary to reorganize the table and indexes after deleting the records? I ask because I see some change in table size after reorganizing the table and indexes.
2. Will reorganizing the table and indexes improve database performance?
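A minimal sketch of a typical reorganization after a mass delete (object names are illustrative); note that the freed space below the old high-water mark would otherwise simply be reused by future inserts, so a reorg mainly helps full scans and space reporting:

alter table my_table move;
alter index my_table_pk rebuild;   -- indexes are left unusable by the move and must be rebuilt
exec dbms_stats.gather_table_stats(user, 'MY_TABLE', cascade => TRUE)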
There is a huge table that I want to compress. How can I do it? I ran: alter table tb_my_table compress; After executing the SQL, the size of the table did not change at all.
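ALTER TABLE ... COMPRESS only changes the table's compression attribute so that future direct-path loads are compressed; rows already in the table are left as they are, which is why the size does not change. To compress the existing data, the segment has to be rebuilt, for example:

alter table tb_my_table move compress;
-- alter index <index_name> rebuild;   -- repeat for each index left unusable by the move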
My table cannot shrink. Why?
SQL> Alter Table tb_hxl_user Shrink Space Cascade;
Alter Table tb_hxl_user Shrink Space Cascade
*
ERROR at line 1:
ORA-10635: Invalid segment or tablespace type
SQL> desc tb_hxl_user;
Name Null? Type
----------------------------------------- -------- ----------------------------
STATEDATE NOT NULL DATE
USERNUMBER NOT NULL VARCHAR2(13)
PROVCODE NOT NULL NUMBER
[code]...
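ORA-10635 usually means the segment is not in a tablespace that uses automatic segment space management (ASSM), which segment shrink requires; shrink also needs row movement enabled on the table. A sketch of the usual checks:

select tablespace_name, segment_space_management
from   dba_tablespaces
where  tablespace_name = (select tablespace_name
                          from   user_tables
                          where  table_name = 'TB_HXL_USER');

-- if SEGMENT_SPACE_MANAGEMENT is AUTO, enable row movement and retry
alter table tb_hxl_user enable row movement;
alter table tb_hxl_user shrink space cascade;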
I am trying to understand the concept of composite and candidate keys. I have gone through all the definitions but did not understand how to actually create a candidate key. For example:
composite key
create table test(id number,pincode number(4),name varchar2(15),primary key(id,pincode));
The table is created with a composite primary key because I used the pair of id and pincode.
Based on the same example, I want to create a candidate key, with the syntax to do so.
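A candidate key is any minimal set of columns that could have served as the primary key; the candidates that are not chosen as the primary key are normally declared as UNIQUE (and usually NOT NULL) constraints. A sketch on the same example, assuming for illustration that name on its own also identifies a row uniquely:

create table test (
  id      number,
  pincode number(4),
  name    varchar2(15) not null,
  primary key (id, pincode),
  constraint test_name_uk unique (name)   -- the other candidate key
);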
I am trying to understand composite primary keys. I know what a composite key is and how it stores data, but I cannot work out when I should choose multiple columns to form a composite key.
We are running 11g (11.2.0.3). We have a "working table" that is empty at the beginning of the day. Then we start adding (inserting) rows with a key column called STATE set to 100. At the same time, other apps pick up data in state 100, process it, and change the state to 200 or 300. There is also another app that picks up data in state 200, processes it, and changes the state to 300 or 400.
So in summary, the data on that table is at the beginning empty, then all the rows are in state 100, they slowly move to different states (200, 300, etc) and by the end of the day, they are all in 400.
My question is what would be the best way to collect stats on this table?
I was thinking of creating an hourly job to collect stats on that table:
exec dbms_stats.gather_table_stats (
ownname => 'SCOTT',
tabname => 'WORK_T
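The posted snippet is cut off, so the following is only a hedged sketch of one way to schedule an hourly gather with DBMS_SCHEDULER (schema and table names are placeholders):

begin
  dbms_scheduler.create_job(
    job_name        => 'GATHER_WORK_TABLE_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin
                            dbms_stats.gather_table_stats(
                              ownname => 'SCOTT',
                              tabname => 'WORK_TABLE');
                          end;]',
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
end;
/

Because the row distribution swings from empty to all-100 to all-400 during the day, another option sometimes used is to gather representative statistics once and lock them with DBMS_STATS.LOCK_TABLE_STATS, so plans do not flip as the data moves through the states.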
I have a table: desc STG_XML
Name Null Type
------------------------------ -------- ------------------------
ENTITY_ID NOT NULL VARCHAR2(100 CHAR)
ENTITY_TYPE_ID NOT NULL NUMBER
SOURCE_ID NOT NULL VARCHAR2(512 CHAR)
XML_SCHEMA_ID NOT NULL NUMBER
JOB_ID NOT NULL NUMBER
FINGERPRINT NOT NULL VARCHAR2(100 CHAR)
ENTITY_XML_DATA CLOB()
ARCHIVED NUMBER(1)
CREATION_DATE TIMESTAMP(6)
MODIFICATION_DATE TIMESTAMP(6)
ARCHIVING_DATE TIMESTAMP(6)
CREATED_BY VARCHAR2(50 CHAR)
MODIFIED_BY VARCHAR2(50 CHAR)
The problem is that the actual data in the table is about 40 GB, while in the database the table occupies 400 GB! How can I shrink and reuse that space, other than dropping and recreating the table, or dropping and re-importing it?
The table has no initial data, so I can play with the INITIAL parameter. Data are inserted, updated and deleted all the time. I have run DBMS_ADVISOR, which recommended a SHRINK of the table. I have performed the shrink:
alter table STG_XML shrink space COMPACT;
but I haven't gained any space.
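SHRINK SPACE COMPACT deliberately stops after compacting the rows: it does not move the high-water mark or release space back, so no change is visible in DBA_SEGMENTS. To actually free the space, the full shrink has to run, and with a large CLOB column the LOB segment usually needs its own shrink as well. A sketch (the LOB shrink applies to BASICFILE LOBs; SECUREFILE LOBs do not support it):

alter table STG_XML enable row movement;
alter table STG_XML shrink space;                                 -- moves the HWM and releases space
alter table STG_XML modify lob (ENTITY_XML_DATA) (shrink space);  -- shrink the CLOB segment separately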
How can I check whether a partitioned table has a DEFAULT partition?
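For a list-partitioned table, the DEFAULT partition appears with HIGH_VALUE = 'DEFAULT' in the partition views. HIGH_VALUE is a LONG column and cannot be filtered directly in SQL, so a small PL/SQL sketch is one way around it (the table name is illustrative):

declare
  cursor c is
    select partition_name, high_value
    from   user_tab_partitions
    where  table_name = 'MY_PART_TABLE';
begin
  for r in c loop
    if r.high_value = 'DEFAULT' then   -- LONG is read as VARCHAR2 inside PL/SQL
      dbms_output.put_line('DEFAULT partition: ' || r.partition_name);
    end if;
  end loop;
end;
/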
I want to rename a table that has partitions:
alter table
testora.oldtablename
rename to
testora.newtablename;
ORA-14048: a partition maintenance operation may not be combined with other operations
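ORA-14048 shows up here because the new name is schema-qualified; RENAME TO accepts only an unqualified name, and the table stays in its current schema anyway. A sketch of the working form:

alter table testora.oldtablename rename to newtablename;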
How can I find out whether a table has been updated, and when it was updated?
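Two common approaches, sketched with illustrative names: DBA_TAB_MODIFICATIONS records DML counts since statistics were last gathered (flush the monitoring info first), and ORA_ROWSCN can give an approximate time of the most recent change, while the SCN-to-timestamp mapping is still retained:

exec dbms_stats.flush_database_monitoring_info

select table_owner, table_name, inserts, updates, deletes, timestamp
from   dba_tab_modifications
where  table_owner = 'SCOTT' and table_name = 'EMP';

select scn_to_timestamp(max(ora_rowscn)) from scott.emp;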
I am looking for the DDL used to create a table that is partitioned by day and then rolled up to a month using the interval partitioning technique.
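A hedged sketch of daily interval partitioning (table and column names are illustrative); the monthly roll-up is normally a separate housekeeping step, for example merging a month's worth of daily partitions once they go cold:

create table sales_by_day (
  sale_date date not null,
  amount    number
)
partition by range (sale_date)
interval (numtodsinterval(1, 'DAY'))   -- one partition per day, created automatically
(
  partition p_initial values less than (date '2012-01-01')
);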
One of our Solaris machines is running Oracle 8.0.3.
A table reached 2 GB in size and Oracle failed due to the operating system's file size limitation.
The information in the table is not relevant and can be deleted, but the table contains a lot of indexes.
I would like to know the best procedure to delete the information and reduce the size of the file.
I want to get the stale statistics for tables residing in the APPS schema. Is there any table or view that provides these details, something like a DBA_STALE_STATS? Currently I am checking the LAST_ANALYZED column in DBA_TABLES.
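DBA_TAB_STATISTICS already exposes a STALE_STATS column; flushing the monitoring info first makes it reflect recent DML. A sketch:

exec dbms_stats.flush_database_monitoring_info

select owner, table_name, stale_stats, last_analyzed
from   dba_tab_statistics
where  owner = 'APPS'
and    stale_stats = 'YES';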
Is it possible to track the last access to a table?
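There is no built-in "last accessed" timestamp for a table, so tracking normally has to be switched on going forward, for example with standard object auditing (this assumes the AUDIT_TRAIL parameter is enabled; object and schema names are illustrative):

audit select, insert, update, delete on scott.emp by access;

select username, action_name, timestamp
from   dba_audit_object
where  owner = 'SCOTT' and obj_name = 'EMP'
order by timestamp desc;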
I have been asked to investigate how a table ended up owned by a different user than the one who created it:
SQL> conn test/test
Connected.
SQL> create table guri(name varchar2(9));
Table created.
SQL> select table_name from user_tables;
no rows selected
SQL> conn /as sysdba
Connected.
SQL> select owner,object_name,object_type from dba_objects where object_name = 'GURI';
OWNER OBJECT_NAME OBJECT_TYPE
---------- ----------- ------------
CISREPLICA GURI TABLE
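One common explanation, offered here only as a hedged guess, is that the session's current schema was changed (for example by a logon trigger issuing ALTER SESSION SET CURRENT_SCHEMA), so the unqualified CREATE TABLE landed in the other schema. This can be checked from the affected session:

select sys_context('USERENV', 'SESSION_USER')   as session_user,
       sys_context('USERENV', 'CURRENT_SCHEMA') as current_schema
from   dual;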
Why do we gather table statistics manually? Is it because of database performance?
I know that in Oracle Database 10g, Automatic Optimizer Statistics Collection reduces the likelihood of poorly performing SQL statements due to stale or invalid statistics and enhances SQL execution performance by providing optimal input to the query optimizer.
The automatic job considers a table's statistics stale once roughly 10% of its rows have changed.
1) I have 2 SWP tables. While dropping a column, I am getting this error:
ORA-39726: unsupported add/drop column operation on compressed tables.
2) When I checked the compression status, these tables were not compressed. But as per our coding standard, SWP tables should not be in compressed mode.
OWNER TABLE_NAME COMPRESS COMPRESS_FOR
------------------------------ ------------------------------ -------- ------------
NOVAR PAYMENT_SWP DISABLED
OWNER TABLE_NAME COMPRESS COMPRESS_FOR
------------------------------ ------------------------------ -------- ------------
NOVAR PREPAYMENT_SWP DISABLED
3) As a workaround, I compressed these 2 SWP tables with the OLTP option, and then I was able to drop the column from both of them.
4) Is the statement below correct or not?
If a table is using block-level compression, then this error will occur: ORA-39726: unsupported add/drop column operation on compressed tables.
If the above statement is correct, how do I find out whether the table data is using block-level compression?
5) We have DBMS_COMPRESSION.GET_COMPRESSION_TYPE. Using this I tried to find out, but I am getting "1" as the output and I do not know the exact meaning of it.
Can you confirm what the conclusion on this is?
SQL> declare
rid rowid;
n number;
begin
select max(rowid) into rid from NOVAR.PAYMENT_SWP;
n := dbms_compression.get_compression_type('NOVAR','PAYMENT_SWP',rid);
dbms_output.put_line(n);
end;
/
2 3 4 5 6 7 8 9 1
PL/SQL procedure successfully completed.
SQL>
SQL> SET SERVEROUTPUT ON
SQL> /
1
PL/SQL procedure successfully completed.
SQL> SELECT max(rowid) from NOVAR.PAYMENT_SWP;
MAX(ROWID)
------------------
AAsz4fAHSAAAD3IABs
(ii) 2nd table
SQL> set serveroutput on
SQL> declare
rid rowid;
n number;
begin
select max(rowid) into rid from NOVAR.PREPAYMENT_SWP;
n := dbms_compression.get_compression_type('NOVAR','PREPAYMENT_SWP',rid);
dbms_output.put_line(n);
end;
2 3 4 5 6 7 8 9
10 /
1
PL/SQL procedure successfully completed.
SQL> SELECT max(rowid) from NOVAR.INVOICELINE_SWP;
MAX(ROWID)
------------------
AAsz4ZAEkAAAp8XAAA
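For reference, GET_COMPRESSION_TYPE returns the DBMS_COMPRESSION constants, and in 11.2 a return value of 1 corresponds to COMP_NOCOMPRESS, i.e. the sampled row is not compressed. A small sketch that prints the constant name instead of the raw number; note that checking a single row via MAX(ROWID) only tells you about that one row, not the whole table:

set serveroutput on
declare
  rid rowid;
  n   number;
begin
  select max(rowid) into rid from novar.payment_swp;
  n := dbms_compression.get_compression_type('NOVAR', 'PAYMENT_SWP', rid);
  dbms_output.put_line(
    case n
      when dbms_compression.comp_nocompress       then 'COMP_NOCOMPRESS (not compressed)'
      when dbms_compression.comp_for_oltp         then 'COMP_FOR_OLTP'
      when dbms_compression.comp_for_query_high   then 'COMP_FOR_QUERY_HIGH'
      when dbms_compression.comp_for_query_low    then 'COMP_FOR_QUERY_LOW'
      when dbms_compression.comp_for_archive_high then 'COMP_FOR_ARCHIVE_HIGH'
      when dbms_compression.comp_for_archive_low  then 'COMP_FOR_ARCHIVE_LOW'
      else 'unknown: ' || n
    end);
end;
/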
ops$tkyte@DEV8I.WORLD> select blocks, empty_blocks,
2 avg_space, num_freelist_blocks
3 from user_tables
4 where table_name = 'T'
5 /
BLOCKS EMPTY_BLOCKS AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
19 35 2810 3
Ok, the above shows us:
- we have 55 blocks allocated to the table (still)
- 35 blocks are totally empty (above the HWM)
- 19 blocks contain data (the remaining block is used by the system)
- we have an average of about 2.8k free on each block used.
Therefore, our table
- consumes 19 blocks of storage in total.
- of which 19 blocks * 8k blocksize - 19 blocks * 2.8k free = about 98k is used for our data.
I am not too sure this calculation is accurate for getting the size (of the data) of the table.
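Two rough cross-checks that are often used instead, both only estimates (they assume current statistics and, in the first query, an 8k block size):

select num_rows * avg_row_len / 1024 as est_data_kb,
       blocks * 8                    as used_blocks_kb
from   user_tables
where  table_name = 'T';

select bytes / 1024 as allocated_kb   -- allocated, not just used
from   user_segments
where  segment_name = 'T';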
I have a partitioned table that has close to 2 billion rows and a PK over all of its columns. Because of time constraints, my APP team wants the PK disabled while they pump in hundreds of thousands of rows with a batch process.
Now I am finding that when I enable the PK, it eats up close to 200 GB of temp space.
Is there something I can do to reduce the amount of temp space being used?
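One approach that is often used, sketched here with illustrative names: build the supporting unique index yourself (parallel, nologging) and then enable the constraint against that index, so the enable step does not have to do another huge sort in temp; the sort still happens during index creation, but at a time and with a degree of parallelism you control.

create unique index big_t_pk_ix on big_t (col1, col2, col3)
  parallel 8 nologging;

alter table big_t enable constraint big_t_pk using index big_t_pk_ix;

Enabling the constraint NOVALIDATE is another option sometimes considered, but it skips checking the existing rows and has its own requirements, so it is not a drop-in substitute.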
I want to drop a column in a huge table that contains about 420,000,000 rows. I used the ALTER TABLE ... DROP COLUMN command and found that it takes a long time and generates a huge amount of redo.
Is there any quick way to drop a column in a huge table?
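The usual alternative, sketched below with illustrative names, is to mark the column unused first (a quick, dictionary-only change) and physically remove it later during a maintenance window, using CHECKPOINT so undo is released as it goes:

alter table big_table set unused column old_col;

-- later, in a maintenance window:
alter table big_table drop unused columns checkpoint 50000;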
I have a table with table_ID, date_created, user_id.
I have a sequence and a BEFORE INSERT trigger that uses the sequence to populate table_ID every time the web page is saved.
Questions:
(1) If date_created is defaulted to SYSDATE in the table definition, do I need to set it in the trigger as well?
-- Update create_date field to current system date
:NEW.DATE_CREATED := sysdate;
(2) How can I insert the user_id into the table each time the user saves the page? The web page procedure gets the user info at the beginning. Can I add it in the trigger?
DECLARE
v_username varchar2(10);
BEGIN
[Code]....
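The posted code is truncated, so the following is only a hedged sketch with hypothetical names. If DATE_CREATED has DEFAULT SYSDATE in the table definition, the trigger does not need to set it; the default is applied whenever the insert omits the column, and setting it in the trigger only prevents callers from ever supplying their own value. For the user id, the page procedure can put the logged-in user into a package variable (or an application context) that the trigger then reads:

create or replace trigger trg_mytab_bi
before insert on mytab                    -- table, sequence and package names are illustrative
for each row
begin
  if :new.table_id is null then
    :new.table_id := my_seq.nextval;
  end if;

  if :new.date_created is null then       -- optional: DEFAULT SYSDATE already covers this
    :new.date_created := sysdate;
  end if;

  if :new.user_id is null then
    :new.user_id := app_session_pkg.get_user_id;   -- hypothetical package populated by the page procedure
  end if;
end;
/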