SQL & PL/SQL :: Large Tables / Remove Unnecessary Columns
Mar 14, 2011
I have a large table with 450 columns and we are only using nearly 170 of them, and our DB block size is 8k. The DBA informed us that there is row chaining happening in the database. My question is: if we only have data in 170 columns, why is row chaining happening?
The DBA advised us to remove the unnecessary columns. Do those empty columns have any impact on the chaining? If we increase the DB block size to 32k, will that resolve the issue?
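In the meantime, this is how I have been checking for chained rows (a sketch; it assumes the table has been analyzed so that CHAIN_CNT is populated, and MY_WIDE_TABLE is a placeholder name). I have also read that Oracle stores any row with more than 255 columns in multiple row pieces, so a 450-column table chains by design:

-- Populate CHAIN_CNT, then report chained rows for the table
ANALYZE TABLE my_wide_table COMPUTE STATISTICS;

SELECT table_name, num_rows, chain_cnt
FROM   user_tables
WHERE  table_name = 'MY_WIDE_TABLE';

-- Instance-wide indicator: fetches that had to follow a chained/migrated row
SELECT name, value
FROM   v$sysstat
WHERE  name = 'table fetch continued row';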
I have to clean up data from our tables (production environment) that contain millions of rows. The question is: apart from the solution of partitioned tables, what alternative solution does Oracle recommend?
Should I delete from these tables using a cursor in a PL/SQL block, or export the whole database and, for the tables from which we want to remove the old rows, use the QUERY option of the Data Pump utility?
I have used both ways and I have to admit that the Data Pump solution is much, much faster than the deletion, which suffers from disk I/O. The question again is which of these two methods is more reliable and less risky for the health of the database.
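For reference, this is roughly how I used the QUERY option (a sketch; the directory object, dump file, table name and predicate are placeholders, and in practice I put QUERY in a parfile to avoid shell quoting problems):

# Export only the rows we want to keep
expdp scott/tiger DIRECTORY=dp_dir DUMPFILE=orders_keep.dmp TABLES=orders QUERY=orders:\"WHERE order_date >= DATE '2010-01-01'\"

# Replace the table with just the kept rows
impdp scott/tiger DIRECTORY=dp_dir DUMPFILE=orders_keep.dmp TABLES=orders TABLE_EXISTS_ACTION=REPLACE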
As you can see, I removed the first four columns because the eventkey is the same. In this case, only the applicant is different, so the rest should be blank.
Is it possible to upload very large files into Oracle tables? For example, a 1-2 gigabyte video file or even more. In other words, is it possible to use Oracle as a file server to upload very large files and store them?
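What I was picturing is something like this (only a sketch; the directory object, file name and table are made up). A BLOB column can certainly hold files well beyond 2 GB:

CREATE TABLE video_store (
  id       NUMBER PRIMARY KEY,
  filename VARCHAR2(200),
  content  BLOB
);

CREATE OR REPLACE DIRECTORY media_dir AS '/u01/media';

DECLARE
  l_bfile BFILE := BFILENAME('MEDIA_DIR', 'movie.avi');
  l_blob  BLOB;
BEGIN
  -- Insert an empty LOB locator, then load the OS file into it
  INSERT INTO video_store (id, filename, content)
  VALUES (1, 'movie.avi', EMPTY_BLOB())
  RETURNING content INTO l_blob;

  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(l_blob, l_bfile, DBMS_LOB.GETLENGTH(l_bfile));
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/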
I have a report with a single row having a large number of columns. I have to use a scroll bar to see all the columns. Is it possible to design the report in the below format (half the columns on one side of the page, half on the other side of the page):
Column1  Data    Column11  Data
Column2  Data    Column12  Data
Column3  Data    Column13  Data
Column4  Data    Column14  Data
Column5  Data    Column15  Data
Column6  Data    Column16  Data
Column7  Data    Column17  Data
Column8  Data    Column18  Data
Column9  Data    Column19  Data
Column10 Data    Column20  Data

I am using APEX version 4.2.3 on Oracle 11g XE.
11.2.0.3. This is for a build. We are still in development, so there is no risk of data loss. As part of the build, I drop the user, re-create it, and re-create the objects. That allows us to test the build all the way through; it's our process. This user has some tables with several thousand partitions. I ran a 10046 trace and Oracle is using PL/SQL loops to do DML against the data dictionary. Is there any way to speed this up? I am going to turn off the recyclebin during the build and turn it back on afterwards; is there anything else I can do? Right now I just issue 'drop user cascade'. Part of it is the weak hardware we have in the development environment. It takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this) and we do fairly frequent builds. I can't change the build process; my only option is to try to make this run a little faster.
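One thing I was going to try before the drop (a sketch, untested; BUILD_USER is a placeholder for our schema name) is dropping the big partitioned tables individually with PURGE first, so 'drop user cascade' has less dictionary work left to do:

BEGIN
  FOR t IN (SELECT table_name
            FROM   dba_tables
            WHERE  owner = 'BUILD_USER'
            AND    partitioned = 'YES') LOOP
    EXECUTE IMMEDIATE 'DROP TABLE BUILD_USER.' || t.table_name || ' PURGE';
  END LOOP;
END;
/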
Create small function-based indexes for special cases in very large tables.
When there is a column having one value in 99% of the records and another value that has to be searched for, it is possible to create an index that maps the common value to NULL. Since entirely-NULL keys are not stored in a b-tree index, the index stays small and rebuilds are fast.
Example
create index vh_tst_decode_ind_if1 on vh_tst_decode_ind (decode(S,'I','I',null),style)
It is possible to make the index more selective when the key is updated and there are enough matching records to create more levels in the b-tree:
create index vh_tst_decode_ind_if3 on vh_tst_decode_ind (decode(S,'I','I',null), decode(S,'I',style,null) )
The records can then be accessed like this:
SQL> select --+ index(vh_tst_decode_ind_if3)
            style, count(*)
     from   vh_tst_decode_ind
     where  decode(S,'I','I',null) = 'I'
     group  by style;
I have to compare 2 tables on a column's value. The 2 tables have the same column, Regn_no. I just need a query that returns TRUE or FALSE depending on whether a particular value of Regn_no is found in both tables.
Temporarily I am using a somewhat different, and not very good, query, which is as follows:
select count(*) from tab1 where regn_no in (select regn_no from tab2) and regn_no = 'UP78AN7890';
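Something like this is closer to what I actually want, returning the literal strings TRUE or FALSE (a sketch, same tables as above):

SELECT CASE
         WHEN EXISTS (SELECT 1 FROM tab1 WHERE regn_no = 'UP78AN7890')
          AND EXISTS (SELECT 1 FROM tab2 WHERE regn_no = 'UP78AN7890')
         THEN 'TRUE'
         ELSE 'FALSE'
       END AS found_in_both
FROM   dual;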
sample1
A  B  C  D  E  F
--------------------------
1  1  1  1  1  1
2  2  2  2  2  2
3  3  3  3  3  3
4  4  4  4  4  4
5  5  5  5  5  5

sample2
G  H  I  J
----------------
1  1  1  1
2  2  2  2
3  3  3  3
4  4  4  4
5  5  5  5
Consider the above tables. What I want is something like this:

output
G H I J A
-----------------

This can also be done through a select statement by listing the columns you want, but I don't want that. What I want is all the columns of the entire table sample2 and only one column of sample1.
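Something along these lines is what I am after (a sketch; the join condition is only an assumption, since the tables share no named key):

SELECT s2.*, s1.a
FROM   sample2 s2, sample1 s1
WHERE  s2.g = s1.a;   -- example join; replace with the real relationship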
Desc:

Table A (account): account1_id, account2_id, name, empid
Table B (bill): billno, advertiserid, agencyid, total_value
I need to pick up the total value from table B where the unique advertiser-agency id combination is the same as the given account1_id-account2_id combination in table A, for each employee id.
In other words, my output should be like:
Empid | Account1_id (should be same as advertiserid) | Account2_id (same as agencyid) | SUM(total_value) for this advertiser-agency combination
Objective: get the total value from table B for each unique account1-account2 combination (advertiser-agency, in other words).
I am not sure if I should use a correlated subquery or how to handle the situation. Right now I am just checking the two columns separately, like this:
select ....... from a, b where b.advertiser_id = a.account1_id and b.agencyid = a.account2_id
Is it correct to do so? I have a feeling that I am missing something if I join them separately like this.
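What I have in mind overall is something like this (a sketch; the column names are guesses from the descriptions above):

SELECT a.empid,
       a.account1_id,       -- matches b.advertiser_id
       a.account2_id,       -- matches b.agencyid
       SUM(b.total_value) AS total_value
FROM   a, b
WHERE  b.advertiser_id = a.account1_id
AND    b.agencyid      = a.account2_id
GROUP  BY a.empid, a.account1_id, a.account2_id;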
I am trying to get information about the columns of tables in Oracle databases. In MySQL, SHOW COLUMNS provides a good deal of information, but in Oracle we do not have an exact equivalent: DESCRIBE shows only the column names, data types, and NOT NULL flags.
It does not show whether a column has a unique constraint or an index.
I created a column in a table with a unique constraint (but not a primary key), but it was not listed with CONSTRAINT_TYPE = 'U' in the dba_constraints view; it was listed with CONSTRAINT_TYPE = 'C' (check constraint).
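The closest I have come is piecing it together from the dictionary views (MY_TABLE is a placeholder):

-- Constraints and their columns for one table
SELECT c.constraint_name, c.constraint_type, cc.column_name
FROM   user_constraints c, user_cons_columns cc
WHERE  cc.constraint_name = c.constraint_name
AND    c.table_name = 'MY_TABLE';

-- Indexes (including unique ones) and their columns
SELECT i.index_name, i.uniqueness, ic.column_name, ic.column_position
FROM   user_indexes i, user_ind_columns ic
WHERE  ic.index_name = i.index_name
AND    i.table_name = 'MY_TABLE'
ORDER  BY i.index_name, ic.column_position;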
I've received a recent request wherein the requirement is to swap values between columns across multiple tables in a database. Following is a visual sample of what needs to be done.
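For a single pair of columns, the kind of swap I mean looks like this (a sketch with made-up names; Oracle evaluates the right-hand sides against the values the row had before the UPDATE, so no temporary column is needed):

UPDATE my_table
SET    col_a = col_b,
       col_b = col_a;

The question is how best to apply this consistently across multiple tables.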
I'm trying to select columns from different tables dynamically in a function. The parameters for the function will be a table name and column IDs, and the number of columns may vary. Is it possible to have a dynamic %ROWTYPE to store the cursor values in?
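As far as I know %ROWTYPE is fixed at compile time, so the fallback I am experimenting with is building the statement dynamically and returning a ref cursor (a sketch; the caller would then have to describe and fetch the columns, e.g. with DBMS_SQL):

CREATE OR REPLACE FUNCTION get_cols (
  p_table   IN VARCHAR2,
  p_columns IN VARCHAR2   -- comma-separated column list
) RETURN SYS_REFCURSOR
IS
  l_rc SYS_REFCURSOR;
BEGIN
  -- DBMS_ASSERT validates the table name; the column list
  -- should be validated the same way before being concatenated
  OPEN l_rc FOR
    'SELECT ' || p_columns ||
    '  FROM ' || DBMS_ASSERT.SQL_OBJECT_NAME(p_table);
  RETURN l_rc;
END;
/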
I am trying to update multiple columns in one table based on the results of another table. I have 3 tables, as follows:

HISTORY, SUMM_SNAP, ADM_CHOICE

My SQL code is loosely:

SELECT SUM(H.HIS1), SS.SNAP1, AC.ADM1
FROM   HISTORY H, SUMM_SNAP SS, ADM_CHOICE AC
WHERE  H.HIS2 = SS.SNAP2
AND    SS.SNAP3 = AC.ADM2
GROUP  BY SS.SNAP1, AC.ADM1
This works, and I am able to SUM the column as I need with the right numbers. I altered the SUMM_SNAP table and now I want this summarized column to be in the table. I tried using UPDATE, but there is no FROM clause to let me do the table join/group by:

UPDATE SUMM_SNAP
SET    SUMM_SNAP.SNAP3 = SUM(H.HIS1)
FROM   HISTORY H, SUMM_SNAP SS, ADM_CHOICE AC
WHERE  H.HIS2 = SS.SNAP2
AND    SS.SNAP3 = AC.ADM2
GROUP  BY SS.SNAP1, AC.ADM1
The above is obviously wrong, but I am just trying to show what I was thinking. What would be the best method to get the numbers from the SUM into a table?
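One shape that seems closer to what I need (a sketch; SNAPSUM stands in for whatever the new column is actually called):

-- Correlated UPDATE: each SUMM_SNAP row gets its own SUM from HISTORY
UPDATE SUMM_SNAP SS
SET    SS.SNAPSUM = (SELECT SUM(H.HIS1)
                     FROM   HISTORY H
                     WHERE  H.HIS2 = SS.SNAP2);

A MERGE with a pre-aggregated subquery would be the other common way to write it.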
I have two tables, TEST1 and TEST2. I want to update the DEPT_DSCR column of both tables TEST1 and TEST2 using SELECT ... FOR UPDATE and CURRENT OF, using a cursor.
I have code written as follows:
DECLARE
  v_mydept1 TEST1.DEPT_CD%TYPE;
  v_mydept2 TEST2.DEPT_CD%TYPE;
  CURSOR C1 IS
    SELECT TEST1.DEPT_CD, TEST2.DEPT_CD
    FROM   TEST1, TEST2
    WHERE  TEST1.DEPT_CD = TEST2.DEPT_CD
    AND    TEST1.DEPT_CD = 'AA'
    FOR UPDATE OF TEST1.DEPT_DSCR, TEST2.DEPT_DSCR;
[code].......
The above code, when run, says that it ran successfully, but it does not update the desired column (DEPT_DSCR).
It only works when we want to update single or multiple columns of the same table, i.e. by providing these columns after FOR UPDATE OF. I am not sure what the exact problem is when we want to update multiple columns of different tables.
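The fallback I am considering is to skip CURRENT OF, which can only point at one table of the join, and carry the ROWIDs instead (a sketch, untested):

DECLARE
  CURSOR c1 IS
    SELECT t1.ROWID AS rid1, t2.ROWID AS rid2
    FROM   test1 t1, test2 t2
    WHERE  t1.dept_cd = t2.dept_cd
    AND    t1.dept_cd = 'AA'
    FOR UPDATE OF t1.dept_dscr, t2.dept_dscr;
BEGIN
  FOR r IN c1 LOOP
    UPDATE test1 SET dept_dscr = 'New description' WHERE ROWID = r.rid1;
    UPDATE test2 SET dept_dscr = 'New description' WHERE ROWID = r.rid2;
  END LOOP;
  COMMIT;
END;
/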
As the title of this topic illustrates, I'm having trouble dumping relevant data from columns of different tables. I am using iSQL*Plus. I have three tables, appropriately related: a 'course' table, a 'student' table, and a 'next_of_kin' table. I have many students enrolled on various courses, but only a handful of courses offer the module option 'Database Systems'. I have no 'module' table, but I know the three course names which provide the module option. I intend to produce a report which lists all students enrolled on the courses which provide the module option 'Database Systems'.
I have attempted the report, but I keep getting a cartesian product displaying all next_of_kin names instead of the appropriate ones. I am also struggling to come up with the right WHERE clause to pick out only the three courses which provide the module option 'Database Systems', as identified by 'courseNo' in both the 'course' and 'student' tables.
Here is the most recent attempt:
--set echo off
--set pagesize 24
--set feedback off
--set linesize 78
col A format 99999999 heading 'Student No'
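The query I think I need is something like this (a sketch; the join columns and the three course names are assumptions based on my schema):

SELECT s.studentNo, s.name, n.name AS next_of_kin
FROM   student s, course c, next_of_kin n
WHERE  s.courseNo  = c.courseNo     -- join student to course
AND    n.studentNo = s.studentNo    -- join next_of_kin to its student
AND    c.courseName IN ('Course1', 'Course2', 'Course3');  -- the three courses offering 'Database Systems'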
I have two different databases. I created a DB link in DB 1 to connect to DB 2, and it is working fine when I select data from any table. But I have one table in DB 2 which has a column with a user-defined data type, so when I try to select this column from DB 2 by using the DB link it gives me this error: ORA-22804: remote operations not permitted on object tables or user-defined type columns.
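The workaround I am testing is to hide the type behind a view on the remote side that exposes only scalar columns (a sketch; the table, link and attribute names are made up):

-- In DB 2: flatten the object column into scalar columns
CREATE OR REPLACE VIEW customer_flat AS
SELECT c.id,
       c.name,
       c.addr.street AS street,   -- addr is the user-defined type column
       c.addr.city   AS city
FROM   customers c;

-- In DB 1: query the view over the link instead of the base table
SELECT * FROM customer_flat@db2_link;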
I want to create a report using one field and one text item as the column names in the layout, but displaying all the columns. I mention the 5 column names in the query. How can I write the function in a summary column?
I am running a fairly busy Oracle 10gR2 DB; one of the tables has about 120 columns and receives on average 1500 insertions per second. The table is partitioned, and the partitioning is based on the more important of the two timestamp columns. There are two timestamps; they hold different times.
Out of these 120 columns, about 15 need to be indexed. Two of the 15 are timestamps, and at least one of these two timestamp columns is always in the WHERE clause of the queries.
Now the challenge is that the queries we run can have any combination of the 13 other columns plus one timestamp. In reality the queries never have more than 7 or 8 columns in the WHERE clause, but even if we had only 4 columns in the WHERE clause we would still have the same problem.
So if I create one concatenated index for all these columns it will not be very efficient, because after the 4th or 5th column the ordering is no longer very useful and I believe the optimiser would simply not use the rest of the index. Queries that use the leading columns of the index in sequence work well, but if I need to query the 10th column then I have performance issues.
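To illustrate what I mean (made-up names):

CREATE INDEX big_tab_ix ON big_tab (ts1, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10);

-- Served well: predicates on the leading columns in sequence
SELECT * FROM big_tab WHERE ts1 = :t AND c1 = :v1 AND c2 = :v2;

-- Problem case: beyond ts1 the index is of little use,
-- because c10 sits at the far end of the key
SELECT * FROM big_tab WHERE ts1 = :t AND c10 = :v10;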
Now, if I create multiple single-column indexes, Oracle will have to work a lot harder to maintain all of them and that will create performance issues (I have tried it). Besides, with multiple single-column indexes the optimiser will do nested loops two or three times and will hit only the first few columns of the WHERE clause, so I think it will end up much the same as the long concatenated index.
What I am trying to do is exactly what a bitmap index would do; it would be very good if I could use the AND-combining that a bitmap index allows. This way I could have N single-column indexes which the optimiser could pick from and serve the query with exactly the ones it needs. But unfortunately, using bitmap indexes here is not an option, given the large number of inserts this table gets.
I have been looking for alternatives. I have considered creating multiple shorter concatenated indexes, but this still would not address the issue, since many queries would still not be served properly and would therefore take a very long time to complete.
What I had in mind would be some sort of multidimensional index; I am not even sure such a thing exists. But essentially it would be an index that could serve a query efficiently regardless of the fact that the WHERE clause has, say, the 1st, 3rd and last columns of the index.
So considering how widely used Oracle is and how many super large databases there are out there, this problem must be common.
1. When querying the "alert_log" table I created from the alert log using the script below, 2 new files were created: ALERT_LOG_30499.bad and ALERT_LOG_30499.log.
The ALERT_LOG_30499.log contains this error message:
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log
ORA-12899: value too large for column MSG (actual: 82, maximum: 80)
The ALERT_LOG_30499.bad file, so far, only contains datafile resize information. The datafiles have plenty of space and there is plenty of space on the SAN slice the datafiles reside on.
2. Each time I recreate the table and increase the VARCHAR2 size, the "actual" size in the log file also increases:
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log
ORA-12899: value too large for column MSG (actual: 92, maximum: 90)
3. When I increased the VARCHAR2 size to 120+, it gave me this:
[oracle@tds_dw bdump]$ cat ALERT_LOG_30715.log
LOG file opened at 03/09/11 14:46:20
Field Definitions for table ALERT_LOG
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:
MSG CHAR (255)
  Terminated by ","
  Trim whitespace same as SQL Loader
TABLE DDL:
create table alert_log (
  msg varchar2(80)
)
organization external (
  type oracle_loader
  default directory BDUMP
  access parameters (
    records delimited by newline
  )
  location ('alert_damistst.log')
)
reject limit 1000;
**** QUESTION: I can still query the alert_log table in SQL*Plus, but those log and bad files are generated. Is this an issue?
Example of a piece of the results from "select * from alert_log;":
MSG
--------------------------------------------------------------------------------
Thread 1 advanced to log sequence 5254 (LGWR switch)
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Thread 1 cannot allocate new log
Checkpoint not complete
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Wed Mar 9 14:33:09 2011
Thread 1 advanced to log sequence 5255 (LGWR switch)
Current log# 2 seq# 5255 mem# 0: /tds_oradata/redo02a.log
Current log# 2 seq# 5255 mem# 1: /u02/damistst/REDO_LOGS/redo02b.log
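The variant I am thinking of trying next (a sketch, untested) widens the column, declares the field explicitly so a whole line fits, and suppresses the .bad and .log files:

create table alert_log (
  msg varchar2(4000)
)
organization external (
  type oracle_loader
  default directory BDUMP
  access parameters (
    records delimited by newline
    nobadfile nologfile
    fields reject rows with all null fields
    ( msg char(4000) )   -- otherwise the field defaults to CHAR(255)
  )
  location ('alert_damistst.log')
)
reject limit unlimited;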
I keep getting "ORA-01401: inserted value too large for column". No biggie; I've dealt with this multiple times before (but obviously not enough in this instance).
The data being entered is a SINGLE-digit number, a number like 1, 2 or 3, nothing fancy, just a plain, straight, everyday single-digit number. The field in question was set as field type INTEGER. Now, there is no settable field size for integers, not in Oracle anyway. Since it wasn't happy, I decided I'd try field types of NUMBER and also VARCHAR2 set to 10 bytes. I have deleted the column from the table and re-created it as well.
Here's the even more puzzling bit: I can INSERT data into this field, BUT I cannot UPDATE the field with the exact same data. The data is being inserted from a CSV file. The same exact CSV file used for the insert works, but the same data in the same file will not update only that particular column.
If I delete the specific column data from the CSV file, all goes through fine. If I hard-code the update for the field (e.g. SET field2 = '1' or even SET field2 = ' ') it still doesn't work. So I know it is not the CSV file that is causing problems. I deleted all data from the CSV file except the field in question, and still no luck.
So after eliminating:
1. The field type
2. The field length
3. The data being inserted
4. The external source of the data