BLOB Based Schema - Performance Of Loading / Insertion
May 17, 2012
Would a BLOB-based schema load noticeably faster than a BINARY_DOUBLE-based schema?
Blob Scenario:
Load 1 row: 5 columns (1 integer column, 4 blob columns) of size X
VERSUS
Double Scenario:
Load 10,000 rows: 5 columns (1 integer column, 4 binary_double columns) of size X
While the benefit of the rows approach is obviously the ability to query the values, I'd like a quick answer concerning the loading/insertion performance. Associative array binding is used for loading from a .NET client. Also, would the answer hold true for 200 columns instead of just 5?
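For concreteness, the two scenarios might be declared roughly as follows (table and column names are illustrative, not from the original post):

-- Blob scenario: one wide row, each series packed into a BLOB
create table series_blob (
  series_id integer primary key,
  series_1  blob,
  series_2  blob,
  series_3  blob,
  series_4  blob
);

-- Double scenario: 10,000 narrow rows, one value per series per row
create table series_double (
  series_id integer,
  val_1     binary_double,
  val_2     binary_double,
  val_3     binary_double,
  val_4     binary_double
);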
We are frequently getting the error below from the application while inserting/loading data into a table. The error concerns the primary key index.
Error: 'ORA-01502: index 'INDEX_NAME' or partition of such index is in unusable state'.
I set SKIP_UNUSABLE_INDEXES = TRUE using the command ALTER SYSTEM SET SKIP_UNUSABLE_INDEXES = TRUE to avoid this, but we keep getting the same error, and every time I have to rebuild the index (ALTER INDEX INDEX_NAME REBUILD) before doing the DML operation.
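Note that SKIP_UNUSABLE_INDEXES does not suppress the error for unique indexes that enforce constraints, which is why the primary-key index keeps failing. A minimal sketch for finding and rebuilding unusable indexes before the load (INDEX_NAME and PARTITION_NAME stand in for the real names):

-- list unusable indexes and index partitions
select index_name, status
  from user_indexes
 where status = 'UNUSABLE';

select index_name, partition_name, status
  from user_ind_partitions
 where status = 'UNUSABLE';

-- rebuild a whole index, or a single partition of a partitioned index
alter index index_name rebuild;
alter index index_name rebuild partition partition_name;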
I need to insert data from a table in one schema into a table in another schema in the same database. The problem is that the column counts are not equal, so when I try an insert statement it throws "not enough values". The situation is explained below; the insert statement is run in the second schema, whose table is named b.
Table created.

SQL> ALTER TABLE B ADD
  2  CONSTRAINT B_PK1
  3  PRIMARY KEY
  4  (ID);

Table altered.

SQL> create sequence b_seg start with 1;

Sequence created.

SQL> insert into b select b_seg.nextval,lexcom.a.* from lexcom.a,dual;
insert into b select b_seg.nextval,lexcom.a.* from lexcom.a,dual
*
ERROR at line 1:
ORA-00947: not enough values
So for table b, the ID column needs to be populated from the sequence and the other columns need to be taken from table a. I understand the error is raised because the two tables do not have the same number of columns, which is why the insert statement fails.
I can write the statement manually by listing the columns of a and b, as follows, but this is a tedious process.
SQL> insert into b (ID, Name, rollno, address)
  2  select b_seg.nextval, lexcom.a.Name, lexcom.a.rollno, lexcom.a.address
  3  from lexcom.a, dual;
3 rows created.
But this is time-consuming, and I have tables with many columns to insert. Is there any other way to solve this and write the insert statement?
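One way to avoid typing the column list by hand is to generate the statement from the data dictionary. A sketch, assuming SELECT access on ALL_TAB_COLUMNS (LISTAGG needs 11g Release 2):

select 'insert into b (id, '
       || listagg(column_name, ', ') within group (order by column_id)
       || ') select b_seg.nextval, '
       || listagg('a.' || column_name, ', ') within group (order by column_id)
       || ' from lexcom.a a;' as generated_stmt
  from all_tab_columns
 where owner = 'LEXCOM'
   and table_name = 'A';

Running the generated text then performs the actual insert.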
We have an application with many separate databases (one per customer). Given that they share the same business requirements (service hours, change management, etc.), we are interested in consolidating the separate DBs (which are relatively small) into separate schemas within a smaller number of databases to reduce the overhead.
Our issue is that the application is hard-coded to use a specific administrator and application connection user name. Changing this is unfortunately not an option.
Given this limitation, is there any possibility of mapping a generic user onto a customer-specific schema based on the database service they connect to? Each customer connects to a different database service but may use the same user name. We considered private synonyms, but these seem to achieve the opposite (i.e. many different users could connect and map to a single user's schema). One thing to point out: where there is a single user name, it is acceptable for a single password to be used across the different customer DBs, as it will be a single admin/user.
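One possibility worth testing is a database logon trigger that switches the session's current schema according to the service name the generic user connected through; the user, service and schema names below are hypothetical:

create or replace trigger app_user_logon
after logon on database
begin
  if user = 'APP_USER' then  -- the hard-coded application account
    if sys_context('USERENV', 'SERVICE_NAME') = 'CUSTOMER_A_SVC' then
      execute immediate 'alter session set current_schema = CUSTOMER_A';
    elsif sys_context('USERENV', 'SERVICE_NAME') = 'CUSTOMER_B_SVC' then
      execute immediate 'alter session set current_schema = CUSTOMER_B';
    end if;
  end if;
end;
/

Note that ALTER SESSION SET CURRENT_SCHEMA only changes name resolution, not privileges, so the generic account still needs grants on each customer schema's objects.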
For example, we have a table ACCOUNT (a snowflake dimension containing other dimension keys), and I have many fact tables based on this dimension. Normally the data warehouse load runs in order: first the dimensions are loaded, then the facts. Our load frequency is 30 minutes.
To make data available in the facts sooner (it is a financial application), I am considering two batches, one for dimensions and one for facts. I came to this conclusion because there is no hard dependency requiring dimensions to load before facts; only the occasional update might be missed. But if I do that, a dimension being loaded will be read by the fact load in another session. Will this affect performance?
Loading (insert/update) and selecting data from a table at the same time: will it affect performance in any way?
I have 1M records coming from an external data source as a flat file (using ETL). I need to load only yesterday's data into my database table.
This can be done using a bulk load and a filter.
How would the code be written?
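A sketch of the bulk-load-and-filter idea, assuming the flat file is exposed as an external table EXT_CUSTOMER with a LOAD_DATE column (all names hypothetical):

-- load only the rows stamped with yesterday's date
insert /*+ append */ into stg_customer (customer_id, address1, load_date)
select customer_id, address1, load_date
  from ext_customer
 where load_date >= trunc(sysdate) - 1
   and load_date <  trunc(sysdate);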
Second part:
Hint: I need to update only those records that have been updated; say the Address1 field has changed. Those records need to be updated in my master Customer table.
If I have many fields in the table, and some records arriving from the external data source as a flat file have been modified, how do I identify those records and update them in my master Customer table?
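For the second part, a MERGE that compares incoming fields against the master row, and updates only when a value actually changed, is one common approach; names are hypothetical, and the DECODE trick treats two NULLs as equal:

merge into master_customer m
using ext_customer e
   on (m.customer_id = e.customer_id)
 when matched then update
      set m.address1 = e.address1
      where decode(m.address1, e.address1, 0, 1) = 1  -- only when changed
 when not matched then
      insert (customer_id, address1)
      values (e.customer_id, e.address1);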
I did a search on this topic and saw the Ask Tom response that storing all VARCHAR2 fields as (2000) or whatever is a bad idea, based on the array fetches developers may use, etc. However, I'm not sure that applies to my specific question, and the other examples he gave certainly didn't. So I'll pose the question a different way:
Question #1: Is there a performance difference between defining a field as VARCHAR2(2000) versus VARCHAR2(25) if I am just running a native SQL query using a front-end tool like TOAD?
Question #2: If I also need to index that field, will it take longer to index a VARCHAR2(2000) field than a VARCHAR2(25) field, assuming the same data is in both?
This table is used by a query where one of the conditions is AND STATUS <> 'C'.
Now the data is as follows:
select count(*) record_count, status from new_business group by status;
RECORD_COUNT STATUS
------------ ------
     4298025 C
          15 N
          13 Q
         122 S
I want to know whether either of the following indexes would be useful in this case, given that the condition in the WHERE clause is
"AND STATUS <> 'C'"
create index nb_index_1 on new_business
  (case when status in ('N','Q','S') then 1 else NULL end);

Or:

create index nb_index_1 on new_business
  (case when status = 'N' then 'N'
        when status = 'Q' then 'Q'
        when status = 'S' then 'S'
        else NULL end);
I tried it on a sample table, but the index is simply not picked up, even when hinted. The following are the DB-level settings.
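One likely reason the index is ignored, regardless of the settings: the optimizer only considers a function-based index when the query references the exact indexed expression, so the STATUS <> 'C' predicate would need rewriting to match. A sketch against the first index above:

-- the predicate must reference the indexed expression for the FBI to be usable
select *
  from new_business
 where (case when status in ('N','Q','S') then 1 else NULL end) = 1;

Gathering fresh statistics on the table after creating the index also helps the optimizer cost it correctly.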
I have a query (report) that runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified that an index scans more than 2,000 million rows in the second schema, but only 440 million in the first schema, which is why the first is fast. I expect the same behaviour in the second schema.
I have verified the following: all records in the tables in the two schemas are the same; all indexes are the same; I analyzed the tables and gathered histograms on all the columns, matching the first schema.
But I still have the same problem, and I don't know what the cause could be.
I have several databases that I've recently upgraded from 9i to 11g. In all of them, the automatic stats-gathering process has worked just fine every night during the maintenance window.
However, I have another database that I created, and it seems that the only stats being gathered are on the SYS and SYSTEM schemas, not on the actual schema that holds all of our tables.
I did some searching, but I'm not sure I was using the right search terms, because I came up empty.
BANNER
-----------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Solaris: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
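In 11g the nightly stats run is an autotask, so it is worth confirming the task is enabled and checking when the application tables were last analyzed; a sketch (replace YOUR_SCHEMA with the real schema name):

select client_name, status
  from dba_autotask_client
 where client_name = 'auto optimizer stats collection';

select table_name, last_analyzed
  from dba_tables
 where owner = 'YOUR_SCHEMA'
 order by last_analyzed nulls first;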
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01": mvanmannekes/******** schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
The issue is slow insertion into one particular table (table A): insertion into all other tables (B, C, D) in the same schema proceeds normally, but inserts into table A take a long time to complete. Daily insertion is 6,000 rows.
I have checked all the usual details: tablespace size, table analysis, index analysis, and so on. There is no error in the alert log file.
I have a simple insert statement that runs successfully in the Oracle database (SQL), but in an Oracle Forms trigger (WHEN-BUTTON-PRESSED) it is in this format:
Declare
  cnt number;
begin
  select count(*) into :control.cnt
  from ol_lcy_ndc
  where aan = :control.aan and event_id = 'ACL';

  if cnt = 0 then
    insert into ol_lcy_ndc (form_no, aan, regno, event_id, doev, status, edt, ludt, username)
    values (12345, 255257, 10030661, 'ACL', SYSDATE, 'DRAFT', SYSDATE, SYSDATE, ' ');
  else
    update ol_lcy_ndc
    set LUDT = to_date('09-09-2009', 'DD-MM-YYYY')
    where aan = :control.aan and event_id = 'ACL';
  end if;
end;
But after populating the count into cnt, it does not perform the insert or update from Oracle Forms, although both statements execute correctly in the database. My problem may be linked to some property in the property palette; to my knowledge I checked: Insert Allowed → Yes.
I have another problem, with a CLOB column: when I try to insert data into it through a stored procedure, I get the
error 'ORA-01460: unimplemented or unreasonable conversion requested'. This error arises specifically when the data to be inserted into the CLOB column has more than 4000 characters.
SQL> exec pi_test(1, 'sysdate', 'text of clob column');
ORA-01460: unimplemented or unreasonable conversion requested
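ORA-01460 here usually comes from passing more than 4000 characters where a SQL-side conversion to CLOB is attempted. Building the value in a PL/SQL variable and passing it as a CLOB often avoids the problem; a sketch, assuming pi_test takes (NUMBER, DATE, CLOB) — adjust to the real signature:

declare
  l_text clob;
begin
  -- assemble a value longer than 4000 characters inside PL/SQL
  for i in 1 .. 5 loop
    l_text := l_text || rpad('x', 2000, 'x');
  end loop;
  pi_test(1, sysdate, l_text);  -- bound as a CLOB, no 4000-character literal limit
end;
/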
In procedure "update_emp", I update a row based on p_empno and, if it is not present (i.e. SQL%ROWCOUNT = 0), I insert that row into the emp table.
Whereas in procedure "update_emp1", I first check whether any row with that p_empno is present; if present, I update the row, else I raise an exception and insert the row.
Both procedures do the same thing, but I am unable to work out which one is better and why.
create or replace procedure update_emp (p_empno int) is
begin
  update emp set ename = 'raj' where empno = p_empno;
  if sql%rowcount = 0 then
    insert into emp (empno, ename) values (p_empno, 'raj');
  end if;
end;
/
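For what it is worth, a third variant collapses the update-or-insert into a single statement with MERGE; a sketch (update_emp2 is a hypothetical name):

create or replace procedure update_emp2 (p_empno int) is
begin
  merge into emp e
  using (select p_empno as empno from dual) s
     on (e.empno = s.empno)
   when matched then update set e.ename = 'raj'
   when not matched then insert (empno, ename) values (s.empno, 'raj');
end;
/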
I am using Oracle Forms Builder 6i and Oracle Database 10g.
1: I have a table named 'info' with a column 'InfoId', among others, and another table named 'Handing' with columns HtId, Value1 and Value2.
2: I made a form consisting of three data blocks; the first block takes criteria, and the second block displays records matching that criteria from table 'info'.
3: I want checkboxes against the displayed records, so that when I select checkboxes against 'InfoId', the selected 'InfoIds'
are saved in the 'Handing' table's HtId column, with the Value1 and Value2 columns filled from textboxes in the third data block of the same form; a sketch follows.
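A sketch of what the button's trigger could do, looping over the displayed block and inserting the checked rows; all block and item names are hypothetical:

-- WHEN-BUTTON-PRESSED: walk the records and insert the checked ones
begin
  go_block('info_blk');
  first_record;
  loop
    if checkbox_checked('info_blk.chk_select') then
      insert into handing (htid, value1, value2)
      values (:info_blk.infoid, :entry_blk.value1, :entry_blk.value2);
    end if;
    exit when :system.last_record = 'TRUE';
    next_record;
  end loop;
  commit;  -- in Forms this also posts any pending form changes
end;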
Restrict a field after data entry, before commit: I mean, when the user inputs data in one field and moves to the next field, then returns to the previous field, it should no longer be editable, even before the commit.
I want to insert text longer than 4000 characters into a column whose datatype is CLOB. Even though a CLOB can store up to 4 GB, it is not allowing me to insert more than 4000 characters at a time. We can insert by splitting the data into 4000-character pieces and appending the remaining characters, but the text I receive is longer than 4000 characters, so how can I split the data into 4000-character chunks?
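A sketch of the splitting: insert an empty CLOB, then append the text in chunks of at most 4000 characters with DBMS_LOB.WRITEAPPEND (table and column names are hypothetical):

declare
  l_text varchar2(32767) := rpad('x', 12000, 'x');  -- stand-in for the incoming text
  l_clob clob;
  l_pos  pls_integer := 1;
begin
  insert into my_table (id, clob_col)
  values (1, empty_clob())
  returning clob_col into l_clob;

  while l_pos <= length(l_text) loop
    dbms_lob.writeappend(l_clob,
                         least(4000, length(l_text) - l_pos + 1),
                         substr(l_text, l_pos, 4000));
    l_pos := l_pos + 4000;
  end loop;
  commit;
end;
/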
How do I define user-defined exceptions for cases like these: when someone tries to insert string values without single quotation marks ('...'), or updates a column which is not present in the table?
How can I define user-defined exceptions for such cases?
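Both cases are really parse-time errors, so they can only be trapped when the statement is issued dynamically; the usual pattern is to map the Oracle error to a named exception with PRAGMA EXCEPTION_INIT. A sketch for the missing-column case (the statement is illustrative):

declare
  invalid_identifier exception;
  pragma exception_init(invalid_identifier, -904);  -- ORA-00904: invalid identifier
begin
  execute immediate 'update emp set no_such_column = 1';
exception
  when invalid_identifier then
    dbms_output.put_line('Tried to update a column that does not exist.');
end;
/

The unquoted-string case raises a different parse error (for example ORA-00984, column not allowed here), which can be mapped the same way.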
How can I avoid junk-character insertion into an Oracle table? I have prepared scripts containing text like, say:
customer - info
After insertion, the data appears like this in production:
Customer ¿ info
We use the command prompt for script execution in the production environment. I use PL/SQL Developer and SQL Developer for development. I cannot see the junk data in PL/SQL Developer or the latest SQL Developer, but it is caught in an old version of SQL Developer, and I can also see the junk data in the application.
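The ¿ replacement character usually indicates a character-set mismatch between the command-prompt client and the database. Comparing the database character set with the client's NLS_LANG is the first step; a sketch:

-- database side
select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

-- client side (Windows command prompt), before starting SQL*Plus;
-- the value must match the script file's actual encoding, e.g.:
--   set NLS_LANG=AMERICAN_AMERICA.AL32UTF8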
I have a requirement like this, and I don't know about triggers:
Create triggers for the following. Tables: TAB1, TAB2.
Create a trigger: on any insertion into TAB1, the record should be inserted into TAB2.
Create a trigger: on any update of TAB1, the record should be inserted into TAB2.
Create a trigger: on any deletion from TAB1, the record should be inserted into TAB2.
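A sketch covering all three requirements in one row-level trigger; the column names are hypothetical (assume TAB1 has COL1, and TAB2 mirrors it plus an ACTION column):

create or replace trigger tab1_to_tab2_trg
after insert or update or delete on tab1
for each row
begin
  if deleting then
    insert into tab2 (col1, action) values (:old.col1, 'DELETE');
  else
    insert into tab2 (col1, action)
    values (:new.col1, case when inserting then 'INSERT' else 'UPDATE' end);
  end if;
end;
/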
I am working in Forms 6i, EBS 11i. I have a multi-record data block, and I insert only the checked records using the logic below.
ON-INSERT Trigger:
if checkbox_checked('block.checkbox') then
  insert_record;
end if;
Requirement: let us say I have 4 records and I checked 2 of them and inserted them. Now, if I want to insert the other 2 unchecked records, the form does not accept them. Is it possible to insert records which were not checked after the first insertion?
A single master schema is accessed by many developers, all sharing the same password.
Now I would like to trace all the changes made by each user, so I am creating individual users for everyone and granting them permission to access that schema. Is it possible to audit the changes made by each user to that particular schema?
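Standard auditing covers this once each developer has an individual account; a sketch, assuming a table in the shared schema (MASTER_SCHEMA.ORDERS is hypothetical):

-- one-time setup; takes effect after a restart
alter system set audit_trail = db scope = spfile;

-- audit DML on the shared schema's objects (repeat per table)
audit insert, update, delete on master_schema.orders by access;

-- review who changed what
select username, action_name, obj_name, timestamp
  from dba_audit_trail
 order by timestamp;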
I would like to create a table in another schema (CBF) identical to one that already exists in my schema (TLC), without data, but including the related indexes, synonyms and grants.
How can I do this without using export/import? I am using TOAD 9.0.1.
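DBMS_METADATA can extract the table DDL plus its dependent indexes and grants as text, which can then be edited to reference CBF and run there; MY_TABLE is a placeholder:

-- base table DDL from the source schema
select dbms_metadata.get_ddl('TABLE', 'MY_TABLE', 'TLC') from dual;

-- dependent objects (these raise an error if none exist)
select dbms_metadata.get_dependent_ddl('INDEX',        'MY_TABLE', 'TLC') from dual;
select dbms_metadata.get_dependent_ddl('OBJECT_GRANT', 'MY_TABLE', 'TLC') from dual;

A plain CREATE TABLE cbf.my_table AS SELECT * FROM tlc.my_table WHERE 1 = 0 copies only the structure, which is why the DBMS_METADATA route is needed for the indexes and grants; synonyms would be re-created separately against the new schema.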