SQL & PL/SQL :: Data Types To Store Large Integer Values?
Aug 15, 2012
What could be an effective data type to store large integer values like 50,000 or 10,000,000?
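For values in that range, a plain NUMBER column is normally sufficient. A minimal sketch (the table and column names are my own, for illustration):

-- Sketch: NUMBER holds integers far larger than 10,000,000; an optional precision
-- caps the number of digits, and NUMBER(38) is the maximum integer precision.
CREATE TABLE amounts (
  small_val NUMBER(10),   -- up to 10 digits, e.g. 50,000 or 10,000,000
  big_val   NUMBER(38)    -- largest integer precision Oracle supports
);

If the values will always be whole numbers, NUMBER(p,0) or the ANSI INTEGER alias (which maps to NUMBER(38)) keeps the intent explicit.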
Since XML files contain only character data, we could/should store them in a CLOB rather than a BLOB.
However, a friend of mine has a table where a column is defined as BLOB, and it turns out XML data is being stored in it. I searched for articles with the keyword 'How to insert large XML data in BLOB' but found nothing useful. How do I store large XML content in a BLOB, and how do I extract it?
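One possible approach, sketched below with assumed table and column names (xml_docs, xml_data): build the XML as a CLOB in PL/SQL, convert it to a BLOB with DBMS_LOB.CONVERTTOBLOB, and insert that; DBMS_LOB.CONVERTTOCLOB reverses the conversion when the document has to be extracted again.

DECLARE
  v_xml_clob CLOB := '<root><item>example</item></root>';  -- assumed XML text
  v_xml_blob BLOB;
  v_dest_off INTEGER := 1;
  v_src_off  INTEGER := 1;
  v_lang_ctx INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  v_warning  INTEGER;
BEGIN
  DBMS_LOB.CREATETEMPORARY(v_xml_blob, TRUE);
  DBMS_LOB.CONVERTTOBLOB(v_xml_blob, v_xml_clob, DBMS_LOB.LOBMAXSIZE,
                         v_dest_off, v_src_off,
                         DBMS_LOB.DEFAULT_CSID, v_lang_ctx, v_warning);

  INSERT INTO xml_docs (id, xml_data)   -- table and column names are assumptions
  VALUES (1, v_xml_blob);
  COMMIT;
END;
/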
How do I store a large string in Oracle?
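For strings beyond the VARCHAR2 limit, a CLOB column is the usual choice. A minimal sketch with an assumed table name:

CREATE TABLE notes (
  note_id   NUMBER PRIMARY KEY,
  note_text CLOB                 -- holds character data well beyond 4000 bytes
);

-- short values insert directly; longer values are bound from PL/SQL or the client
INSERT INTO notes (note_id, note_text) VALUES (1, RPAD('x', 4000, 'x'));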
I have a large table and want to calculate just a few values, so I don't want to create a new table; I want to update the existing one. Here is an example:
I want to calculate the lagged value (VALUE_L1) for ID = 4 only (two values).
create table zTEST
( PRODUCT number,
ID number,
VALUE number,
VALUE_L1 number );
[Code]..
I tried this, but obviously window functions are not allowed in an UPDATE statement.
update zTEST
set VALUE_L1 = lag(VALUE) over (partition by PRODUCT order by ID)
where ID = 4
How can I do this?
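One workaround (a sketch, not the only option): compute the LAG in a subquery and MERGE it back, restricting the update to ID = 4.

MERGE INTO zTEST t
USING (SELECT rowid AS rid,
              ID,
              LAG(VALUE) OVER (PARTITION BY PRODUCT ORDER BY ID) AS value_lag
       FROM   zTEST) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN
  UPDATE SET t.VALUE_L1 = s.value_lag
  WHERE s.ID = 4;

The window function runs in the USING subquery over the whole table, so the lagged value is still taken from the neighbouring row even though only the ID = 4 rows are updated.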
I have a query like this -
SELECT
FIELD_A,
FN_FUNCTION(CARVE_ID, 1) FIELD_B,
FN_FUNCTION(CARVE_ID, 2) FIELD_C,
FN_FUNCTION(CARVE_ID, 3) FIELD_D,
FN_FUNCTION(CARVE_ID, 4) FIELD_E,
FN_FUNCTION(CARVE_ID, 5) FIELD_F,
FN_FUNCTION(CARVE_ID, 6) FIELD_G
FROM TB_CARVE;
When I execute the query, it returns the data (approximately 40,000 rows) in about 1 minute. But when I try to insert this data into another table (or create a table from it), it takes about 2 hours.
I tried using a materialized view, but it's the same: the refresh takes 2 hours. Basically, the data from the above query is used to update values in another table, and whatever approach I try takes 2 hours.
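If FN_FUNCTION mainly does lookups and many rows share the same CARVE_ID, one thing worth trying (a sketch, with an assumed target table name) is scalar subquery caching plus a direct-path insert:

INSERT /*+ APPEND */ INTO tb_target
SELECT field_a,
       (SELECT fn_function(carve_id, 1) FROM dual) AS field_b,
       (SELECT fn_function(carve_id, 2) FROM dual) AS field_c,
       (SELECT fn_function(carve_id, 3) FROM dual) AS field_d,
       (SELECT fn_function(carve_id, 4) FROM dual) AS field_e,
       (SELECT fn_function(carve_id, 5) FROM dual) AS field_f,
       (SELECT fn_function(carve_id, 6) FROM dual) AS field_g
FROM   tb_carve;

Wrapping each call in (SELECT ... FROM dual) lets Oracle cache results for repeated CARVE_ID values, and the APPEND hint avoids conventional-path overhead; if the function itself runs heavy SQL per call, that is where the two hours are going.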
How do I display REFs from within a REF? I've not used my exact code here as it's quite a big file, so I've made up a similar scenario to get my point across. Here is the code first:
CREATE OR REPLACE TYPE object1 AS OBJECT (
  object_id    NUMBER(5),
  object_name  VARCHAR2(10),
  object_added DATE);
/
CREATE TABLE object1_tab OF object1 (object_id PRIMARY KEY);
/
INSERT INTO object1_tab
VALUES (1, 'Daniel', sysdate);
/
show errors;
/
CREATE OR REPLACE TYPE object2 AS OBJECT (
  object_id   NUMBER(5),
  object_job  VARCHAR2(10),
  object1_ref REF object1);
/
CREATE TABLE object2_tab OF object2 (object_id PRIMARY KEY);
/
INSERT INTO object2_tab
VALUES (1, 'Developer', (SELECT REF(p) FROM object1_tab p
                         WHERE VALUE(p).object_id = &object_id));
/
SELECT DEREF(object1_ref)
FROM object2_tab;
/
CREATE OR REPLACE TYPE object3 AS OBJECT (
  object_id       NUMBER(5),
  object_location VARCHAR2(20),
  object2_ref     REF object2);
/
CREATE TABLE object3_tab OF object3 (object_id PRIMARY KEY);
/
INSERT INTO object3_tab
VALUES (1, 'New York', (SELECT REF(p) FROM object2_tab p
                        WHERE VALUE(p).object_id = &object_id));
/
show errors;
/
SELECT object_id, object_location, DEREF(object2_ref)
FROM object3_tab;
/
As you can see in the code, each object references the previous one. When I view the DEREF in the third table (object3_tab) I get the following output:
OBJECT_ID OBJECT_LOCATION DEREF(OBJECT2_REF)
--------- -------------------- ----------------------------------
1 New York [SANTA.OBJECT2]
Contained within the object2 is the following:
SANTA.OBJECT2(1,'Developer','oracle.sql.REF@c4cb4aa6')
How would I view the REF for object1 from within object3? I also need to be able to verify the data behind these REFs in the new table they are used in.
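A sketch of one way to follow the chain all the way back from object3: DEREF the object2 REF, then DEREF the object1 REF held inside it.

SELECT o.object_id,
       o.object_location,
       DEREF(o.object2_ref)                    AS obj2,
       DEREF(DEREF(o.object2_ref).object1_ref) AS obj1
FROM   object3_tab o;

The same nested DEREF expression can be used in a WHERE clause, for example DEREF(DEREF(o.object2_ref).object1_ref).object_name = 'Daniel', to verify the data behind the REFs.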
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether writing the export to tape is possible. If so, would the data be accessible later if needed?
We have been advised to store data in the CLOB data type.
Sample data: 1:2:2000000:20000:4455:000099:444:099999:.... etc.; it will grow very long.
We want to create a unique index for functional reasons. Is it advisable to create a unique index on a CLOB column?
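You cannot create a unique index directly on a CLOB column. One common workaround (a sketch; it assumes EXECUTE privilege on DBMS_CRYPTO and uses assumed table/column names) is to enforce uniqueness on a hash of the CLOB through a deterministic function-based index:

CREATE OR REPLACE FUNCTION clob_md5 (p_clob IN CLOB)
  RETURN RAW DETERMINISTIC
IS
BEGIN
  RETURN DBMS_CRYPTO.HASH(p_clob, DBMS_CRYPTO.HASH_MD5);
END;
/

CREATE UNIQUE INDEX my_tab_clob_uq ON my_tab (clob_md5(clob_col));

Hash collisions are theoretically possible, so treat this as a strong safeguard rather than a mathematical guarantee; if the value is really a ':'-delimited list, storing the pieces relationally and indexing those columns is usually the better design.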
I am migrating a Sybase database to Oracle. A Java developer needs the MONEY data type. I asked them to change the code to cast(<value> as number(19,4)) on the Java side, but they did not accept that because the MONEY data type is used in most places.
The statement select cast(0 as money) from bank_trans; is present in the Java code. I need to create a user-defined type equivalent to the MONEY data type.
My steps: I created these user-defined types.
create or replace type money as object (money number(19,4));
select cast(0 as money) from dual;
It shows an 'inconsistent datatypes' error.
create or replace type money as table of number(19,4);
select cast(0 as money) from dual;
It also shows an 'inconsistent datatypes' error.
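CAST to a user-defined object type is not supported, which is why both attempts raise 'inconsistent datatypes'. A sketch of the closest Oracle equivalent: fix the type definition and use the type's constructor instead of CAST (the attribute name amount is my own choice):

CREATE OR REPLACE TYPE money AS OBJECT (amount NUMBER(19,4));
/

-- the constructor replaces CAST(0 AS money)
SELECT money(0) FROM dual;

If the Java side insists on the literal text cast(<value> as money), there is no direct server-side equivalent; the realistic options are changing the generated SQL or mapping the column to NUMBER(19,4) in the persistence layer.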
I have the following structures:
SQL> CREATE TYPE address_typ AS OBJECT (line1 VARCHAR2(20), city VARCHAR2(20), COUNTRY VARCHAR2(20));
2 /
Type created.
SQL> CREATE TYPE phone_typ AS OBJECT (area_cd VARCHAR2(10), phone# VARCHAR2(10));
2 /
Type created.
SQL> CREATE TYPE phone_list_typ AS TABLE OF phone_typ;
[code]...
My questions are:
1. I have created an object table person_obj of the parent type. I have not created object tables of salesperson_typ and customer_typ, and I want to store data related to both in person_obj. What is the difference between storing data in an object table of the parent type and in object tables of the subtypes? In which cases should I consider storing data of all subtypes in an object table of the parent type instead of object tables of the individual subtypes?
2. My second question is about validating attribute values before an object instance is created. Suppose that before creating an instance of salesperson_typ, I want to check that salary is NOT NULL and greater than 0. I guess a constructor is the only option to achieve this, but how foolproof is it if constructors are used for this purpose? Is there any alternative? A sketch of the constructor approach is shown below.
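A user-defined constructor is the standard place for that check; it is only as foolproof as the insert paths, since anything that bypasses the constructor (for example updating the attribute later) skips the validation, so a CHECK constraint or trigger on the object table is the usual complement. A sketch, with the parent type's attributes assumed for illustration:

-- person_typ here is only an assumed, minimal parent; substitute the real definition.
CREATE OR REPLACE TYPE person_typ AS OBJECT (
  name    VARCHAR2(50),
  address address_typ
) NOT FINAL;
/

CREATE OR REPLACE TYPE salesperson_typ UNDER person_typ (
  salary NUMBER,
  CONSTRUCTOR FUNCTION salesperson_typ (
      name    VARCHAR2,
      address address_typ,
      salary  NUMBER) RETURN SELF AS RESULT
);
/

CREATE OR REPLACE TYPE BODY salesperson_typ AS
  CONSTRUCTOR FUNCTION salesperson_typ (
      name    VARCHAR2,
      address address_typ,
      salary  NUMBER) RETURN SELF AS RESULT
  IS
  BEGIN
    IF salary IS NULL OR salary <= 0 THEN
      RAISE_APPLICATION_ERROR(-20001, 'salary must be a positive, non-null number');
    END IF;
    SELF.name    := name;
    SELF.address := address;
    SELF.salary  := salary;
    RETURN;
  END;
END;
/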
I have an editing database with an eversions table:
NAME OWNER
---------------------- --------------------------------
WR5936_DN6676 FRED
WR6739_DN7507 FRED
WR12744_DN13994 FRED
WR6739_DN7511 BARNEY
WR6801_DN7513 BARNEY
I have a process database with a pversions table:
SOID
----------------
5936
6739
12744
6739
I need to select from the editing.eversions table all the records that do not have a matching record in process.pversions. The eversions NAME column is text and has extra characters surrounding the digits I want to use for the join.
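A sketch of one way to do it (assuming the WR number is the join value and that the two schemas are visible from one session, or reachable over a database link): pull out the first run of digits with REGEXP_SUBSTR and anti-join with NOT EXISTS.

SELECT e.*
FROM   editing.eversions e
WHERE  NOT EXISTS (
         SELECT 1
         FROM   process.pversions p
         WHERE  p.soid = TO_NUMBER(REGEXP_SUBSTR(e.name, '\d+'))
       );

REGEXP_SUBSTR(e.name, '\d+') pulls '5936' out of 'WR5936_DN6676'; adjust the pattern if the DN part is the one you actually need to match.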
I need to implement a "Select All" function in a Data Block with All the Check boxes. It's a Database data block with 10 records to be displayed at a time. Check Box Items are in default layout. The requirement is when user checks/un-checks that select all check box which is placed on the left hand side of the other check boxes, it should select/deselect all the check box database items in that particular record.
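Since the requirement is per record, a WHEN-CHECKBOX-CHANGED trigger on the "select all" item can simply copy its value into the other check box items of the current record. A sketch for Oracle Forms; the block and item names, and the 'Y'/'N' checked values, are assumptions:

-- WHEN-CHECKBOX-CHANGED trigger on MY_BLOCK.SELECT_ALL
DECLARE
  v_val VARCHAR2(1) := :MY_BLOCK.SELECT_ALL;   -- 'Y' when checked, 'N' when unchecked
BEGIN
  :MY_BLOCK.CHK_ITEM1 := v_val;
  :MY_BLOCK.CHK_ITEM2 := v_val;
  :MY_BLOCK.CHK_ITEM3 := v_val;                -- repeat for each check box item in the record
END;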
I have an Oracle 11gR2 database on Linux. Its total SGA size is only 500 MB. Now, if a user wants to read 1 GB of data from the database, there is not enough memory in the buffer cache, so how does that work? Will the transaction succeed or fail? I have another doubt: can Oracle read data from memory only, or can it also read directly from disk?
How do I process a large value in a table column using FOR UPDATE?
claimClob clob:=claim; -- claim large data
v_buf varchar2(1000);
amount binary_integer:=1000;
position binary_integer:=1;
[Code]....
Why doesn't the FOR UPDATE do anything?
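FOR UPDATE by itself only locks the row; the LOB content still has to be read or written explicitly through DBMS_LOB. A sketch, with assumed table and column names (claims.claim_text), that locks the row and reads the CLOB in 1000-character chunks:

DECLARE
  v_clob     CLOB;
  v_buf      VARCHAR2(1000);
  v_amount   BINARY_INTEGER := 1000;
  v_position INTEGER := 1;
BEGIN
  SELECT claim_text INTO v_clob
  FROM   claims
  WHERE  claim_id = 1
  FOR UPDATE;                          -- lock the row so the LOB can also be modified

  LOOP
    BEGIN
      DBMS_LOB.READ(v_clob, v_amount, v_position, v_buf);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;    -- past the end of the LOB
    END;
    DBMS_OUTPUT.PUT_LINE(v_buf);
    v_position := v_position + v_amount;
  END LOOP;
END;
/

The SELECT ... FOR UPDATE is what makes later DBMS_LOB.WRITE / WRITEAPPEND calls on the locator legal; without it, modifying the data raises "row containing the LOB value is not locked".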
I have encountered some problems in SQL. I want to create a table with a bunch of prepared data. For ease of use, I chose to generate a SQL file which contains all the SQL statements used to create the table and insert the data, so all the data can only be inserted into the table using SQL statements.
My questions:
1) If a column's data is large (for example, 1 MB of text), how do I insert it using SQL? Is there a piecewise method?
2) And how can I insert BLOB data using a SQL statement?
What I want is to enclose all the operations in a single SQL file, so that when the table is needed, I just execute this SQL file.
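A sketch of a pure-script approach (the table and column names are assumptions): SQL string literals are capped at 4000 bytes (32k inside PL/SQL), so the script can insert an empty LOB and then append the value piecewise inside an anonymous block; binary data is supplied as hex for the BLOB.

DECLARE
  v_clob CLOB;
  v_blob BLOB;
BEGIN
  INSERT INTO docs (id, txt, bin)
  VALUES (1, EMPTY_CLOB(), EMPTY_BLOB())
  RETURNING txt, bin INTO v_clob, v_blob;

  -- append the text in pieces, each literal kept under the 32k PL/SQL limit
  DBMS_LOB.WRITEAPPEND(v_clob, LENGTH('first chunk of text ...'),  'first chunk of text ...');
  DBMS_LOB.WRITEAPPEND(v_clob, LENGTH('second chunk of text ...'), 'second chunk of text ...');

  -- binary data is supplied as hex and appended the same way
  DBMS_LOB.WRITEAPPEND(v_blob, UTL_RAW.LENGTH(HEXTORAW('DEADBEEF')), HEXTORAW('DEADBEEF'));

  COMMIT;
END;
/

The whole block is still plain text, so it can live in the same .sql file as the CREATE TABLE and be run with SQL*Plus.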
My requirement is to get the primary key column names of a table and their values, and store them in a stage table.
create table easy(id integer, event_rowid integer,txn_rowid integer,txn_name varchar2(200));
begin
for i in 1..5 loop
insert into easy values(i,'777'||i,i||89,'Ris'||i||i);
[code]...
I was able to fetch the primary keys and their values for a table and store them in the stage table, but only for one record. I am stuck when there is more than one record. Here is what I tried:
SQL> SET serveroutput ON
SQL> DECLARE
2 V_prim_key VARCHAR2(2000);
3 l_tab DBMS_UTILITY.uncl_array;
[code]...
If you look at the above code, in the EXECUTE IMMEDIATE WHERE condition I have used id = 1, which matches one record only; but if I change it to id < 3 there will be two records, and I get the error ORA-01422: exact fetch returns more than requested number of rows.
The other problem: if there are three primary key columns, as in the above case, I am fetching their values one by one. Is there any way to fetch those values in one go for all rows, to be stored in the stage table in the format below?
TAB_NAME  V_PRIM_KEY               V_PRIM_DATA   DT
--------  -----------------------  ------------  ---------
EASY      TXN_NAME,EVENT_ROWID,ID  Ris11,7771,1  10-Feb-13
EASY      TXN_NAME,EVENT_ROWID,ID  Ris22,7772,2  10-Feb-13
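One way to avoid both problems (a sketch; the stage table stage_pk and its columns are assumptions based on the desired output) is to build a single dynamic INSERT ... SELECT that concatenates the primary key values for every row, instead of fetching them one row and one column at a time:

DECLARE
  v_tab       VARCHAR2(30) := 'EASY';
  v_pk_cols   VARCHAR2(2000);
  v_pk_concat VARCHAR2(4000);
  v_sql       VARCHAR2(4000);
BEGIN
  -- comma-separated list of PK column names and a matching concatenation expression
  SELECT LISTAGG(cc.column_name, ',') WITHIN GROUP (ORDER BY cc.position),
         LISTAGG(cc.column_name, q'[||','||]') WITHIN GROUP (ORDER BY cc.position)
  INTO   v_pk_cols, v_pk_concat
  FROM   user_constraints c
  JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
  WHERE  c.table_name = v_tab
  AND    c.constraint_type = 'P';

  v_sql := 'INSERT INTO stage_pk (tab_name, v_prim_key, v_prim_data, dt) '
        || 'SELECT ''' || v_tab || ''', ''' || v_pk_cols || ''', '
        || v_pk_concat || ', TRUNC(SYSDATE) FROM ' || v_tab;

  EXECUTE IMMEDIATE v_sql;
  COMMIT;
END;
/

Because the INSERT ... SELECT processes the whole table in one statement, the ORA-01422 from the single-row EXECUTE IMMEDIATE ... INTO never arises.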
I have to write a procedure which takes XML data and inserts it into some tables.
If the XML format were fixed, I could use the EXTRACT function for parsing and insert into the tables.
But the problem is that there is no fixed format for the XML.
Are there any built-in packages which take the XML data for parsing?
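XMLTABLE (backed by XMLTYPE) is the usual built-in for this. When the structure is not fixed, one option is to shred the document generically into element-name/value pairs and decide in PL/SQL which tables they belong to. A sketch with a made-up document:

SELECT x.node_name, x.node_value
FROM   XMLTABLE('//*[not(*)]'
         PASSING XMLTYPE('<order><id>1</id><amount>99.5</amount></order>')
         COLUMNS node_name  VARCHAR2(100)  PATH 'local-name()',
                 node_value VARCHAR2(4000) PATH '.') x;

The row pattern '//*[not(*)]' returns every leaf element; for documents whose format is known per message type, a fixed XMLTABLE with one COLUMNS entry per field is simpler and faster.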
I have a PL/SQL procedure which gathers data from multiple places as well as calculates some data. I want to store all this in a materialized view.
So, I created an object type (I've shortened the definitions):
CREATE OR REPLACE TYPE mf_record_type AS OBJECT
(identifier VARCHAR2(6),
name VARCHAR2(100));
Then created the table type of the object:
CREATE OR REPLACE TYPE mf_table_type IS TABLE OF mf_record_type;
Then in the stored procedure defined a variable of the table type:
v_mf_record mf_table_type := mf_table_type();
Then I loop and populate the record type:
v_mf_record.EXTEND(1);
v_mf_record(x) := mf_record_type(v_rec.identifier, v_mf_detail.name);
When all that is done I try and create the materialized view:
EXECUTE IMMEDIATE ('CREATE MATERIALIZED VIEW mf_snapshot_mv AS
SELECT * FROM TABLE (CAST (v_mf_record AS mf_table_type))');
ORA-00904: "V_MF_RECORD": invalid identifier
Am I doing something wrong here? Can't I create the materialized view based on something other than a physical table?
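The defining query of a materialized view is parsed as SQL, so it cannot see the PL/SQL variable v_mf_record; that is what the ORA-00904 is complaining about. One workaround to sketch (how far an MV will accept a table function in its defining query can depend on version and refresh options, so treat this as something to test): expose the collection through a pipelined function and select from that; if the MV still refuses, select from the same function into an ordinary table instead. The function name mf_rows and the sample values are assumptions.

CREATE OR REPLACE FUNCTION mf_rows RETURN mf_table_type PIPELINED IS
  v_mf_record mf_table_type := mf_table_type();
BEGIN
  -- ... gather and calculate the data exactly as the procedure does ...
  v_mf_record.EXTEND;
  v_mf_record(v_mf_record.LAST) := mf_record_type('ABC123', 'Example name');
  FOR i IN 1 .. v_mf_record.COUNT LOOP
    PIPE ROW (v_mf_record(i));
  END LOOP;
  RETURN;
END;
/

CREATE MATERIALIZED VIEW mf_snapshot_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT * FROM TABLE(mf_rows());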
I have a string like 'PRASAD,ALLEN,STEWART,SMITH' and I want to display it like this:
COL1 COL2 COL3 COL4
-------------------------------
PRASAD ALLEN STEWART SMITH
I want to split the string into columns using a SELECT statement only.
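A sketch using REGEXP_SUBSTR, which picks out the nth comma-delimited token:

SELECT REGEXP_SUBSTR(s, '[^,]+', 1, 1) AS col1,
       REGEXP_SUBSTR(s, '[^,]+', 1, 2) AS col2,
       REGEXP_SUBSTR(s, '[^,]+', 1, 3) AS col3,
       REGEXP_SUBSTR(s, '[^,]+', 1, 4) AS col4
FROM  (SELECT 'PRASAD,ALLEN,STEWART,SMITH' AS s FROM dual);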
How do I tune a query for data saved column-wise (key/value), given that we have to join the same table n times? For reference, go through the following scenario.
Suppose there is a table T1 with two columns, KEY and VALUE. If we write a query to retrieve the desired result in row (pivoted) form, we have to join the same table a number of times.
KEY Value
agreesWith true
id 1
assessment False
basisForDateOfProgression 1
bestOverallResponse 2
bestOverallUnknownComments data is ok
Query:
select * from (
select t.agreesWith from t1 t
)a
[Code]...
In this manner we would keep joining, all the way up to bestOverallUnknownComments. So which method should we follow to reduce execution time and keep performance good? A conditional-aggregation sketch follows.
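A sketch of that approach: conditional aggregation reads T1 once and pivots each key into its own column, so no self-joins are needed (add a GROUP BY on whatever column identifies one logical record if the table holds more than one).

SELECT MAX(CASE WHEN t.key = 'agreesWith'                 THEN t.value END) AS agrees_with,
       MAX(CASE WHEN t.key = 'id'                         THEN t.value END) AS id,
       MAX(CASE WHEN t.key = 'assessment'                 THEN t.value END) AS assessment,
       MAX(CASE WHEN t.key = 'basisForDateOfProgression'  THEN t.value END) AS basis_for_date_of_progression,
       MAX(CASE WHEN t.key = 'bestOverallResponse'        THEN t.value END) AS best_overall_response,
       MAX(CASE WHEN t.key = 'bestOverallUnknownComments' THEN t.value END) AS best_overall_unknown_comments
FROM   t1 t;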
I am facing a problem with UTL_HTTP.WRITE_TEXT in my PL/SQL application. My requirement is to write data larger than 32 KB, so I used a CLOB variable in WRITE_TEXT, but it still raises a "numeric or value error" when the data size is above 8 KB.
I have read that chunked transfer encoding will work, but I couldn't find out how this is done.
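A sketch of how chunked transfer encoding is typically done: request HTTP/1.1, set the Transfer-Encoding header instead of Content-Length, and write the CLOB in pieces that each stay under the WRITE_TEXT limit. The URL is an assumption, and v_clob is assumed to already hold the payload.

DECLARE
  v_req   UTL_HTTP.REQ;
  v_resp  UTL_HTTP.RESP;
  v_clob  CLOB;                          -- assumed to already hold the >32k payload
  v_len   PLS_INTEGER;
  v_pos   PLS_INTEGER := 1;
  c_chunk CONSTANT PLS_INTEGER := 4000;
BEGIN
  v_req := UTL_HTTP.BEGIN_REQUEST('http://example.com/service', 'POST', 'HTTP/1.1');
  UTL_HTTP.SET_HEADER(v_req, 'Content-Type', 'text/xml');
  UTL_HTTP.SET_HEADER(v_req, 'Transfer-Encoding', 'chunked');   -- no Content-Length needed

  v_len := DBMS_LOB.GETLENGTH(v_clob);
  WHILE v_pos <= v_len LOOP
    UTL_HTTP.WRITE_TEXT(v_req, DBMS_LOB.SUBSTR(v_clob, c_chunk, v_pos));
    v_pos := v_pos + c_chunk;
  END LOOP;

  v_resp := UTL_HTTP.GET_RESPONSE(v_req);
  UTL_HTTP.END_RESPONSE(v_resp);
END;
/

Passing a CLOB directly to WRITE_TEXT relies on an implicit conversion to VARCHAR2, which is why it breaks on larger data; the loop keeps each write small.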
I have a question about how to view sample data from a very large table (more than 10 million rows).
I just need to see some sample data from the large table (to see what kind of application-related data it holds).
My question is :
Select *
from Sample_table
where rownum < 10
Is this a good way to view the sample data?
My understanding is that ROWNUM is assigned to the rows only once all the rows are retrieved.
So what is the best way to view it? I am not sure of any condition to put in the query at the outset.
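ROWNUM < 10 is actually fine for a quick peek: the predicate is applied as rows are fetched, so Oracle stops after the first few rows rather than retrieving all 10 million first. If you want rows that are more representative than whatever the first blocks happen to contain, the SAMPLE clause is a possible alternative (a sketch):

-- returns roughly 0.01% of the rows, chosen pseudo-randomly
SELECT *
FROM   sample_table SAMPLE (0.01);

-- SAMPLE BLOCK samples whole blocks instead, which is faster on very large tables
SELECT *
FROM   sample_table SAMPLE BLOCK (0.01);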
I am very new to Oracle and SQL. I am trying to create a stored procedure that will copy 14 days of data into a table and then truncate the original table. This is what happens when I compile the following code...
CREATE OR REPLACE PROCEDURE STOPROC_TRUNCATE
( dateNum IN NUMBER )
IS
BEGIN
create table AUDIT_14Days as select * from AUDIT where TIMESTAMP >= (SYSDATE - dateNum);
truncate table AUDIT drop storage;
[code]....
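DDL such as CREATE TABLE and TRUNCATE cannot appear as static statements inside PL/SQL, which is why the procedure does not compile; it has to go through dynamic SQL. A sketch that keeps the names from the post:

CREATE OR REPLACE PROCEDURE stoproc_truncate (dateNum IN NUMBER)
IS
BEGIN
  EXECUTE IMMEDIATE
    'CREATE TABLE audit_14days AS
       SELECT * FROM audit WHERE timestamp >= SYSDATE - ' || dateNum;
  EXECUTE IMMEDIATE 'TRUNCATE TABLE audit DROP STORAGE';
END;
/

The owner needs the CREATE TABLE privilege granted directly (not only via a role) for the dynamic DDL to succeed, and it is worth deciding up front whether AUDIT_14DAYS is dropped and recreated on each run or kept as a permanent table that is truncated and reloaded.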
Scenario:
Our application uses two instances: one for the live, active data and the other for the reports data. We have a process which moves the data from the live instance to the reports instance every night. In a single-database environment the process works without any issues. However, when we move to the RAC environment, the insert into the large table in the reports database gets locked and we are unable to insert data into the reports database.
What we are performing is:
Insert into my_table_rpt select * from may_table_live@db_link_to_livedb;
Issues:
my_table_rpt gets locked
We have found a workaround: disable table locking in the destination and re-enable it after the insert.
ALTER TABLE my_table_rpt DISABLE TABLE LOCK;
Insert the data to the reports database table
Then
ALTER TABLE my_table_rpt ENABLE TABLE LOCK
Question:
Why does the large destination table (my_table_rpt) get locked in the RAC environment?
I am getting the file using CLIENT_GET_FILE_NAME. I need to read the data from the .xsl file and convert it into a BLOB. The file should not be stored in the DB.
I'm using a few cursors to update my data and store it back into the same table, but for some reason the cursor seems to be picking up obsolete data.
cursor c1
is
select distinct roles
from table1
where flag = '1';
cursor c2
is
select distinct roles
from table1
where flag = '1';
begin
for i in c1
loop
-- here I'm updating the roles picked by the cursor, setting each to its suffixed role
update table1
set role = suffix_role
where 'some condition';
end loop;
commit; -- committing so changes are visible for my next cursor.
for j in c2
loop
-- here I'm deleting all the roles that are not part of my comparison table
delete from table1
where role = 'something';
But in my debugging table I see that it is deleting roles that were input to the first cursor, whereas it should actually be picking up the data with the suffixed roles.
I have a data entry form which is a multi-record block.
Question: for example, the form has 10 to 25 fields or columns, and all the data has been entered. Before committing or saving the form, I need to cross-check the data with a SELECT query to see whether the entered data is correct. Before committing, the data should be posted to the table; if I find that one value was entered wrongly, I will modify it, cross-check again, and only then save the transaction permanently to the database table. How can this be done?
We have data archive scripts; these scripts move data for a date range to a different table. The script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion takes too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, even though the predicate itself is the primary key. More info below.
CREATE TABLE "APP"."MON_TXNS" ( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE, "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE, "ID_PAYER" NUMBER(12,0), "ID_PAYER_PI" NUMBER(12,0), "ID_PAYEE" NUMBER(12,0), "ID_PAYEE_PI" NUMBER(12,0), "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE, "STR_TEXT" VARCHAR2(60 CHAR), "DAT_MERCHANT_TIMESTAMP" DATE, "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE), "DAT_EXPIRATION" DATE, "DAT_CREATION" DATE, "STR_USER_CREATION" VARCHAR2(30 CHAR), "DAT_LAST_UPDATE"
[Code]...
Data is first moved to the table schema3.OTW, and then we delete all the rows that are in OTW from the original table. Below is the explain plan for the delete:
SQL> explain plan for
  2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);

Explained.

SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2798378986
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
|  0 | DELETE STATEMENT        |            |  2520 |   233K |    87   (2)| 00:00:02 |
|  1 |  DELETE                 | MON_TXNS   |       |        |            |          |
|* 2 |   HASH JOIN RIGHT SEMI  |            |  2520 |   233K |    87   (2)| 00:00:02 |
|  3 |    INDEX FAST FULL SCAN | OTW_ID_TXN |  2520 | 15120  |     3   (0)| 00:00:01 |
|  4 |    TABLE ACCESS FULL    | MON_TXNS   | 14260 |  1239K |    83   (0)| 00:00:02 |
-------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
I have a report with a single row that has a large number of columns, so I have to use a scroll bar to see all of them. Is it possible to design the report in the format below (half the columns on one side of the page, half on the other side)?
Column1   Data    Column11   Data
Column2   Data    Column12   Data
Column3   Data    Column13   Data
Column4   Data    Column14   Data
Column5   Data    Column15   Data
Column6   Data    Column16   Data
Column7   Data    Column17   Data
Column8   Data    Column18   Data
Column9   Data    Column19   Data
Column10  Data    Column20   Data

I am using Apex 4.2.3 on Oracle 11g XE.