I'm trying to migrate Sybase 12.5 to Oracle 11g R2 on Linux. All tables are fine except one, which fails with the error:
"SQL Error: ORA-01841: (full) year must be between -4713 and +9999, and not be 0 01841. 00000 - "(full) year must be between -4713 and +9999, and not be 0" *Cause: Illegal year entered *Action: Input year in the specified range"
I have changed the data types from CHAR to VARCHAR2, etc.; as for the DATE datatype, it shows as DATE in the tool. I'm using SQL Developer from Oracle.
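One way to track down the offending rows before the load is to validate the incoming year values; a minimal sketch, assuming the Sybase dates arrive as text in a staging column date_txt with format 'YYYY-MM-DD' (column name and format are hypothetical):

-- helper that returns the parsed date, or NULL when the value is illegal
create or replace function try_to_date(p_txt in varchar2,
                                       p_fmt in varchar2 default 'YYYY-MM-DD')
  return date
is
begin
  return to_date(p_txt, p_fmt);
exception
  when others then
    return null;  -- e.g. ORA-01841 for year 0 or an out-of-range year
end;
/

-- rows whose date text cannot be converted
select *
from   staging_table
where  date_txt is not null
and    try_to_date(date_txt) is null;

Fixing or NULLing those rows on the Sybase side (or in the staging table) should let the rest of the migration go through.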
I have a requirement to import text files which are generated by the 3D modelling software Xsteel, where it records all the geometric information, and I want to import this information into an Oracle table.
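A common way to pull such text files into Oracle is SQL*Loader (or an external table); a minimal SQL*Loader sketch, assuming a hypothetical comma-delimited export file geometry.txt and a target table xsteel_geometry(part_name, length_mm, width_mm):

-- geometry.ctl
LOAD DATA
INFILE 'geometry.txt'
APPEND INTO TABLE xsteel_geometry
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(part_name, length_mm, width_mm)

Run it with: sqlldr userid=scott/tiger control=geometry.ctl log=geometry.log. The field list and delimiter have to be adapted to the actual Xsteel report layout.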
There is a requirement to make table data in one database (e.g. the HR database) available in another database (e.g. the EMP database), instead of accessing it through a database link. In the EMP database (where the data needs to be cloned), the data will only be queried and no write operations will be done. The data in the remote database (the HR database) will occasionally be fully truncated and reinserted. The plan is to do a similar truncate and reinsert of the data (from the HR database) into the EMP database once a month using a DBMS Scheduler job. So basically the data of just one table needs to be cloned in another database.
Question: For this situation, is a regular table or a materialized view the right choice to clone the table in the EMP database, and why? The table in the HR database (the remote database) is not very big.
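A materialized view is usually the simpler choice here, because the complete-refresh behaviour (effectively truncate and reload) comes built in and the scheduler job only has to call a refresh; a sketch, assuming a database link named hr_link and a source table hr_data (hypothetical names):

create materialized view emp_hr_data_mv
  build immediate
  refresh complete on demand
as
select * from hr_data@hr_link;

begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_HR_CLONE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_mview.refresh(''EMP_HR_DATA_MV'', method => ''C''); end;',
    repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1',
    enabled         => true);
end;
/

A plain table plus a scheduled truncate/insert over the link works too; the materialized view mainly saves you the hand-written load code.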
select * from test_1

ID  NAME  TOTAL
--  ----  -----
 1  A       100
 2  B       100
 3  C       100
 4  D       100
The test_2 table contains the IDs concatenated as a comma-separated string; the ID column in this table is of datatype VARCHAR2.

select * from test_2

ID
-----
1,2,3
My requirement is to select the rows from test_1 whose ID values exist in test_2. I tried the SELECT statement below, but could not get any data.
SELECT * FROM test_1 WHERE to_char(id) IN (SELECT id FROM test_2)
create table test_1 (id number, name varchar2(100), total number);
create table test_2 (id varchar2(100));
insert into test_1 values (1,'A',100);
insert into test_1 values (2,'B',100);
insert into test_1 values (3,'C',100);
insert into test_1 values (4,'D',100);
insert into test_2 values ('1,2,3');
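The IN comparison finds nothing because test_2 holds the single string '1,2,3', not three separate ID values. Two sketches that work on 11g, assuming test_2 contains one delimited row as in the sample:

-- option 1: split the comma-separated string into rows with REGEXP_SUBSTR
SELECT *
FROM   test_1
WHERE  id IN (SELECT to_number(regexp_substr(t2.id, '[^,]+', 1, level))
              FROM   test_2 t2
              CONNECT BY level <= regexp_count(t2.id, ',') + 1);

-- option 2: search the delimited string directly
SELECT t1.*
FROM   test_1 t1, test_2 t2
WHERE  instr(',' || t2.id || ',', ',' || to_char(t1.id) || ',') > 0;

Option 1 needs adjusting (e.g. DISTINCT or a per-row split) if test_2 ever contains more than one row.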
I need to insert data into Table A from Table B, where most of the columns are identical but Table A may have some additional columns.
Ex: Table A: a, b, c, d, e, f; Table B: a, b, c, g, h
How can I do this insert using user_tab_columns in a cursor, giving the table names as input? This needs to be configurable and reusable rather than mentioning all the columns in my logic.
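A reusable sketch that derives the common column list from USER_TAB_COLUMNS and runs a dynamic INSERT ... SELECT (table names are parameters; assumes both tables live in the current schema):

create or replace procedure copy_common_cols(p_target in varchar2,
                                             p_source in varchar2)
is
  v_cols varchar2(4000);
begin
  -- columns present in both the target and the source table
  select listagg(column_name, ',') within group (order by column_id)
  into   v_cols
  from   user_tab_columns
  where  table_name = upper(p_target)
  and    column_name in (select column_name
                         from   user_tab_columns
                         where  table_name = upper(p_source));

  execute immediate 'insert into ' || p_target || ' (' || v_cols || ') ' ||
                    'select ' || v_cols || ' from ' || p_source;
end;
/

-- usage: exec copy_common_cols('A', 'B');

Columns that exist only in Table B (g, h) are simply ignored because they are not in the common list.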
I am trying to join column names from one table with data from a different table. I think I should be able to pass a parameter into the select list of a query. Look at my sample data below. The data in the sales table can grow to 15 rows, with corresponding columns in saleshist.
SQL> select SEGMENT_NAME, SEGMENT_TYPE, BLOCKS, EXTENTS, BYTES/1024
  2  from user_segments
  3  where SEGMENT_TYPE='TABLE'
  4  ORDER BY SEGMENT_NAME;
[code]....
The COUNTRIES table is missing from the USER_SEGMENTS data dictionary view, but I can query the table with a SELECT statement.
SQL> SELECT * FROM COUNTRIES;
CO COUNTRY_NAME                             REGION_ID
-- ---------------------------------------- ----------
AR Argentina                                         2
AU Australia                                         3
BE Belgium                                           1
BR Brazil                                            2
[code]...
Why is the COUNTRIES table missing from the USER_SEGMENTS data dictionary view?
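If this is the HR sample schema, one likely explanation is that COUNTRIES is an index-organized table: its data lives in the primary-key index segment, so there is no segment with SEGMENT_TYPE='TABLE' for it. A quick check (an assumption to verify, not a certain diagnosis):

select table_name, iot_type
from   user_tables
where  table_name = 'COUNTRIES';

select segment_name, segment_type
from   user_segments
where  segment_name in (select index_name
                        from   user_indexes
                        where  table_name = 'COUNTRIES');

An IOT_TYPE of IOT confirms it; the corresponding index segment is what USER_SEGMENTS shows instead of a table segment.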
We need to load data from an index-by table into a table. The code below works fine.
declare
  query varchar2(200);
  Type l_emp is TABLE OF emp%rowtype INDEX BY Binary_Integer;
  rec_1 l_emp;
begin
[Code]....
But the source and target tables are dynamic. In the above code, the source table emp and the target table emp_b are static. In our scenario the target depends on the source table, as below: if the source is emp, the target is emp_b; if the source is emp1, the target is emp_b1; and so on.
create or replace procedure p(source in varchar2, target in varchar2)
as
  query varchar2(200);
  source varchar2(200);
  Type l_emp is TABLE OF emp%rowtype INDEX BY Binary_Integer;
  rec_1 l_emp;
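Because %ROWTYPE collections have to reference a table known at compile time, a fully dynamic version usually drops the index-by table and goes straight to a dynamic INSERT ... SELECT; a minimal sketch under that assumption (the emp/emp_b naming rule is handled by the caller):

create or replace procedure copy_table(p_source in varchar2,
                                       p_target in varchar2)
is
begin
  execute immediate 'insert into ' || p_target ||
                    ' select * from ' || p_source;
end;
/

-- usage: copy_table('EMP',  'EMP_B');
--        copy_table('EMP1', 'EMP_B1');

If the intermediate collection really is required (e.g. for per-row transformation), DBMS_SQL or a dynamically opened cursor would be needed instead.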
I need to move data from a non-partitioned table to a partitioned one. The volume is about 60 million rows. What is the fastest way to do that? I am thinking about parallel insert and NOLOGGING. What do you think about this? Might Data Pump be faster?
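For this volume a direct-path, parallel INSERT into the partitioned table is the usual approach; a sketch with hypothetical table names and an arbitrary degree of 8:

alter table sales_part nologging;
alter session enable parallel dml;

insert /*+ append parallel(p, 8) */ into sales_part p
select /*+ parallel(s, 8) */ *
from   sales_nonpart s;

commit;

-- re-enable logging and take a backup afterwards, since NOLOGGING
-- direct-path loads are not recoverable from the redo stream
alter table sales_part logging;

Data Pump mainly helps when the data has to cross databases; for a copy inside the same database the parallel direct-path insert generally wins.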
I have a table called DAILY_USAGE (only one day's data) which contains daily transaction records. After a day, I have to move this data to another table named DAILY_30DAYS_USAGE, which contains the 30-day data. After 30 days, the 31st-day data should be deleted from DAILY_30DAYS_USAGE.
How can I implement this requirement without an INSERT statement?
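One way to avoid moving rows at all is partition exchange: if DAILY_30DAYS_USAGE is range-partitioned by day and has the same structure as DAILY_USAGE, the finished day can be swapped in as a partition and the oldest partition dropped, purely as dictionary operations. A rough sketch with hypothetical partition names:

-- swap the finished day's data into the matching (empty) partition
alter table daily_30days_usage
  exchange partition p_20240601
  with table daily_usage
  without validation;

-- purge the day that has fallen out of the 30-day window
alter table daily_30days_usage
  drop partition p_20240501;

This assumes the Partitioning option is licensed and both tables have identical column lists.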
We have our current production running on 9i and eventually want to migrate to 11gR2. But the challenges are as follows:
The 9i production database is running in the San Francisco data center. The 11gR2 production database is already set up and running in the Atlanta data center.
The database size is around 2 TB on 9i. We are looking to transfer this to 11gR2 and wondering what options we have at our disposal. I was looking into EXP/IMP, but somebody said database links will be much faster and more reliable than EXP/IMP.
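If the database-link route is chosen, the per-table copy on the 11gR2 side typically looks like a direct-path CTAS or INSERT over the link; a sketch, assuming a hypothetical link sf_9i pointing at the 9i source:

-- one-off copy of a table over the link
create table big_table nologging as
select * from big_table@sf_9i;

-- or, if the table already exists on 11gR2
insert /*+ append */ into big_table
select * from big_table@sf_9i;
commit;

Either way the 2 TB still has to cross the WAN between the data centers, so link bandwidth, not the method, is usually the limiting factor.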
We're planning a data migration from one application (Oracle-based) to another (also with an Oracle DB).

The origin is a roughly 80 GB database, so many millions of records are to be migrated. (Before loading records into the destination tables, they have to be transformed.)

The current concept is to receive all origin data as XML files, load them into a staging area (a dedicated migration schema in Oracle), then transform and load them into the destination tables.

We have three days for the whole migration (including extract from the origin database, transform, load, backup after completion...).

My question is whether a migration with XML files is a good concept. I think XML processing is much slower than doing the same with CSV files. My proposal to migrate an Oracle dump (so we would have the original data in our staging area) was declined.

Is migrating mass data with XML files a good approach, or are there performance or other issues?
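For comparison, a CSV-based staging load can be almost entirely declarative with an external table plus a direct-path insert; a sketch, assuming a directory object stage_dir and a hypothetical file customers.csv:

create or replace directory stage_dir as '/u01/stage';

create table stg_customers_ext (
  customer_id   number,
  customer_name varchar2(200),
  created_date  varchar2(20)
)
organization external (
  type oracle_loader
  default directory stage_dir
  access parameters (
    records delimited by newline
    fields terminated by ',' optionally enclosed by '"'
  )
  location ('customers.csv')
)
reject limit unlimited;

insert /*+ append */ into stg_customers
select customer_id, customer_name, to_date(created_date, 'YYYY-MM-DD')
from   stg_customers_ext;

XML can be loaded and shredded with XMLTABLE in a similar staging pattern, but the parsing cost and file size are typically several times higher than for flat CSV.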
I have created a new table so that, instead of having 26 columns for the payment amount of each week, I have one column for the pay amount and one column representing the week, as below:
employee_id payroll_pay payroll_pay_week
How do I migrate the data from the old table to the new table?
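A sketch using UNPIVOT, assuming the old table is called payroll_old, the new one payroll_new, and the 26 weekly columns are named week_1 .. week_26 (adjust to the real names):

insert into payroll_new (employee_id, payroll_pay, payroll_pay_week)
select employee_id, payroll_pay, payroll_pay_week
from   payroll_old
unpivot (payroll_pay for payroll_pay_week in (
           week_1  as 1,
           week_2  as 2,
           -- ... week_3 through week_25 listed the same way ...
           week_26 as 26
        ));

UNPIVOT turns each of the 26 columns into its own row, so one employee row in the old table becomes up to 26 rows in the new one.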
I need to migrate data from a MySQL database to an Oracle 11g database.

a) Is there any method available to export all the SQL, like the table scripts, constraint scripts, and data (INSERT) scripts, from MySQL, so that we can apply the SQL directly to the Oracle schema after making the necessary changes (like datatypes)?

b) Is there any free tool available for the migration?
I need to generate a report, using PL/SQL code, that counts the number of rows of all the tables in the source and target databases. The report should consist of the following columns:
table name | source table row count | target table row count | mismatch. Can you provide the PL/SQL code?
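A minimal sketch, run on the source side, assuming the target database is reachable through a hypothetical database link named target_db and that counts are taken with dynamic COUNT(*) queries:

set serveroutput on
declare
  v_src number;
  v_tgt number;
begin
  dbms_output.put_line('TABLE_NAME | SOURCE_COUNT | TARGET_COUNT | MISMATCH');
  for t in (select table_name from user_tables order by table_name) loop
    execute immediate 'select count(*) from "' || t.table_name || '"'
      into v_src;
    execute immediate 'select count(*) from "' || t.table_name || '"@target_db'
      into v_tgt;
    dbms_output.put_line(t.table_name || ' | ' || v_src || ' | ' || v_tgt ||
      ' | ' || case when v_src = v_tgt then 'NO' else 'YES' end);
  end loop;
end;
/

Writing into a report table instead of DBMS_OUTPUT, or skipping tables that do not exist on the target, are easy extensions.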
I got an assignment to create an Oracle 11g DB. I will be provided the full Data Pump export dump of an Oracle 10g DB on Linux. I need to import it into an 11g database on Windows. I have no information about the tablespaces, users, etc. I have created the DB with SYSTEM, SYSAUX, UNDOTBS, TEMP and USERS tablespaces.
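One way to see what the dump expects before running the real import is the Data Pump SQLFILE option, which writes the DDL to a script instead of executing it; a sketch with hypothetical names (directory object dp_dir, dump file full_10g.dmp):

impdp system/password directory=dp_dir dumpfile=full_10g.dmp full=y sqlfile=preview_ddl.sql

After reviewing the generated DDL, either create the missing tablespaces or remap them onto existing ones during the actual import:

impdp system/password directory=dp_dir dumpfile=full_10g.dmp full=y remap_tablespace=OLD_TBS:USERS logfile=full_import.log

A FULL import also recreates the users (with their original passwords), provided their default tablespaces exist or are remapped.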
We are planning to migrate data from an application called Clintrace to another application called Argus Safety. Both applications are related to pharmacovigilance safety operations, and their functionality is similar, so both databases hold the same data, though the table structures might be different. Both databases are Oracle; the Clintrace DB is 9i and the Argus DB is 11g.
We have data migration scripts written for Oracle. The data is not huge, but we are observing that the migration is faster in the development labs and about 5x slower at the production site.

The development Oracle setup is on Windows and the production setup is on Solaris. I have attached the AWR report generated for a period when the migration ran for 3 hours and was stopped because of the slow performance.
Here is my initial analysis.
1) The first timed event is DB CPU. Hence I feel the migration scripts can be modified to run in parallel so that they finish faster. However, the question arises why it would run faster in the development environment if this were the issue.
2) I tried increasing:
   a. large_pool_size to 512M
   b. sga_max_size to 8G
   c. sga_target to 8G
   from 0, 4G and 4G respectively.
I have attached the AWR, and below are the /etc/system contents for the Solaris settings.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,1,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
set noexec_user_stack_log=1
* IBMdpo vpath_START (do not remove)
* default SCSI timeout is 60 seconds
* uncomment to change SCSI timeout
* set sd:sd_io_time=0x1e
forceload: drv/vpathdd
* IBMdpo vpath_END (do not remove)
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
P.S. The AWR report has been renamed from .html to .txt to be able to upload the file.
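Regarding point 1, this is roughly what a parallelised load step could look like, assuming the scripts are INSERT ... SELECT statements (table names and degree are placeholders):

alter session enable parallel dml;

insert /*+ append parallel(t, 4) */ into target_table t
select /*+ parallel(s, 4) */ *
from   source_table s;

commit;

Whether that helps depends on whether DB CPU really is the bottleneck on the Solaris box, or whether the AWR shows I/O waits dominating.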
I have Arabic data stored in the two encodings below in an Oracle AL32UTF8 database:
1 million rows in WE8MSWIN1252
0.5 million rows in AR8MSWIN1256
In all cases I would like to convert the 1 million WE8MSWIN1252 rows into AR8MSWIN1256. I could convert the data encoding from 1252 to 1256 using SQL Developer, but had no luck using the Oracle export/import utilities (both exp and expdp). I'm thinking maybe a certain locale is required for export/import to work.
Also, my company said SQL Developer is a free utility that may not be supported by Oracle, so I should use export and import for this. I need to convert only one table.
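For a single table, one in-database option is the CONVERT function, which translates a value between named character sets; a sketch, assuming a hypothetical table arabic_data with a VARCHAR2 column text_col holding the 1252-tagged data:

-- CONVERT(value, destination_charset, source_charset)
create table arabic_data_fixed as
select convert(text_col, 'AR8MSWIN1256', 'WE8MSWIN1252') as text_col
from   arabic_data;

For exp/imp the client-side conversion is driven by the NLS_LANG environment variable (format LANGUAGE_TERRITORY.CHARSET, e.g. AMERICAN_AMERICA.AR8MSWIN1256), which has to be set before running the utility; whether that round trip fixes the data depends on how the bytes were originally inserted, so verify on a copy first.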