(1) Does the imp utility rebuild indexes while loading data, or does it simply take the rows from the dump and load them into the test system without building the indexes from scratch?
(2) I am trying to replace 'exp' and 'imp' with the Data Pump utilities, but I am confused about which parameters to use.
(a) Can I load both data and metadata at the same time (using the CONTENT=ALL option)?
(b) Alternatively, I am planning to implement this in two steps:
first load only the metadata using CONTENT=METADATA_ONLY TABLE_EXISTS_ACTION=REPLACE, then load the data separately (a sketch of this two-step approach follows below).
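On (a): CONTENT=ALL is the default and loads metadata and data in a single pass. For the two-step variant in (b), a minimal sketch might look like this, assuming the dump was taken with expdp; the directory, dump file and credentials are placeholders:

Step 1 - metadata only, replacing any tables that already exist:
impdp system/password directory=DP_DIR dumpfile=exp_full.dmp content=METADATA_ONLY table_exists_action=REPLACE logfile=imp_meta.log

Step 2 - data only, appended into the tables created in step 1:
impdp system/password directory=DP_DIR dumpfile=exp_full.dmp content=DATA_ONLY table_exists_action=APPEND logfile=imp_data.log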
I got an assignment to create an Oracle 11g database. I will be provided with a full Data Pump export dump of an Oracle 10g database on Linux, and I need to import it into the 11g database on Windows. I have no information about the tablespaces, users etc., so I have created the database with only the SYSTEM, SYSAUX, UNDOTBS, TEMP and USERS tablespaces.
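One way to find out which tablespaces and users the dump expects, without importing anything, is to ask impdp to write the DDL to a SQL file first; a hedged sketch, where the directory and file names are placeholders:

impdp system/password directory=DP_DIR dumpfile=exp_full_10g.dmp full=y sqlfile=ddl_preview.sql nologfile=y

The generated ddl_preview.sql lists the CREATE TABLESPACE and CREATE USER statements in the dump, so the missing tablespaces can be pre-created (or remapped with REMAP_TABLESPACE) before the real import.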
As part of our backup we used to export the production data every day using the original export utility, but from 11g the original export utility is desupported, and Data Pump does not support our XML objects. Is there any other way to export the full database, or any option to export XML objects using Data Pump?
We have three instances in the RAC. When I create a directory, is there a default instance that it gets created on? When I run a Data Pump export on instance 1 it works, but on instances 2 and 3 it fails with an error relating to the directory.
Error:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
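A directory object lives in the data dictionary, so all instances see the same definition; what differs is whether the operating-system path actually exists and is writable on the node where the Data Pump worker runs. A quick check, plus one hedged fix (the path shown is a placeholder):

-- Which path does the directory point to? (same answer from any instance)
SELECT directory_name, directory_path FROM dba_directories;

-- Point the directory at storage visible from every node (e.g. ACFS or NFS)
CREATE OR REPLACE DIRECTORY dp_dir AS '/shared/dp_exports';

If the database is 11.2 or later, another option is to add CLUSTER=N to the expdp command so the job's workers stay on the instance you are connected to, which only needs the path to exist locally.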
I have a requirement to load .dmp files into existing staging tables, and there is a package that loads the ODS tables from staging. So I thought of using the DBMS_DATAPUMP utility to import the data from the .dmp files into the tables, and this needs to be automated.
--Create Directory
CREATE OR REPLACE DIRECTORY test_dir AS 'C:\Test';
--Grant access to the user
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;
--Script to import
DECLARE
  l_dp_handle1 NUMBER;
BEGIN
  l_dp_handle1 := dbms_datapump.open(operation => 'IMPORT',
[code]...
Errors
ERROR at line 1:
ORA-31634: job already exists
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 938
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4590
ORA-06512: at line 4
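ORA-31634 usually means a Data Pump job with the same name already exists, often as an orphaned master table left behind by an earlier failed run. A hedged sketch of one way to make an automated job restartable: check for leftovers first and give each run a unique job name (all names below are placeholders):

-- Any leftover jobs from earlier runs?
SELECT owner_name, job_name, state FROM dba_datapump_jobs;

-- If an old job is no longer running, dropping its master table releases the name, e.g.
-- DROP TABLE scott.SYS_IMPORT_TABLE_01 PURGE;

DECLARE
  l_dp_handle1 NUMBER;
BEGIN
  -- unique job name per run avoids the "job already exists" clash
  l_dp_handle1 := dbms_datapump.open(
                    operation => 'IMPORT',
                    job_mode  => 'TABLE',
                    job_name  => 'IMP_STG_' || TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS'));
  -- ... add_file / metadata_filter / start_job calls as in the original script ...
END;
/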
I'm trying to deploy a schema using the Data Pump API. The user the deployment runs as has the CREATE USER privilege granted directly (not through a role), but I still got an insufficient-privileges error.
Processing object type SCHEMA_EXPORT/USER
ORA-31685: Object type USER:"SCOTT1" failed due to insufficient privileges. Failing sql is:
CREATE USER "SCOTT1" IDENTIFIED BY VALUES '4EBB0DDE3C79FE47' DEFAULT TABLESPACE "USERS" TEMPORARY TABLESPACE "TEMP2" PROFILE "APP_PROFILE"
But the user gets created successfully when I run the CREATE USER statement manually. I created the user manually and ran the deployment procedure again, and got the error below for ROLE_GRANT objects.
Processing object type SCHEMA_EXPORT/ROLE_GRANT
ORA-39083: Object type ROLE_GRANT failed to create with error:
ORA-01932: ADMIN option not granted for role 'EXP_FULL_DATABASE'
Failing sql is: GRANT "EXP_FULL_DATABASE" TO "SCOTT1"
The deploying user has EXP_FULL_DATABASE with the ADMIN option and IMP_FULL_DATABASE with the ADMIN option as direct grants. Which privileges does the user need to deploy the schema successfully?
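One thing worth checking (a hedged guess, not a confirmed diagnosis): EXP_FULL_DATABASE and IMP_FULL_DATABASE are roles, and roles are disabled inside definer's-rights stored procedures, so if the deployment calls DBMS_DATAPUMP from inside such a procedure the role grants may not be visible there. Queries to compare what the user holds directly versus through roles (the GRANTEE value is a placeholder):

-- Privileges granted directly to the user
SELECT privilege, admin_option FROM dba_sys_privs WHERE grantee = 'DEPLOY_USER';

-- Roles granted to the user
SELECT granted_role, admin_option FROM dba_role_privs WHERE grantee = 'DEPLOY_USER';

-- Roles actually active in the current context (run from inside the deployment code)
SELECT role FROM session_roles;

If the roles turn out to be invisible at run time, defining the deployment procedure with AUTHID CURRENT_USER, or granting the needed system privileges directly to the user, are two possible workarounds.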
I am using a Data Pump import over a database link to import an entire schema from another server, but it gives me problems with constraints. I tried importing only the metadata first, then disabling the constraints, importing the data and re-enabling the constraints, but in that case the temporary tablespace keeps filling up and I run out of space. Is there any method to do a full import that includes constraints and indexes?
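For reference, a minimal sketch of a single-pass schema import over a database link (the link name, schema and credentials are placeholders); done this way Data Pump creates the tables, loads the data, then builds indexes and enables constraints itself:

impdp scott/password network_link=SRC_DB_LINK schemas=APP_SCHEMA table_exists_action=REPLACE directory=DP_DIR logfile=imp_schema.log

If the temporary tablespace still fills up during the index builds, adding a tempfile (or enlarging TEMP) for the duration of the import is often simpler than splitting the import into metadata and data passes.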
I have noticed that the Data Pump estimate is 9.902 GB, but when I check the size of the .dmp file it is only 1.44 GB.
Export: Release 11.2.0.1.0 - Production on Fri Apr 5 02:00:05 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;; Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/******** dumpfile=expdp_LVGITRN_30_24_050413.dmp directory=DP_DIR logfile=expdp_LVGITRN_30_24_050413.log full=y exclude=statistics
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 9.902 GB
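The BLOCKS method estimates from allocated blocks, which overstates the real volume when tables have free space below the high-water mark, and the dump format stores rows more compactly than database blocks, so a dump file much smaller than the estimate is normal. If a closer figure matters, a hedged alternative is to estimate from optimizer statistics (gathered recently) without writing a dump at all; directory and log file names are placeholders:

expdp system/password full=y estimate_only=y estimate=statistics directory=DP_DIR logfile=estimate.log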
Expdp directory=xxx.dmp dumpfile=aaa.dmp logfile=xxx.log FULL=Y

Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24.87 MB
Processing object type SCHEMA_EXPORT/USER
[code]...
Then my export hangs. I checked the alert log and found nothing, then killed the job and reran it, but the same thing happens. When I check the status it says EXECUTING.
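Before killing the job again, it may be worth checking whether it is genuinely hung or just slow; a couple of hedged checks (the job name is the system-generated default and may differ in your case):

-- What Data Pump jobs exist and what state they are in
SELECT owner_name, job_name, operation, state FROM dba_datapump_jobs;

-- What the worker sessions are waiting on right now
SELECT s.sid, s.event, s.seconds_in_wait
  FROM v$session s, dba_datapump_sessions d
 WHERE s.saddr = d.saddr;

You can also re-attach to the job from the command line with expdp system/password attach=SYS_EXPORT_FULL_01 and use the STATUS and KILL_JOB commands at the interactive prompt.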
I have a problem with DBMS_DATAPUMP.metadata_filter. Let's suppose I need to export a huge list of tables (a, b, c, d, e, f, g, h, i, ...), and that the list is dynamic, so I do NOT want to hard-code it like this:
DBMS_DATAPUMP.metadata_filter (handle      => h1,
                               NAME        => 'NAME_EXPR',
                               VALUE       => 'IN (''a'', ''b'', ...)',
                               object_type => NULL);
Instead, I tried driving the filter from a table:

DBMS_DATAPUMP.metadata_filter (handle      => h1,
                               NAME        => 'NAME_EXPR',
                               VALUE       => 'IN (SELECT a.export_object_name FROM my_export_table a, user_objects b WHERE a.export_object_name = b.object_name AND b.object_type = ''TABLE'')',
                               object_type => NULL);
but it results in the following error:
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-00942: table or view does not exist
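The ORA-00942 suggests the filter subquery is being resolved in a context where my_export_table is not visible, because the NAME_EXPR text is evaluated by the Data Pump worker rather than in your own session. Two hedged things to try: schema-qualify every table in the subquery (and make sure the job owner can select from it), for example (the scott schema is a placeholder):

DBMS_DATAPUMP.metadata_filter(
  handle => h1,
  name   => 'NAME_EXPR',
  value  => 'IN (SELECT a.export_object_name FROM scott.my_export_table a)');

Alternatively, resolve the list in PL/SQL first (loop over the table and build the quoted IN list into a VARCHAR2) and pass the finished literal to metadata_filter, so the worker never has to run the subquery at all.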
I am importing the dump. All of the tables get imported, but when it reaches the step shown below it gets stuck and nothing more is shown.
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
The dump I am importing is huge, around 46 GB.
select sl.sid, sl.serial#, sl.sofar, sl.totalwork
  from v$session_longops sl, v$session s, dba_datapump_sessions d
 where s.saddr = d.saddr
   and s.sid = sl.sid
   and s.serial# = sl.serial#
 order by start_time;
It shows locks for two SIDs. Should I kill the process and import the dump again? I used the command below to import the dumps.
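Index creation on a 46 GB dump can legitimately take a long time, so slow progress in v$session_longops does not necessarily mean the job is stuck. If you do restart, one hedged option is to keep indexes out of the data load and build them afterwards; the file and directory names are placeholders:

Step 1 - extract the index DDL from the dump into a script:
impdp system/password directory=DP_DIR dumpfile=big.dmp sqlfile=create_indexes.sql include=INDEX

Step 2 - run the import itself without indexes:
impdp system/password directory=DP_DIR dumpfile=big.dmp exclude=INDEX

Afterwards, create_indexes.sql can be edited to add PARALLEL and NOLOGGING clauses and run from SQL*Plus, which is usually much faster than letting the import build the indexes serially.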
I have 2 Oracle servers: one with SUSE Linux (Oracle 10.2.0.4.0) and one with Windows 2003 Server x64 (Oracle 11.2.0.1.0). I made the mistake of installing the 32-bit Oracle 11g on the x64 server; nevertheless it worked for about half a year. Then my Data Pump backups stopped working; I changed SGA_TARGET down to 1024M and Data Pump worked again. Now I want to retire the Windows server and import the Data Pump schemas into the Linux server.
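One point worth noting: a dump written by 11.2 expdp cannot normally be read by 10.2 impdp unless the export is taken in a downgraded format, so the export on the Windows box may need the VERSION parameter; schema name and file names below are placeholders:

expdp system/password schemas=APP_SCHEMA directory=DP_DIR dumpfile=app_for_10g.dmp version=10.2 logfile=app_for_10g.log

The resulting file can then be imported on the Linux server with the 10.2 impdp as usual.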
I'm about to build a machine whose primary focus will be running a 'hobby' app that uses Oracle Express as its database. Some info on the database: the main table has 30 fields and 500K records, and all other tables add up to about 20% of that size.
I run huge queries that return most of the database in about 200K rows with 50 fields. These queries contain many analytic 'partition by' clauses using RANK, PERCENT_RANK, etc. My old PC really struggles with this stuff, and I'm going to invest about 600 euro (excluding monitor etc.) in a new machine.
In general terms, what should I get? I'm thinking about the OS, hard drives (one for data, one for indexes?), a multi-core processor, and the type and size of RAM. Any other considerations I should have?
I have NO experience with APEX, but I have been browsing the documentation while considering its use as a web application/dashboard reporting utility.
However, I'm wondering if it allows connecting to DB2 as well as Oracle...
i.e. building a form that allows a user to 'pull' data for analysis from a DB2 instance and generate reporting results that would then be stored in an Oracle instance.
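APEX itself only talks to the Oracle database it runs in, but that database can reach DB2 through a heterogeneous database link (for example via Oracle Database Gateway / DG4ODBC), and the APEX pages then simply query across the link. A heavily hedged sketch, assuming the gateway is already configured; the link, user, table and TNS alias names are all placeholders:

-- Link from the Oracle database hosting APEX to the DB2 instance (via the gateway)
CREATE DATABASE LINK db2_link
  CONNECT TO "db2user" IDENTIFIED BY "db2password"
  USING 'DG4ODBC_DB2';

-- An APEX report or page process can then pull DB2 data and store results locally
INSERT INTO apex_results_tbl (cust_id, amount)
  SELECT "CUST_ID", "AMOUNT" FROM "SALES"@db2_link;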
I'm using APEX 4.1 on a hosted platform. I'm trying to build a business application and the client wants a dashboard.
Here is the best way I can explain it:
The dashboard displays a series of rows in a table. When you click the "edit" button, it runs a query and displays that row's data on a report. How would I build this?
Let me explain it a different way:
The home page shows a table report with Column 1, Column 2, Column 3.
I want to make it so you can click any ROW and it goes to a different page that shows all of the data. So on the home page you see just the 3 columns, but the next page has all of the columns.
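In APEX this is usually done declaratively: the home-page report gets a link column that passes the row's primary key to a hidden item on the detail page, and the detail page's report filters on that item. A minimal sketch, where the table, page and item names (MY_TABLE, page 2, P2_ID) are assumptions:

-- Home page (page 1) report region: only the summary columns, plus the ID used for the link
SELECT id, column1, column2, column3
  FROM my_table;

-- Detail page (page 2) report region: every column, filtered to the clicked row
SELECT *
  FROM my_table
 WHERE id = :P2_ID;

In the page-1 report attributes, the link column's target is page 2, setting P2_ID to #ID#, which is what carries the clicked row through to the detail page.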
I am trying to build a data warehouse for the Consumer Price Index, so I have downloaded data from the Bureau of Statistics. It is in Excel format, and since I am working with Oracle Warehouse Builder I have converted it to a .csv file so that I can use it as a data source.
Question 1: Is it practical to use a single .csv file as the source of data for a data warehouse?
Question 2: I have 3 dimension tables and a fact table. The dimensions are: one for the region (as the data is organized by region, state etc.), two for consumer goods and services (as the data is organized in groups of goods and services, and goods/services types), and finally time (year and month).
Now how am I going to do the mapping here? Is it possible to do a one-to-one mapping, since all the data required by the dimensions is located in the .csv file?
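A single flat file is a workable source for a warehouse of this size. One hedged option, alongside OWB's flat-file operator, is to expose the .csv as an external table so every dimension and fact mapping can read from the same relational source; the directory, file and column names below are assumptions about the CPI layout, not the real structure:

CREATE TABLE cpi_src_ext (
  region_name  VARCHAR2(100),
  item_group   VARCHAR2(100),
  item_type    VARCHAR2(100),
  price_year   NUMBER(4),
  price_month  NUMBER(2),
  index_value  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY cpi_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('cpi_data.csv')
)
REJECT LIMIT UNLIMITED;

Each dimension mapping can then be a SELECT DISTINCT of its own columns from cpi_src_ext, and the fact mapping joins back to the dimension keys, so one-to-one column mapping from the single file is entirely possible.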
I have been trying to figure out how to write a query that shows each building code, building name and number of rooms, from a database with four tables: emp, build, room, roombook.
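Without seeing the actual column names, a hedged sketch of the usual pattern (build_code, build_name, room_id and the room-to-build foreign key are assumptions) is an outer join from build to room with a GROUP BY, so buildings with no rooms still show a count of zero:

SELECT b.build_code,
       b.build_name,
       COUNT(r.room_id) AS num_rooms
  FROM build b
  LEFT JOIN room r
    ON r.build_code = b.build_code
 GROUP BY b.build_code, b.build_name
 ORDER BY b.build_code;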
I'm working in an Oracle 10g database on an IBM AIX server.
I have 3 tables (tables A, B and C).
Table A has columns -- product, rate and expiration date.
Table B has columns -- product, rate and deductible.
Table C has columns -- product, rider, gender, age and rate.
I also have a Master table which is used to store the data from Tables A, B and C via the insert statement.
I'm trying to create a dynamic SQL insert statement using a shell script to insert data from the columns in Tables A, B and C into my Master table. The Master table contains all of the columns from Tables A, B and C, although a column name may be spelled differently; for example, the Master table has a column named "deduct", while Table B spells the same column "deductible".
I build the dynamic query using a for loop in my shell script (see below).
The problem is that I can't get the correct columns into the dynamic INSERT for the Master table, because the columns differ depending on which table I'm selecting from. So how do I get the correct column list for the Master table into the SQL?
Example Shell Script
--Archive_Rates.txt contains: Table A, Table B, Table C (but the next time my process runs, Archive_Rates.txt might contain Table D, Table E and Table F -- each of which has different columns... but all of the columns are still in the Master table)
for tbl in `more Archive_Rates.txt`
do
  echo 'BEGIN WORK; '            >  rc1.sql
  echo ' '                       >> rc1.sql
  echo 'insert into Master'      >> rc1.sql
  echo '(prod, rate, rate_exp) ' >> rc1.sql
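One hedged way to avoid hard-coding a column list per table is to keep the source-to-master column mapping in the database and let PL/SQL build the lists; the COLUMN_MAP table and its contents below are purely illustrative, not objects that exist in your schema:

-- One row per source column, with the master column it maps to
CREATE TABLE column_map (
  src_table  VARCHAR2(30),
  src_column VARCHAR2(30),
  master_col VARCHAR2(30)
);
-- e.g. INSERT INTO column_map VALUES ('TABLE_B', 'DEDUCTIBLE', 'DEDUCT');

DECLARE
  l_master_cols VARCHAR2(4000);
  l_src_cols    VARCHAR2(4000);
BEGIN
  -- build matching column lists for one source table (the value would come from Archive_Rates.txt)
  FOR rec IN (SELECT src_column, master_col
                FROM column_map
               WHERE src_table = 'TABLE_B'
               ORDER BY master_col)
  LOOP
    l_master_cols := l_master_cols || rec.master_col || ',';
    l_src_cols    := l_src_cols    || rec.src_column || ',';
  END LOOP;
  l_master_cols := RTRIM(l_master_cols, ',');
  l_src_cols    := RTRIM(l_src_cols, ',');

  EXECUTE IMMEDIATE
    'INSERT INTO master (' || l_master_cols || ') ' ||
    'SELECT ' || l_src_cols || ' FROM table_b';
END;
/

The shell loop would then just call this block (via sqlplus) once per table name instead of echoing literal column lists into rc1.sql.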
I have 3 tables: user_login_event, person and resource_viewed_event. What I want is a report that shows, for each month, how many users logged in to our application, how many records were created in the person table, and how many resource-view events were logged in resource_viewed_event.
Let's only worry about the timestamp fields in these tables for now, as I want to use them to join the tables together, or at least to build correlated subqueries along the months. I have tried several options, none of them leading to the desired result:
A left outer join. It works, but it's incredibly slow:
SELECT distinct to_char(ule.TIMESTAMP,'YYYY-MM') as "YYYY-MM",
       count(distinct ule.id) as "User Logins",
       count(distinct ule.user_id) as "Users logged on",
       count(distinct p2.id) as "Existing Users",
       count(distinct p1.id) as "New Users",
       count(distinct r1.id) as "Resources created"
[code]....
Tried the same with left outer joins of temporary tables created through select statements:
select distinct ule.month as "Month",
       count(distinct p1.user_id) as "Users created",
       count(ule.id) as "Logins",
       count(distinct ule.user_id) as "Users logged in",
       count(rv.id) as "Resource Views",
       count(distinct rv.resource_id) as "Resources Viewed"
[code]....
Another approach is to create my own temporary tables using select statements and create fixed Month values which I can use to directly link the sets together.
select distinct ule.loginday as "Month",
       count(distinct ule.id) as "Logins",
       count(distinct ule.user_id) as "Users logged in",
       count(distinct p1.user_id) as "Users created",
       count(distinct p2.user_id) as "Existing users1"
[code]....
Performance is OK with 2 tables, but the example above takes forever to execute.
I also tried an approach with UNION, but this creates separate rows for each table:
SELECT DISTINCT p1.MONTH AS "Month",
       COUNT(DISTINCT p1.user_id) AS "Users created",
       NULL AS "Logins",
       NULL AS "Users Logged in",
       NULL AS "Resource views",
       NULL AS "Resources viewed"
  FROM (SELECT To_char(person.created_on_date, 'YYYY-MM') AS MONTH,
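A pattern that often performs better for this kind of report is to build a month "spine" first and left join one pre-aggregated subquery per table onto it, so each big table is scanned and grouped once and only small result sets are joined. A hedged sketch, where the date range and column names are assumptions based on the queries above:

WITH months AS (
  SELECT TO_CHAR(ADD_MONTHS(DATE '2013-01-01', LEVEL - 1), 'YYYY-MM') AS month
    FROM dual
  CONNECT BY LEVEL <= 24
),
logins AS (
  SELECT TO_CHAR(timestamp, 'YYYY-MM') AS month,
         COUNT(*)                      AS logins,
         COUNT(DISTINCT user_id)       AS users_logged_in
    FROM user_login_event
   GROUP BY TO_CHAR(timestamp, 'YYYY-MM')
),
new_users AS (
  SELECT TO_CHAR(created_on_date, 'YYYY-MM') AS month,
         COUNT(*)                            AS users_created
    FROM person
   GROUP BY TO_CHAR(created_on_date, 'YYYY-MM')
),
views AS (
  SELECT TO_CHAR(timestamp, 'YYYY-MM') AS month,
         COUNT(*)                      AS resource_views,
         COUNT(DISTINCT resource_id)   AS resources_viewed
    FROM resource_viewed_event
   GROUP BY TO_CHAR(timestamp, 'YYYY-MM')
)
SELECT m.month,
       NVL(l.logins, 0)           AS "Logins",
       NVL(l.users_logged_in, 0)  AS "Users logged in",
       NVL(n.users_created, 0)    AS "Users created",
       NVL(v.resource_views, 0)   AS "Resource Views",
       NVL(v.resources_viewed, 0) AS "Resources Viewed"
  FROM months m
  LEFT JOIN logins    l ON l.month = m.month
  LEFT JOIN new_users n ON n.month = m.month
  LEFT JOIN views     v ON v.month = m.month
 ORDER BY m.month;

The fixed month list also guarantees a row for months with no activity, which the plain outer joins above do not.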
I want to load lakhs of records into a table. My problem is that after loading about a quarter of the records, my process abends because of the size of my rollback segment area, and I don't have the option to increase it. So, is there any way to get intermediate commits when I am using the imp or sqlldr utilities, so the entire data loads without abending?
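Both utilities do have switches for this; a hedged sketch, where the file names and the commit interval are placeholders:

imp scott/password file=big_table.dmp tables=big_table commit=y buffer=1048576

sqlldr scott/password control=big_table.ctl data=big_table.dat rows=5000

With imp, COMMIT=Y commits after each buffer-full of rows (sized by BUFFER), so undo/rollback usage stays bounded; with conventional-path SQL*Loader, ROWS sets how many rows are loaded between commits.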