Datafile In Wrong Location
Jun 18, 2013

My production DB has a couple of datafiles that were created in the wrong place, and they are tiny, only 100 MB each. What is the best way to get rid of them?
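If the misplaced datafiles turn out to be empty, one option is to drop them directly; a minimal sketch, assuming Oracle 10gR2 or later, a tablespace called APP_DATA, and hypothetical file paths:

-- check that the datafile holds no extents before dropping it (names are hypothetical)
SELECT COUNT(*)
FROM   dba_extents
WHERE  file_id = (SELECT file_id
                  FROM   dba_data_files
                  WHERE  file_name = '/wrong/place/app_data02.dbf');

-- valid only while the datafile is empty
ALTER TABLESPACE app_data DROP DATAFILE '/wrong/place/app_data02.dbf';

If the files already contain data, the contents would have to be relocated or rebuilt elsewhere first, which is a separate exercise.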
View 3 Replies

I tried to change the current redo log file locations to a multiplexed redo log configuration, and I made this change on the DB. When I import a backup file into the DB after this change, the import process runs much more slowly than it did with the old configuration, roughly four times slower.
When I restore the old redo log file configuration, the import process runs at normal speed again. Why does this change hurt the DB import performance? The old redo log file locations are below:
GROUP#  MEMBER                        BYTES
4       /crtest1/oradata/redo04a.log  524288000
4       /crtest1/oradata/redo04b.log  524288000
3       /crtest1/oradata/redo03a.log  524288000
3       /crtest1/oradata/redo03b.log  524288000
2       /crtest1/oradata/redo02a.log  524288000
2       /crtest1/oradata/redo02b.log  524288000
1       /crtest1/oradata/redo01a.log  524288000
1       /crtest1/oradata/redo01b.log  524288000
The new multiplexed redo log file locations are below:
GROUP#  MEMBER                        BYTES
4       /crtest1/oradata/redo04a.log  524288000
4       /opt/redolog/redo04b.log      524288000
3       /opt/redolog/redo03a.log      524288000
3       /usr/redolog/redo03b.log      524288000
2       /usr/redolog/redo02a.log      524288000
2       /disk1/redolog/redo02b.log    524288000
1       /disk1/redolog/redo01a.log    524288000
1       /crtest1/oradata/redo01b.log  524288000
I think the new configuration is better than the old one from a redundancy standpoint. Here are the disk partitions on the server (a sketch for checking redo-related waits follows the df output):
-bash-3.00$ df -lh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9.6G 2.2G 7.3G 23% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
[code]....
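Since the df output above is truncated, it is only a guess, but it is worth checking whether the new member locations under /opt and /usr sit on the same physical disk as the root filesystem; if they do, every log write during the import competes with that one device. A minimal sketch for looking at redo-related waits and member placement (standard dynamic views, nothing specific to this system):

-- redo-related wait events accumulated since instance startup
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'log file%'
ORDER  BY time_waited DESC;

-- where each redo log member actually lives
SELECT group#, member
FROM   v$logfile
ORDER  BY group#;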
We have taken an export (expdp) backup from the prod database (the primary database in a Data Guard configuration).
1.) The impdp import is very slow, about 10 GB/hr, on the staging database (Data Guard MAXIMUM AVAILABILITY), even though the server configuration, database version and configuration, and operating system are all the same as production. There are no blocking, locking, or waiting sessions.
2.) The impdp import is fast, about 90 GB/hr, on a standalone test database; this test database runs in NOARCHIVELOG mode on Oracle Standard Edition. Apart from that there is no other difference.
CPU, memory, network, and disk I/O all look normal while importing on both databases. Why is there that much difference in import speed?
What is the difference between "ocrconfig -export location" and "ocrconfig -manualbackup location"?
View 1 Replies View Related

I have database jobs that upload the data in my applications. My problem is that while copying records from one application to another, the format of the dates goes wrong. For example:
The date in one column is 01-JAN-1941, but in the other record, after copying, it ends up as 01-JAN-2041.
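This symptom is typical of a two-digit year passing through an RR (or YY) date mask during the copy, where '41' comes out as 2041 instead of 1941; this is an assumption, since the copy job's code is not shown. A minimal sketch of the difference:

-- with a two-digit year and the RR mask, '41' maps to 2041 when the current year is 20xx
SELECT TO_CHAR(TO_DATE('01-JAN-41',   'DD-MON-RR'),   'DD-MON-YYYY') AS rr_result,
       TO_CHAR(TO_DATE('01-JAN-1941', 'DD-MON-YYYY'), 'DD-MON-YYYY') AS yyyy_result
FROM   dual;

Carrying the full four-digit year, or copying DATE values without converting them to text and back, avoids the century guess.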
I've written the code (see below), and when I run it I get an error:
ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind
I don't know how to find the offending value in the table. How can I find it? (A sketch for locating rows that overflow the 50-character limit follows the code.)
DECLARE
   TYPE rowids IS TABLE OF ROWID;
   r1 rowids;
   TYPE t_varchar IS TABLE OF VARCHAR2(50);
   n1 t_varchar;
   -- alias "e" added so that e.rowid in the select list resolves
   CURSOR c1 IS
      SELECT e.rowid rid, msisdn_displayed
      FROM   the_table e
      WHERE  contract_id IS NOT NULL;
BEGIN
   OPEN c1;
[code].........
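The "Bulk Bind: Truncated Bind" message usually means some fetched values are longer than the 50 characters the collection element allows; a minimal sketch for finding them, assuming the same table and column names as above:

-- rows whose value would overflow a VARCHAR2(50) collection element
SELECT rowid, msisdn_displayed, LENGTH(msisdn_displayed) AS len
FROM   the_table
WHERE  contract_id IS NOT NULL
AND    LENGTH(msisdn_displayed) > 50;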
I have a strange problem with a query using LIKE and %.
When I run this script:
ALTER SESSION SET NLS_SORT = 'BINARY_CI';
ALTER SESSION SET NLS_COMP = 'LINGUISTIC';
-- drop table test1;
CREATE TABLE TEST1(K1 NVARCHAR2(80));
INSERT INTO TEST1 VALUES ('gsdk');
[code]....
I get this:
K1
ŁFa
ła
Śab <- WRONG
Śrrrb <- WRONG
4 rows selected
When I change the datatype of the column to VARCHAR2, this code works correctly.
The execution plan:
PLAN_TABLE_OUTPUT
SQL_ID d3d64aupz4bb5, child number 2
select * from TEST1 where k1 like N'Ł%'
Plan hash value: 4122059633
| Id | Operation         | Name  | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT  |       |      |       |     2 (100)|          |
|* 1 | TABLE ACCESS FULL | TEST1 |    1 |    82 |     2   (0)| 00:00:01 |
[code]....
Where will I find the bash_profile location on an IBM AIX server?
View 2 Replies View Related

I have one issue. My server is in France and it is in the French time zone, but when I query SYSDATE it returns US time.
In '/etc/sysconfig/clock/'
Zone= europe/paris
UTC= true
echo $TZ variable is returning nothing.
sysdate = us time
systimestamp= us time
current_timestamp = french time
current_date = french time
dbtimezone = Europe/Warsaw, sessiontimezone = +02:00 (which is also a European time zone offset)
tz_offset(dbtimezone)=+2.00, tz_offset(sessiontimezone)= +2.00 i.e europe
os timezone= europe/paris.
This command "./emctl config agent getTZ" is also returning timezone as europe/paris
Also in "emd.properties" file "agentTZRegion" parameter is set to europe/paris
Oracle version= 11.2.0
Now I don't understand why SYSDATE and SYSTIMESTAMP return the US time zone while everything else returns the French time zone.
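SYSDATE and SYSTIMESTAMP are taken from the operating system as seen by the instance's own process environment, so if the database or the listener that spawns the server processes was started from an environment whose TZ pointed at a US zone, they keep reporting that zone until those processes are restarted with the correct TZ. That is only a common explanation, not a confirmed diagnosis for this server. A quick comparison query:

-- OS-environment-based values (SYSDATE/SYSTIMESTAMP) versus
-- session-time-zone-based values (CURRENT_DATE/CURRENT_TIMESTAMP)
SELECT sysdate, systimestamp, current_date, current_timestamp,
       dbtimezone, sessiontimezone
FROM   dual;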
I need to export data to a CSV file, but I have problems with diacritics. The simple PL/SQL looks like:
declare
f utl_file.file_type;
cursor c1 is Select ACTIVITY_SUB_TYPE
from the_table;
begin
[code]...
After running the PL/SQL, the record in the CSV file looks like "Vypršanie skuš.lehoty kontakt", so there is a problem with the diacritics.
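One possible approach, assuming the file's consumer expects UTF-8 and the values carry characters outside the character set the file is currently written in, is to write the file through the NCHAR variants of UTL_FILE, which emit UTF-8; a minimal sketch with a hypothetical directory object name:

DECLARE
   f utl_file.file_type;
BEGIN
   -- FOPEN_NCHAR writes the file contents in UTF-8
   f := utl_file.fopen_nchar('EXPORT_DIR', 'activity.csv', 'w', 32767);
   FOR r IN (SELECT activity_sub_type FROM the_table) LOOP
      utl_file.put_line_nchar(f, TO_NCHAR(r.activity_sub_type));
   END LOOP;
   utl_file.fclose(f);
END;
/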
I am using pipelined functions. I've written a few with no problem, but this one gives an error even though I am using techniques that appear very similar to ones that work.
I am doing this all in a package.
The type definition is:
TYPE SUSPECT_LINKAGES_FAC_RECORD IS RECORD
(
PATIENT_ID TUMOR.TUMOR_PATIENT_ID%TYPE,
CENTRAL_SEQ TUMOR.TUMOR_CENTRAL_SEQ%TYPE,
MP_REVIEW_FLAG NUMBER(1),
FACILITY_FLAG NUMBER
);
The variable definition is:
OUT_REC SUSPECT_LINKAGES_FAC_RECORD;
The line with the error is:
PIPE ROW(OUT_REC);
This is the entire function:
FUNCTION GET_SUSPECT_LINKAGE_FAC_FLAGS RETURN SUSPECT_LINKAGES_TABLE PIPELINED AS
CURSOR CURS_SUSPECT_LINKAGES IS
SELECT * FROM TABLE(TUMOR_UTILITIES.GET_SUSPECT_LINKAGE_FLAGS())
order by 1,2,3 DESC;
TEMP_REC SUSPECT_LINKAGES_RECORD;
MATCH_COUNT NUMBER;
OUT_REC SUSPECT_LINKAGES_FAC_RECORD;
[code].....
I get "PL-00382 expression is of wrong type" on both pipe row (out_rec); lines.
I got the error 'PLS-00382: expression is of wrong type' with the code below.
--declaration
l_recipe_detail_tbl apps.gmd_recipe_detail.recipe_detail_tbl;
begin
ln_recipe_id := NULL;
[Code].....
I have a strange problem with a query using LIKE and %.
When I run this script:
ALTER SESSION SET NLS_SORT = 'BINARY_CI';
ALTER SESSION SET NLS_COMP = 'LINGUISTIC';
-- SELECT * FROM NLS_SESSION_PARAMETERS;
-- drop table test1;
CREATE TABLE TEST1(K1 NVARCHAR2(80));
[code]....
When I change the datatype to VARCHAR2, this code works correctly.
The execution plan:
PLAN_TABLE_OUTPUT
------------------------------------------------------------
SQL_ID d3d64aupz4bb5, child number 2
-------------------------------------
select * from TEST1 where k1 like N'Ł%'
[code]....
Note - dynamic sampling used for this statement (level=2)
I have a show/hide region of type HTML with an empty region source, which I use to show/hide a subregion (a tabular form), because I don't like the layout of a show/hide region when it is shown. This worked without a problem in APEX 4.1.1.
However, in 4.2 the tabular form is just a couple of pixels wide. I have been playing around with grids but can't seem to find the right combination of settings; the layout is messed up.
I put the app on apex.oracle.com :
workspace : xonixrs
login/password : demo/demo
We have a bunch of jobs scheduled using DBMS_JOB (yes, I know I should be using DBMS_SCHEDULER, but we haven't migrated there yet). We are running Oracle 11.2.0.3 on Windows Server 2003 x64.
For example, we have a job that is supposed to run every Wednesday at 20:00. The interval we have set up is "NEXT_DAY(TRUNC(SYSDATE), 'WEDNESDAY')+20/24". This has been working as intended. Today (Monday), however, the job kicked off at 11:52. It was the wrong day and the wrong time.
I don't see anything weird in my alert log. Where else should I check to figure out why this job ran today? (A query sketch for checking the job's metadata follows the listing below.)
JOB  LAST_DATE             LAST_SEC  NEXT_DATE            NEXT_SEC  INTERVAL                                     WHAT
293  1/7/2013 11:52:46 AM  11:52:46  1/9/2013 8:00:00 PM  20:00:00  NEXT_DAY(TRUNC(SYSDATE), 'WEDNESDAY')+20/24  ACQUISITIONS.WORKLOAD_STATUS_UPDATE_NOTIF;
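One thing worth ruling out is a manual DBMS_JOB.RUN call (which executes the job immediately, regardless of NEXT_DATE) or a job-queue coordinator restart; this is only a guess. A minimal sketch of what to check, using standard dictionary views:

-- current scheduling metadata for the job
SELECT job, broken, failures, last_date, this_date, next_date, interval, what
FROM   dba_jobs
WHERE  job = 293;

-- anything executing right now?
SELECT * FROM dba_jobs_running;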
How can I add a new column at a specified location in an Oracle table? I have a flat file from which data is inserted into the table. Initially I don't need that column, so I don't have it in the table, but if I later need that column at a particular position, how can I do this?
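Oracle always appends a new column at the end of the table; short of rebuilding the table (or, on 12c and later, toggling later columns INVISIBLE and back to VISIBLE to push them to the end of the logical order), a common workaround is to expose the desired column order through a view. A minimal sketch with hypothetical table and column names:

-- the new column lands at the end of the physical column order
ALTER TABLE my_table ADD (new_col VARCHAR2(50));

-- present the columns in the desired order through a view
CREATE OR REPLACE VIEW my_table_v AS
SELECT col1, new_col, col2, col3
FROM   my_table;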
View 1 Replies View Related

I am trying to query an Oracle database table that contains sample records like the ones below:
DATE LOCN PROD1 PROD2
09/12/2011 L1 6 2
09/12/2011 L2 3 7
10/12/2011 L1 4 1
10/12/2011 L2 3 3
11/12/2011 L1 2 2
11/12/2011 L2 2 0
12/12/2011 L1 4 1
12/12/2011 L2 5 0
I am trying to use the Oracle SUM() function to get a grouping by DATE and LOCATION of SUM(PROD1 + PROD2) for the date periods 10/12/2011 and 11/12/2011. Below is the desired end result (a query sketch follows it).
DATE LOC1 LOC2
10/12/2011 5 6
11/12/2011 4 2
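A minimal sketch using conditional aggregation, assuming the table is called sample_data and the first column is a DATE stored in a column named dt (DATE itself is a reserved word, so the real column presumably has another name):

SELECT dt,
       SUM(CASE WHEN locn = 'L1' THEN prod1 + prod2 END) AS loc1,
       SUM(CASE WHEN locn = 'L2' THEN prod1 + prod2 END) AS loc2
FROM   sample_data
WHERE  dt BETWEEN DATE '2011-12-10' AND DATE '2011-12-11'
GROUP  BY dt
ORDER  BY dt;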
I have an issue trying to execute some queries using a dblink. When I run any query, the numeric fields display only 4 digits, while in the source database the fields have 5 digits. The dblink goes between an MSSQL database and an Oracle database.
Example:
MSSQL
select cardnumber from card
cardnumber
19121
19122
Oracle (with dblink)
select cardnumber from card@dblink1
cardnumber
1912
1912
I'm trying to call a custom made PL/SQL function in a SQL query. I want to supply the values of the parameters during the query. I can call the function if I "hard code" the parameter values, but when I try to supply them I get the ORA-06553 error.
This call works:
select pkg_tm_import_util.wb_screen_hr_refresh_func('','','','','','','','','','','','') from dual
However, this one does not, even though it should be equivalent to the call that works:
select pkg_tm_import_util.wb_screen_hr_refresh_func(
''''','||
''''','||
''''','||
''''','||
''''','||
''''','||
''''','||
[code]....
I use SQL to extract data from Quality Center (QC) to Excel. I have a field of type String. It contains the following values:
1) 1161, 1162, 1163
2) DHM, 162
3) DTH, 163
etc
But when I extract this to Excel, the data is displayed as:
1) 116111621163
2) DHM, 162
3) DTH, 163
The value in the first row is displayed without commas. How do I extract the data exactly as it is in the field?
I have a table which contains a huge amount of data, around 12 lakh (1.2 million) records. When I use the SUM function grouped on accountname and docdate, it gives a wrong value. Once I restart the server it gives the correct value. For one or two days it gives the correct value, and after that I get the same problem again; if I restart again it gives the correct value.
I use Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 64 bit server on Linux.
In a WHEN-VALIDATE-ITEM trigger, I am checking a condition like:
if %>=90 and days<=7 then status='Y'
else status='N'
It is working fine in 95% of cases, but in some cases it wrongly updates: even when both conditions are true, status = 'N'.
This happens on the user side on rare occasions, so what we currently do is ask the user to delete that line and insert it again, and then it works.
What I need is a way to recreate the scenario in which it updates wrongly.
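One scenario worth trying to reproduce, although it is only an assumption since the full trigger is not shown, is one of the two operands being NULL at validation time: a NULL comparison makes the whole IF condition evaluate to UNKNOWN, so the ELSE branch runs even though the visible values look correct. A minimal sketch with hypothetical variable names:

DECLARE
   pct    NUMBER := NULL;   -- e.g. the percentage item not yet populated
   days   NUMBER := 5;
   status VARCHAR2(1);
BEGIN
   IF pct >= 90 AND days <= 7 THEN
      status := 'Y';
   ELSE
      status := 'N';        -- reached because pct IS NULL, not because the rule failed
   END IF;
   DBMS_OUTPUT.PUT_LINE('status = ' || status);
END;
/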
I am getting the error PLS-00306: wrong number or types of arguments in call to 'SECURITY_AUDIT_DTL_TYPE' from the code below.
CREATE OR REPLACE PROCEDURE load_data_audit_trail_dtl
AS
TYPE security_type IS TABLE OF SECUIRTY%ROWTYPE
INDEX BY PLS_INTEGER;
security_type_var security_type;
[code]....
I am getting an unexpected result while fetching records from the DBA_TS_QUOTAS and DBA_USERS views.
FYI..
select username,DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE from dba_users
where username='DWHODS'
USERNAME  DEFAULT_TABLESPACE  TEMPORARY_TABLESPACE
DWHODS    DWHODS_TBS_DATA     DWHODS_TEMP_DATA
select tablespace_name, username, bytes / 1024 / 1024 "Used MB",
[code].....
The DBA_TS_QUOTAS view gives the tablespace name "USERS", whereas DBA_USERS shows the default tablespace name "DWHODS_TBS_DATA".
I am running some FORALL ... UPDATE statements in dynamic anonymous blocks and I am seeing this intermittently (a sketch of binding a SQL collection type into such a block follows the error listing):
ORA-06550: line 10, column 28:
PLS-00382: expression is of wrong type
ORA-06550: line 10, column 17:
[Code].....
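PLS-00382 here often comes from the collection handed to the dynamic block not being of a type the block can see; a SQL-level collection created with CREATE TYPE can be bound into an anonymous block, while a package-local PL/SQL type generally cannot. A minimal sketch under that assumption, with hypothetical table and type names:

-- a SQL-level collection type is visible both to the caller and to the dynamic block
CREATE OR REPLACE TYPE id_tab AS TABLE OF NUMBER;
/
DECLARE
   l_ids id_tab := id_tab(101, 102, 103);
BEGIN
   EXECUTE IMMEDIATE q'[
      DECLARE
         p_ids id_tab;
      BEGIN
         p_ids := :ids;
         FORALL i IN 1 .. p_ids.COUNT
            UPDATE some_table
            SET    processed_flag = 'Y'
            WHERE  id = p_ids(i);
      END;]'
   USING l_ids;
END;
/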
...continued from "Problems with full outer join". I think ANSI joins in Oracle don't work correctly, or am I doing something wrong? The query looks like this:
select nvl(k.id_pers,t.id_pers) id_pers, k.dat_avst
, id_trans, dat_trans
from
( select ka.id_pers, ka.dat_avst, ka.dat_nasta_avst
[code]...
It's a full outer join between one table (with a subquery) and an inline view with two tables. You can see the returned rows in the listing below. The query returns one row where there is a match between t_trans and t_kontoavst. It also returns two rows from table t_kontoavst with no correspondence in t_trans. Finally it returns 26 rows from the table t_trans with no correspondence in t_kontoavst. But among them there are many rows that contradict the conditions:
and trunc(dat_trans) >= to_date('20040101','yyyymmdd')
and trunc(dat_trans) <= to_date('20050604','yyyymmdd')
Actually it seems to return all 27 rows in t_trans (one of them joined to t_kontoavst). These conditions are actually not part of the join, so I changed it to:
where trunc(dat_trans) >= to_date('20040101','yyyymmdd')
and trunc(dat_trans) <= to_date('20050604','yyyymmdd')
It's not clear to me whether this WHERE condition applies to the joined result or just to the right-joined inline view. With this change the correct rows from t_trans were returned, but unfortunately the two rows from t_kontoavst with no correspondence in t_trans disappeared. I thought maybe that is because dat_trans is null for these two rows after the join; therefore I also allowed dat_trans to be null, which can only happen when the t_trans row is missing. (A sketch of filtering t_trans before the join follows the data listings below.)
where dat_trans is null
or (trunc(dat_trans) >= to_date('20040101','yyyymmdd')
and trunc(dat_trans) <= to_date('20050604','yyyymmdd'))
But that didn't change the result. I have also tried right and left joins and they produce similar errors. One other thing I tried was to replace the inline view with one of the underlying tables, t_trans, but the result was the same. Data returned from the original query (see query above):
ID_PERS       DAT_AVST    ID_TRANS  DAT_TRANS
194505050502  2005-05-01  172164    2005-05-16
194505050502  null        172372    2005-06-16
194505050502  null        172373    2005-07-16
[code]...
Data in the tables. SQL statement which produced this data:
select id_trans, id_pers, dat_trans
from t_trans
order by id_pers, dat_trans, id_trans
ID_TRANS  ID_PERS       DAT_TRANS
172164    194505050502  2005-05-16
172372    194505050502  2005-06-16
172373    194505050502  2005-07-16
[code]...
SQL Statement which produced this data:
select id_pers, dat_avst
from t_kontoavst
order by id_pers, dat_avst
ID_PERS       DAT_AVST
194505050502  1997-05-01
194505050502  2005-05-01
195808080807  1997-05-01
[code]...
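With a full outer join, a single-table filter written in the WHERE clause runs after the join and strips the null-extended rows, which matches the behaviour described above. One way to keep the unmatched t_kontoavst rows and still restrict t_trans to the wanted dates is to filter t_trans before the join, inside an inline view; a minimal sketch using the column names from the question (the original subquery on t_kontoavst is left out):

SELECT NVL(k.id_pers, t.id_pers) id_pers,
       k.dat_avst,
       t.id_trans,
       t.dat_trans
FROM   t_kontoavst k
FULL OUTER JOIN (
         -- restrict t_trans to the date range before joining, so the filter
         -- cannot remove null-extended rows afterwards
         SELECT id_pers, id_trans, dat_trans
         FROM   t_trans
         WHERE  TRUNC(dat_trans) >= DATE '2004-01-01'
         AND    TRUNC(dat_trans) <= DATE '2005-06-04'
       ) t
       ON t.id_pers = k.id_pers;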
I am using the following query with LIKE 'T_%'. I am getting 80 rows, and the first table_name doesn't even begin with 'T_'.
The first table name does not start with 'T_', so why is it appearing? (A sketch using an ESCAPE clause follows the output.)
*********************************************************************
SELECT 'Truncate table epic500.'||table_name
FROM user_tables where table_name like 'T_%' order by table_name;
*********************************************************************
output:
Truncate table epic500.TEMP_ENC_DEL
Truncate table epic500.T_ACCOMMODATION_CODE
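In a LIKE pattern the underscore is a single-character wildcard, so 'T_%' also matches names such as TEMP_ENC_DEL; escaping the underscore makes it a literal:

SELECT 'Truncate table epic500.' || table_name
FROM   user_tables
WHERE  table_name LIKE 'T\_%' ESCAPE '\'
ORDER  BY table_name;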
I would like to implement Oracle RAC with 2 nodes on a Standard Edition licence. I did a lot of work when both of these nodes, with 3 NICs each, were plugged into the same switch. Now I need to build a RAC where the two nodes are in separate locations, about 4 miles from each other. What should I explain to our network administrator that he needs to do to implement this solution? I've been told that they can provide an FO channel to each location, but I don't have an exact, clear explanation.
View 3 Replies View RelatedI want to upload csv file from share location(another host) & store data in table
View 2 Replies View Related

I have a master-detail form on which I have 2 buttons: a Save button and a Location button in the detail block. On the Location button I am calling a form and updating the location of the material entered in the detail (tabular) block.
The thing is, my form should not get saved without the location being updated for each record entered in the detail (tabular) block. If the user tries to save the form without updating or pressing the Location button, it should give the message PLEASE UPDATE YOUR LOCATION.