Will Resetting the SYSMAN and DBSNMP Passwords Impact the DB?
Aug 28, 2013
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit
PL/SQL Release 11.1.0.6.0
CORE 11.1.0.6.0
TNS for Linux: Version 11.1.0.6.0
NLSRTL Version 11.1.0.6.0
When I checked the status with $ emctl status dbconsole, I got this error: OC4J Configuration issue.
The forums told me to drop and recreate the repository. But I noticed that the moment you drop the repository, you lose the SYSMAN and DBSNMP passwords:

emca -deconfig dbcontrol db -repos drop
emca -config dbcontrol db -repos create

I tried the default passwords, but they did not work. It looks like the only option I have is to reset the passwords, but my main concern is whether this impacts the DB in general, or whether it is common practice. Or is there any other way to get OEM back (I already dropped the repository)?
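For what it's worth, a commonly used sequence (a sketch, with placeholder passwords) is to reset the two accounts in SQL*Plus as SYSDBA and then let emca rebuild dbconsole; -repos recreate is the one-step alternative to the drop/create pair above:

SQL> ALTER USER sysman IDENTIFIED BY new_sysman_pwd ACCOUNT UNLOCK;
SQL> ALTER USER dbsnmp IDENTIFIED BY new_dbsnmp_pwd ACCOUNT UNLOCK;

$ emca -config dbcontrol db -repos recreate

SYSMAN and DBSNMP are EM-internal accounts rather than application schemas, which is why resetting them is generally considered safe for the database itself.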
Recently I migrated our Oracle database to a new machine using exp/imp on a schema basis. After the import finished, I had tons of invalid objects in the database. I ran the utlrp.sql script and most of them were validated. Then I recompiled the remainder manually in EM, but two invalid objects (the MGMT_JOB_UI package specification and body) in the SYSMAN schema gave errors while recompiling.
Now when I click on any scheduled job to edit it or view its schedule, EM throws the following error:
X Error jobType - jobType page property expected
I think it's related to that invalid package. The errors from compiling the specification are as follows:
Line # = 50   Column # = 1   Error Text = PL/SQL: Declaration ignored
Line # = 65   Column # = 9   Error Text = PLS-00201: identifier 'JOBRUNTABLETYPE' must be declared
Line # = 88   Column # = 1   Error Text = PL/SQL: Declaration ignored
Line # = 107  Column # = 9   Error Text = PLS-00201: identifier 'JOBEXECTABLETYPE' must be declared
The package specification is as follows:
AS
  ---------------------------------------------------------------------------
  -- type definition
  TYPE CURSOR_TYPE IS REF CURSOR;

  -- Get the targets for this step, if any
  -- if p_return_display_names is false, return the target's internal name
  FUNCTION get_step_targets (
    p_step_id              NUMBER,
    p_return_display_names BOOLEAN DEFAULT true
  ) RETURN SMP_EMD_STRING_ARRAY;

  -- Get the targets for this step, if any, as a comma separated string
  -- if p_return_display_names is false, return the target's internal name
  FUNCTION get_step_targets_str (
    p_step_id              NUMBER,
    p_return_display_names BOOLEAN DEFAULT true
  ) RETURN VARCHAR2;

  -- Get the parameters for this job, filter as specified for this jobtype
  PROCEDURE get_visible_params (
    p_job_id     RAW,
    p_exec_id    RAW,
    p_params_out OUT CURSOR_TYPE
  );

  -- Get the URI for this uri_use
  -- see emSDK/job/dtd/UriSource.java for uri_use constants
  FUNCTION get_display_uri (
    p_job_type IN VARCHAR2,
    p_uri_use  IN
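In case it helps frame the question: the standard recompile-and-inspect loop is sketched below. The PLS-00201 errors suggest JOBRUNTABLETYPE and JOBEXECTABLETYPE (presumably types defined elsewhere in the SYSMAN schema) are missing or invalid, and would need to be fixed before this package will compile:

ALTER PACKAGE sysman.mgmt_job_ui COMPILE;
ALTER PACKAGE sysman.mgmt_job_ui COMPILE BODY;

-- list whatever errors remain
SELECT line, position, text
  FROM dba_errors
 WHERE owner = 'SYSMAN'
   AND name  = 'MGMT_JOB_UI'
 ORDER BY sequence;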
We have over 200 different servers running Oracle, and every 2 months our passwords expire. Instead of having the DBAs go into every server to sync the passwords, I would like some way of pushing the encrypted password from one Oracle DB to the other databases.
Two things to keep in mind.
1) Every user may not be in every DB, so if the user does not exist, the code should not try to update that user's password.
2) I have all my DBs in a tnsnames.ora file, or I can put them into an easy-to-parse file, so I can connect to every DB.
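One widely used approach (undocumented syntax, so treat this as a hedged sketch) is to carry the password hash itself with ALTER USER ... IDENTIFIED BY VALUES. The block below assumes the hash was captured on the source DB (DBA_USERS.PASSWORD on 10g and earlier; SYS.USER$ on 11g) and is supplied as the SQL*Plus substitution variable &hash, with SCOTT as a placeholder username; it skips databases where the user does not exist:

DECLARE
  v_cnt PLS_INTEGER;
BEGIN
  SELECT COUNT(*) INTO v_cnt
    FROM dba_users
   WHERE username = 'SCOTT';
  IF v_cnt = 1 THEN
    -- re-applies the stored hash without knowing the clear-text password
    EXECUTE IMMEDIATE
      'ALTER USER SCOTT IDENTIFIED BY VALUES ''&hash''';
  END IF;
END;
/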
We have more than 50 APEX apps, and deploying them through the APEX console is becoming a pain. We do not get SYS or schema passwords, since the databases are controlled by DBAs; we only get the APEX_PUBLIC_USER, INTERNAL, and workspace admin passwords. If any application/workspace needs a schema association, we log in to the INTERNAL workspace and map the schema to the workspace. I read a couple of blogs that discussed automating deployments through SQL*Plus, but connecting as SYS or the schema user will not work for us. I want to know if there is any way I can import deployments without logging into the workspace. We already have the workspaces and applications; the deployments will update existing applications or deploy new applications into existing workspaces. Oracle Application Express 4.1.
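For context, the usual scripted import via SQL*Plus looks like the sketch below (with placeholder IDs); note it requires a DB account with import privileges, which is exactly what is missing in this situation:

begin
  apex_application_install.set_workspace_id( 1234567 );  -- target workspace ID
  apex_application_install.set_application_id( 100 );    -- target application ID
  apex_application_install.generate_offset;
  apex_application_install.set_schema( 'MY_SCHEMA' );    -- parsing schema
end;
/
@f100.sql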
I am working with an asp.net/vb.net application whose users have Oracle accounts to connect to the application. The database is set up to allow three failed login attempts. When connecting directly to the database, the sys.user$ lcount field increments on unsuccessful login attempts and resets to 0 on successful login attempts. This is what I expect should happen.
However, when logging in through the application, using:
objConnection = New OracleConnection(m_strConnectionString)
objConnection.Open()
the lcount field sporadically increments on unsuccessful login attempts (sometimes it does, sometimes it doesn't) and never resets to 0 on successful login attempts. I would have expected it to behave as it does when connecting directly to the database.
Why does the lcount field not reset on a successful login through the application? How can I make it increment on each unsuccessful login attempt and reset on each successful login attempt?
I'm wondering how I can use a counter to number records in a table I'm inserting into. I need the counter to reset based on changing data in my table. For example, I have the following
seq_number   name_type
1            Short Name
1            Short Name
1            Short Name
2            Short Name
2            Short Name
I'd like my results to be the following:

seq_number   name_type
1            Short Name - 1
1            Short Name - 2
1            Short Name - 3
2            Short Name - 1
2            Short Name - 2
I'd like my counter to increment so that I can add a sequence number to the end of my name type, but when my seq_number field changes I'd like to reset and restart my counter.
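If it helps, the analytic function ROW_NUMBER() does exactly this kind of partitioned, self-resetting count. A minimal sketch (table and sort-column names are placeholders):

SELECT seq_number,
       name_type || ' - ' ||
         ROW_NUMBER() OVER (PARTITION BY seq_number  -- counter resets per seq_number
                            ORDER BY sort_col)       -- placeholder sort key
         AS name_type
  FROM names_src;                                    -- placeholder table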
We have procedure that writes text to an Excel file.
The 10g database is set up with the US7ASCII character set. This causes the characters in the Excel spreadsheet to appear strangely. As an example, an 'a' appears with a character similar to '~' on top of the 'a'.
Is there any way of resetting the character set at runtime of the procedure?
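The database character set itself cannot be changed at runtime; the knobs that do exist are the client-side NLS_LANG setting and an explicit CONVERT call, sketched below with assumed character-set, column, and table names. Note that data already stored lossily in US7ASCII cannot be recovered this way:

SELECT CONVERT(text_col, 'WE8MSWIN1252', 'US7ASCII') AS converted_text
  FROM some_table;  -- column and table names are placeholders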
My supervisor wants to remove all the archive logs, since it was just a test for 1 year and the DB is not actually in use YET. The problem is that they want it used as soon as possible, without recreating the database, just removing some data from tables and removing the archive logs. How do I safely remove the existing archived logs and create a full backup, with archiving starting fresh at sequence 1?
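A hedged sketch of the usual cleanup (assumes an RMAN retention policy is configured); note that the log sequence only restarts at 1 after an OPEN RESETLOGS, i.e. a new database incarnation, not merely after deleting old archive logs:

RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
RMAN> DELETE NOPROMPT OBSOLETE;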
I just found out that the LISTENER I had created with a password was set up with the "-inherit" setting. Twice I succeeded in doing this, but it seems it was not working when I tested it. Thus, I opted to get rid of its listener.bak file and set the password again. Now I just can't stop it, because of this error message: "TNS-01169: The listener has not recognized the password".
I've tried every password I made, and tried resetting it, but I'm not successful either; it's the same error message. My last option would be to restart the DB server (since the original listener.ora is already intact).
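For reference, the password is normally managed from within lsnrctl, as in this sketch (SET PASSWORD authenticates the session before CHANGE_PASSWORD); if the stored password is truly lost, the usual fallback is to remove the PASSWORDS_LISTENER line from listener.ora and restart the listener:

$ lsnrctl
LSNRCTL> SET CURRENT_LISTENER listener
LSNRCTL> SET PASSWORD
LSNRCTL> CHANGE_PASSWORD
LSNRCTL> SAVE_CONFIG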
Is there any way to alter a sequence, depending on a specific condition, so that it starts with 1 in Form Builder?
I want to reset the sequence to 1 whenever the month changes. Is there any way?
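Oracle has no native "reset" for a sequence, but a common workaround is the negative-increment trick, which a monthly job could run. A minimal sketch (the sequence name is a placeholder):

DECLARE
  v_curr NUMBER;
BEGIN
  -- find the current value
  EXECUTE IMMEDIATE 'SELECT monthly_seq.NEXTVAL FROM dual' INTO v_curr;
  -- step the sequence back so the next value is 1
  EXECUTE IMMEDIATE 'ALTER SEQUENCE monthly_seq INCREMENT BY -' || (v_curr - 1);
  EXECUTE IMMEDIATE 'SELECT monthly_seq.NEXTVAL FROM dual' INTO v_curr;  -- now 1
  -- restore the normal increment
  EXECUTE IMMEDIATE 'ALTER SEQUENCE monthly_seq INCREMENT BY 1';
END;
/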
Could I make the detail block invisible and display it whenever a button is pressed? I want to put the WHERE clause for that block in the button, and if the button is not pressed, I want to display nothing in the detail block.
One of my clients needs to remove three (of four) CPUs to comply with the licensing agreement with Oracle.
To avoid problems, and also to list the possible problems that removing the CPUs can bring, I wish to make a survey of the possible impacts, especially on performance, that the removal can cause.
I'm trying to find some information on the performance impact of a trigger on a heavily updated table when the condition to fire the trigger is NOT met. In other words, what I'm really trying to find out is the performance cost of the system checking the trigger's condition to determine whether it should fire.
For example, I have a batch job that inserts into and updates a table heavily, but the batch job almost never updates the column in question to the value that would cause the trigger to fire; it does, however, often update that column to other values.
I know about the many downsides of using triggers in general, but I'm working with a third-party application, so more optimal solutions aren't an option.
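For what it's worth, the usual way to keep the firing check cheap is to combine UPDATE OF <column> with a WHEN clause, so the trigger body is never invoked unless the condition holds. A minimal sketch (table, column, and value names are placeholders):

CREATE OR REPLACE TRIGGER batch_t_aur
  AFTER UPDATE OF status_col ON batch_t   -- considered only when this column is updated
  FOR EACH ROW
  WHEN (NEW.status_col = 'FIRE_ME')       -- body is skipped unless this is true
BEGIN
  NULL;  -- real work would go here
END;
/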
In my Oracle Enterprise Manager, under "User I/O", I basically have four categories.
If we rank them out of ten, it would be like:

read by other session     2/10
db file scattered read    1/10
direct path read          0.5/10
db file sequential read   6.5/10
All of these are coming from 2 tables, which are involved almost all the time. Is there some way to handle "read by other session" and "db file sequential read"?
I rebuild the indexes of the involved tables once every 10 days, and statistics for these tables are collected every day using ANALYZE TABLE "xxx" COMPUTE STATISTICS. Tell me the in-depth approach I should take to minimize the impact, as users are complaining about performance.
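As a side note, ANALYZE ... COMPUTE STATISTICS is deprecated for gathering optimizer statistics; DBMS_STATS is the supported route. A minimal sketch (the schema name is a placeholder):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',                   -- placeholder schema
    tabname          => 'XXX',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);                         -- gather index stats too
END;
/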
I am attempting to read from the maillog of our server, but I wish to make as few changes as possible, for fear of blocking other systems' access to the file.
I was initially going to call CREATE DIRECTORY maillogs AS '/var/log/maillog', and then DROP DIRECTORY maillogs when I was done, but I found my user does not have the "create any directory" privilege.
Rather than compromise the security of the existing database configuration, I thought I would permanently add the maillogs directory to the list of available data directories. Are there any implications for the filesystem if I do this, or should I be able to add it without considering side effects?
Understand that I will only be opening the file for READ (text) access.
Primarily, I am concerned that Oracle will keep a file pointer open in the background, or something of that nature, that would block other programs from writing to the file even after I close the file pointer. I want to make as little impact as possible on the file system.
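For reference, a read-only access path might look like the sketch below (the directory path and grantee are assumptions, and note the directory object must point at /var/log, the directory, not at the maillog file itself). UTL_FILE opened in 'r' mode does not take a lock that blocks POSIX writers, though verifying that on your platform is prudent:

CREATE DIRECTORY maillogs AS '/var/log';          -- the directory, not the file
GRANT READ ON DIRECTORY maillogs TO log_reader;   -- placeholder grantee

DECLARE
  f  UTL_FILE.FILE_TYPE;
  ln VARCHAR2(32767);
BEGIN
  f := UTL_FILE.FOPEN('MAILLOGS', 'maillog', 'r');  -- read-only mode
  BEGIN
    LOOP
      UTL_FILE.GET_LINE(f, ln);
      -- process ln here
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN NULL;  -- end of file reached
  END;
  UTL_FILE.FCLOSE(f);
END;
/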
My SQL query has three tables in the FROM clause, so it has two join conditions and one WHERE condition.
account_no is of NUMBER data type and v_account_no is of VARCHAR2 data type.
The WHERE clause is:

where account_no = to_number(v_account_no)

With this condition, the query has a cost of 392. We then modified the WHERE clause to:

where v_account_no = to_char(account_no)

With this condition, the query has a cost of 11.
What is the impact of this data type conversion, and what is the difference between to_number() and to_char(), performance-wise, in reducing the cost of the query?
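The cost difference usually comes down to which side of the comparison the conversion lands on, since wrapping a column in a function disables any normal index on that column. A sketch of the two shapes, assuming indexes exist on both columns:

-- conversion on the other side leaves account_no bare,
-- so an index on account_no can still be used
WHERE a.account_no = TO_NUMBER(b.v_account_no)

-- conversion on account_no disables its index, but leaves
-- v_account_no bare, so an index there can be used instead
WHERE b.v_account_no = TO_CHAR(a.account_no)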
I have always been of the mind that we should use the minimum length for a VARCHAR2 field that can store the data we need to manipulate. But recently I was told that it has little impact on performance if we assign a much longer size.
I am looking at a performance issue at the moment and trying to replicate it on a test system. I am initially looking at the impact of up-to-date statistics on the main schema's objects.
For this I wanted to:
1) First run the batch with whatever stats were present in the database.
2) Flashback the DB to before the batch.
3) Gather stats.
4) Re-run the batch with updated stats and compare results.
However, I inadvertently ran the stats job before running the load the first time! I have the SCN from when the environment was set up like production (i.e. before the stats were run), so am I correct in saying that if I flashback to this point, the stats will be "old" and I can just run the batch then? I know I can verify this after I flashback the database by looking at LAST_ANALYZED on the tables, etc., but it would be good to know beforehand, as it's a 12-hour batch.
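A minimal sketch of the flashback itself, assuming FLASHBACK DATABASE is enabled and 1234567 stands in for the saved SCN, with a follow-up check that the stats really are old (the schema name is a placeholder):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO SCN 1234567;
ALTER DATABASE OPEN RESETLOGS;

SELECT table_name, last_analyzed
  FROM dba_tables
 WHERE owner = 'APP_OWNER';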
I am using Oracle 9i and Unix on my system and am trying to execute a UNIX shell command through an external procedure in C. I created a shared lib (libextproc.so) for the following function.
#include <stdlib.h>  /* needed for system() */

int sysrun(char *command)
{
    return system(command);
}
This function runs fine when called through a driver function in C, meaning that the shared lib is fine. In PL/SQL, I have used the following method to invoke a UNIX command:

create or replace library shell_lib as
  '/home/ECETRAonsite/oracle/OraHome1/lib/libextproc.so';
/
create or replace function sysrun (syscomm in varchar2)
  return binary_integer
as language C
  name "sysrun"
  library shell_lib
  parameters (syscomm string);
/
Now when I call this PL/SQL function to invoke the command, it runs successfully but does not create the file.
PL/SQL procedure successfully completed.

I have verified that the path for 'touch' is correct. Following are my configuration files.

listener.ora
------------
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
I am using prebuilt MVs to replicate about 300-400 master tables from one database to another. I am wondering about the impact on triggers in replication generally.
Is there a general rule to enable/disable a trigger before a refresh?
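A common pattern (a hedged sketch; the table and MV names are placeholders) is to disable the MV-site triggers around the refresh so they do not fire for every replicated row, then re-enable them afterwards:

ALTER TABLE orders_mv DISABLE ALL TRIGGERS;
BEGIN
  DBMS_MVIEW.REFRESH('ORDERS_MV', method => 'F');  -- 'F' = fast refresh
END;
/
ALTER TABLE orders_mv ENABLE ALL TRIGGERS;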