I have a production DB that has the Oracle Tuning Pack option turned on. I occasionally need to clone the DB to a test server where I do not use the Tuning Pack. If I select from DBA_FEATURE_USAGE_STATISTICS, it tells me that the pack is in use for the prod DB's DBID but not for the cloned DB's DBID. Do I need to worry about this old data being left around in my database? If so, is there any way to remove it? I am running 10.2.0.4.
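For reference, a minimal query against the 10.2 data dictionary to see which DBID each usage row belongs to:

    SELECT dbid, name, version, detected_usages, currently_used
    FROM   dba_feature_usage_statistics
    WHERE  name LIKE '%Tuning%'
    ORDER  BY dbid, name;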
I am currently in the process of trying to upgrade a 10.2.0.4 database to 11.2.0.2. I am using DBUA to complete this work. I have a small issue which I can't work out and can't find anything about on Google or in the Oracle docs.
The home holding the 11.2.0.2 software is an ORACLE_HOME that was recently cloned from another 11g home. When I'm on the DBUA summary page, before I kick off the upgrade, it lists the "target" Oracle home as the old 11g home and not the new cloned one.
Obviously DBUA is picking up the target ORACLE_HOME from a file in the cloned home that has not been updated with the new cloned home's info. I have made sure all environment variables are set correctly, including PATH.
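For what it's worth, a quick sanity check before launching DBUA (assuming a Unix-like host; paths illustrative):

    echo $ORACLE_HOME    # should print the cloned 11.2.0.2 home
    cat /etc/oratab      # the SID's entry should point at the cloned home
    which dbua           # should resolve under the cloned home's bin directory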
I wanted to understand the concept of licensing in Oracle 11g. We have one licensed copy of Oracle 11g, which is used for production. I want to put up a test instance of Oracle DB as well, but am told that we have only one license. Could having an unlicensed copy affect our audit process, and are there any other consequences?
This question is around the minefield of Oracle Licensing.
For our non-production systems, we have a set of named-user licenses. We have an application that stores embedded links to a range of Discoverer Viewer reports; these reports are accessed via a public connection (so a single user is held in the OAS Discoverer settings).
My question is: does this count as one named user (the single "proxy" user held in the OAS mid-tier), or does the fact that many different application users could be accessing the reports (albeit through one OAS user) take up many named-user licenses?
I have the task of migrating entire databases (an exact copy to be moved to another server); the current server is due to be reformatted. After I did the following steps, the sizes of the four application tablespaces match, but I am facing an issue where some default tablespaces, i.e. TEMP and SYSTEM, do not match:
Tablespace   Current server     Migrating server
----------   --------------     ----------------
TEMP         4.0 (approx.)      160 MB
SYSTEM       580 MB             220 MB
I also checked that the tables match across the 4 databases. Please advise on the correct solution or method.
Steps done for migrating (by me):

EXPORTING DATA USING DATA PUMP

1. From the command prompt:
   mkdir c:\oraclexe\app\tmp
2. From the SQL prompt:
   conn system/kotak
3. create or replace directory dmpdir as 'c:\oraclexe\app\tmp';
4. grant read, write on directory dmpdir to kotak;
5. From the command prompt:
   expdp system/kotak@xe full=Y directory=dmpdir dumpfile=xe.dmp logfile=expdpxe.log

IMPORTING DATA USING DATA PUMP (on the other server machine)

1. From the SQL prompt:
   conn system/kotak
2. create or replace directory dmpdir as 'c:\oraclexe\app\tmp';
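The import command itself is missing from the steps above; a minimal sketch mirroring the export parameters:

    impdp system/kotak@xe full=Y directory=dmpdir dumpfile=xe.dmp logfile=impdpxe.log

For what it's worth, differing SYSTEM and TEMP sizes are expected with export/import: the dump recreates user data, while SYSTEM and TEMP belong to the pre-created target database and are sized independently of the source.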
I am working on an assignment where the client is using Oracle 10g but is stuck on the rule-based optimizer (RBO). The application team builds dynamic queries from the GUI available to them, and some of these run very slowly.
The code cannot be changed to tune the queries, nor do we get the exact costs for each step in the plan, which is an issue (being RBO). For some long-running queries the Tuning Advisor produces no recommendations.
Another hurdle is that all the application users share the same application user ID, so we cannot write a logon trigger to switch particular sessions to the CBO to see what is happening in the background!
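For what it's worth, the optimizer switch itself is just a session-level parameter, still accepted in 10g, so it can at least be tested from a manually opened session; a sketch:

    ALTER SESSION SET optimizer_mode = ALL_ROWS;  -- use the CBO for this session
    ALTER SESSION SET optimizer_mode = RULE;      -- revert to RBO for comparison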
I had one of my RAC nodes go down due to a disk failure. I have a 3-node cluster running 10.2.0.4 on Dell 610s running Windows Server 2008. I have been running AWR reports this afternoon and am seeing CPU time as my top timed event. Here is the excerpt from the report I am looking at:
Cache Sizes
~~~~~~~~~~~
                    Begin      End
Buffer Cache:      20,032M    20,032M    Std Block Size:  8K
Shared Pool Size:  12,688M    12,688M    Log Buffer:      6,336K
[code]...
I wanted to ask if there is anything I could be doing to alleviate the workload on the 2 remaining nodes right now. As far as I understand it, there is no way to stop users from hitting the database, and without my 3rd node to load-balance, the CPUs will stay pegged until the end of the day as the users log off.
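If the application connects through database services rather than directly to instances, one lever is to make sure the failed node's services are running on the survivors; a hedged sketch using 10g srvctl syntax (database, service, and instance names invented):

    srvctl status service -d proddb
    srvctl relocate service -d proddb -s batch_svc -i proddb3 -t proddb1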
I am looking for the steps to export/import a database, with options, using the exp/imp commands. I want to access the exported tables from my current user schema. I exported a database earlier, but I'm not able to access its tables directly; instead I have to qualify them with the exported schema, like abc.tablename.
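If the dump was made with the classic exp tool, the owner can be remapped at import time; a hedged sketch (user names and file name invented):

    imp system/password file=exp.dmp fromuser=abc touser=myuser

(The Data Pump equivalent is impdp's REMAP_SCHEMA=abc:myuser.)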
We have a table with a huge amount of data which is skewed on a 'status' column. The 'status' column has 6 distinct values, with one particular value occupying 80-85% of the records.
In the batch process we query the data on the status and process the retrieved records. My senior is insisting on partitioning, which I see as not very feasible considering the cost implications for just one part of the functionality.
There are 6 statuses, 'A','B','C','D','E','F', with 'A' occupying 80% of the records and 'B' through 'F' together occupying from 2% up to 14% of the records (approx.). The options I am weighing:
1) Create a conditional index on status (using CASE) to hold the records with every status except 'A' (a sketch follows below), then create an IF-ELSE structure:
IF the input parameter is 'A' THEN
  SELECT /*+ FULL(t) PARALLEL(t) */ * FROM t WHERE status = 'A';
ELSE
  SELECT /*+ INDEX(t conditional_index) */ * FROM t WHERE status IN ('B','C');
END IF;
I want to create the conditional index here for 2 reasons:
1] Since it will hold values for every status except 'A', this nullifies the chance of the index being picked when status='A' is queried, which would make performance worse (status='A' covers 80% of the records); the IF-ELSE is additional protection.
2] Less impact on DML, as the index will not cover status='A', which contributes the large chunk of records.
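A minimal sketch of such a conditional index, assuming the table is t: the CASE expression maps status 'A' to NULL, and fully-NULL keys are not stored in a B-tree index, so the 80% chunk stays out of it. Note the optimizer only considers a function-based index when the query repeats the indexed expression:

    CREATE INDEX conditional_index ON t (
      CASE WHEN status <> 'A' THEN status END
    );

    -- Query form that matches the indexed expression:
    SELECT * FROM t
    WHERE CASE WHEN status <> 'A' THEN status END IN ('B','C');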
2) Populate a dummy table containing rowid and status. Since the business closes at 21:00 and the batch process starts at 21:30, refresh the dummy table between those times every day using MERGE (to catch the business transactions made during the day).
Then, during the batch process, retrieve records from the main table using the rowids in the dummy table, depending on the input status value.
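A hedged sketch of that daily refresh (dummy table name invented; deletes from the main table are ignored for brevity):

    MERGE INTO status_rowids d
    USING (SELECT rowid AS rid, status FROM t) s
    ON (d.rid = s.rid)
    WHEN MATCHED THEN UPDATE SET d.status = s.status
    WHEN NOT MATCHED THEN INSERT (rid, status) VALUES (s.rid, s.status);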
3) Create a plain index on status, make sure hard-coded status literals are used in the database procedures, gather stats with histograms, and leave it to the optimizer to choose the best possible path.
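For option 3, the stats collection with a histogram on the skewed column would look something like this (assuming the table is T):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => USER,
        tabname    => 'T',
        method_opt => 'FOR COLUMNS STATUS SIZE 254',  -- histogram on the skewed column
        cascade    => TRUE);
    END;
    /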
The scale of the tests that generate the following scenario is not huge right now: only 50 users are simulated (you can think of them as independently running threads if you like). But here is the crunch: the queries generated (from a generic transaction layer) all run against a table that has 600 columns! We can't really control this right now, but it is causing massive amounts of I/O (5 GB per request), making requests queue for disk availability (the disks are set up as RAID 0/1); it's noticeable for as few as 3 threads.
I managed on one occasion to get the SQL to execute in 13 seconds for a single user, but this appears to have been short-lived: when stats were freshly gathered it went back up to the normal 90-120 seconds. I've added the original query to the file; however, the findings here, along with our DBA (whom I trust implicitly), suggest that no amount of editing the query will improve the response times, that increasing the PGA/SGA (currently 4 GB / 6 GB respectively) will only delay the queuing a little, and that compression won't work either. In short, it looks as though we've already hit hardware restrictions for this particular scenario.
As I can't really explain why my rendered query no longer takes 13 seconds, it's niggling me that we might be missing a trick. So I was hoping for some guidance on possible ways of optimising these types of queries against such wide tables; in other words, possibilities that we haven't considered...
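One generic lever, offered only as a possibility since it sidesteps rather than tunes the query: keep a narrow, periodically refreshed projection of the hot columns, so per-request I/O is proportional to the columns actually read instead of the full 600-column row. A sketch with invented object and column names:

    CREATE MATERIALIZED VIEW txn_hot
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT txn_id, txn_date, status, amount
    FROM   wide_txn_table;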
Creating this thread so we can share experience on using the "Code Review" options that come along with the Toad formatter.
Personally I have used it only for formatting code, not to enforce SQL standards. Any input on this, and on whether it's advisable to use it as a corporate standard?
We have a table with 4 CLOB fields in it. Loading a 4 GB text file into the table takes around 2 hours; the file's volume is 40 million rows. We are using direct=Y in SQL*Loader, but because of the CLOB fields we didn't get any performance improvement from it.
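For reference, a hedged sketch of the direct-path control file (file, table, field names and delimiter all invented); UNRECOVERABLE skips redo generation for the direct load, which is usually the next knob to try when inline CLOBs slow things down:

    UNRECOVERABLE
    LOAD DATA
    INFILE 'big_file.dat'
    APPEND INTO TABLE clob_table
    FIELDS TERMINATED BY '|'
    (id,
     c1 CHAR(1000000),   -- wide CHAR lengths so the CLOBs load inline
     c2 CHAR(1000000),
     c3 CHAR(1000000),
     c4 CHAR(1000000))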
I have a master-detail transaction form. I want to create three check boxes on the header block; based on the selection, sorting must happen on the detail block, which has three fields: item_code, item_name and item_qty. If the user selects the first check box, sorting should be on item_code; the second, item_name; the third, item_qty; and if two or more are chosen, sorting should follow that combined order.
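A hedged sketch of the trigger logic in Forms (block, item and checkbox names invented; the combined ordering below uses a fixed precedence rather than click order):

    -- WHEN-CHECKBOX-CHANGED trigger on each header checkbox
    DECLARE
      v_order VARCHAR2(100);
    BEGIN
      IF :header.cb_code = 'Y' THEN v_order := v_order || 'item_code,'; END IF;
      IF :header.cb_name = 'Y' THEN v_order := v_order || 'item_name,'; END IF;
      IF :header.cb_qty  = 'Y' THEN v_order := v_order || 'item_qty,';  END IF;
      SET_BLOCK_PROPERTY('detail', ORDER_BY, RTRIM(v_order, ','));
      GO_BLOCK('detail');
      EXECUTE_QUERY;
    END;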
I'm trying to work out how to use a checkbox item and get the checked options' values via the application global arrays. This may be quite a simple question, but I'm completely stuck here...
Looking through various threads and guides, I've encountered checkbox array names like "g_f01" to "g_f50", and so far I've seen that these names are derived from the item name in the generated HTML code, for example:
<input type="checkbox" name="f10" value="3" />
This one corresponds to the array name "g_f10". However, when I try to do the same thing, I get an item name which looks like "p_v04", and therefore I can't figure out which array name I should use to address it properly.
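If it helps: only elements whose HTML name is f01 through f50 land in the apex_application.g_f01 .. g_f50 arrays, and those names normally come from generating the checkbox with apex_item.checkbox; a regular page item renders with a p_... name and never reaches those arrays. A sketch (report query and process body invented):

    -- Report column that renders a checkbox into slot f01:
    SELECT apex_item.checkbox(1, empno) AS pick, ename FROM emp;

    -- After-submit process: g_f01 holds the values of the boxes that were checked
    BEGIN
      FOR i IN 1 .. apex_application.g_f01.COUNT LOOP
        htp.p('checked: ' || apex_application.g_f01(i));
      END LOOP;
    END;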
In my company we have a few 10.2.0.2 databases and a few 10.2.0.4 databases. Currently we monitor them using OEM 10g (10.1.0.5.0) installed on a separate server.
We are planning to install OEM on a new server as part of a server-room renovation. What is the process for installing OEM on a different server without losing the historical data? I am also wondering whether OEM 11g will support the current databases.
We are getting a consultant to upgrade an Oracle 9i installation to 11g R2. The current installation has 6 different databases on the same server. Each database belongs to a different customer, so for reasons of security we have requested that this be split into 6 virtual machines with one database per machine.
The consultant suggested that they could install the 11g database once and then just make copies (which would all have the same instance name). We are told that the TNS names can be configured so clients are directed to the right database.
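That part is workable at the TNS level; each alias can point at a different VM even when every instance keeps the same SID. A tnsnames.ora sketch with invented host and alias names:

    CUSTA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = vm-custa)(PORT = 1521))
        (CONNECT_DATA = (SID = ORCL)))

    CUSTB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = vm-custb)(PORT = 1521))
        (CONNECT_DATA = (SID = ORCL)))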
I have cloned one database into another using the DBMS_METADATA API (exporting the metadata in XML form and recreating it on the destination). I need to keep the two in sync periodically, which means updating the XML to synchronize the databases.
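For reference, the extraction side of that API (object and schema names invented):

    -- Fetch one table's definition as XML
    SELECT DBMS_METADATA.GET_XML('TABLE', 'EMP', 'SCOTT') FROM dual;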
As part of our project, we need to perform table comparisons between two different databases, and I am currently looking at various options to accomplish this.
One of them is doing a MINUS operation between the two tables; I have also looked at the data-compare option in the Toad utility.
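A minimal sketch of the MINUS approach, run in both directions over a database link (table and link names invented); an empty result means the tables match:

    SELECT * FROM my_table
    MINUS
    SELECT * FROM my_table@remote_db
    UNION ALL
    (SELECT * FROM my_table@remote_db
     MINUS
     SELECT * FROM my_table);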
I also have a query that queries a SQL Server database:
SELECT   Agent,
         SUM([acd calls which have rung the agent]) AS CallsRung
FROM     Dashboard_stats
WHERE    Date = DATEADD(DAY, DATEDIFF(DAY, '20000101', GETDATE()) - 1, '20000101')
GROUP BY Agent
I have 2 databases (A and B) which share a common ORACLE_HOME.
I configured 2 listeners through netca on different ports,
and the two listeners have different names.
But when I stop LISTENER, both listeners stop, and when I start it, both come up and running again; I can see database B's services registered with A's listener, and vice versa. And when I tnsping, I see port 1521, the default port, for database A.
When my bash profile runs, it sets the ORACLE_HOME path; should I also set ORACLE_SID there? If so, which SID should I add, A or B?
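Note that a bare lsnrctl stop/start acts on the listener literally named LISTENER; with two named listeners you pass the name explicitly, and ORACLE_SID is best set per session rather than in the shared profile. A sketch (listener names assumed from the netca setup):

    lsnrctl stop LISTENER_A      # stops only the listener named LISTENER_A
    lsnrctl status LISTENER_B    # LISTENER_B is unaffected

    export ORACLE_SID=A          # set per session, before connecting to database A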
I've just found out that 12cR1 will (in all likelihood) not allow Flashback Database for pluggable DBs. Am I the only one disappointed by that? I use Flashback Database (plus replay) a lot to revert and replay automated tests, and I had plans to consolidate tens of test environments into PDBs.
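For context, the revert-and-replay cycle in question (restore point name invented):

    CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;
    -- ... run the automated tests ...
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    FLASHBACK DATABASE TO RESTORE POINT before_test;
    ALTER DATABASE OPEN RESETLOGS;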