Performance Comparison Between Oracle Sun Sparc 10 And IBM P7
Jul 5, 2012
I would like details on a performance comparison between the Oracle Sun SPARC 10 and an IBM P7 server, both running Oracle 11gR2 for an OLTP system.
I know that licensing-wise Sun will be much cheaper, but I want to know about other aspects such as performance, storage, and transactions per second. In a nutshell, which server should we buy for our OLTP system?
Oracle Server 11g on HP-UX, Oracle Client on Windows
I am using the Swingbench tool to generate load on the database, and with an OLTP-like benchmark I am comparing the performance of plain data against TDE-encrypted data.
I created two different databases, one with TDE and the other plain, and populated the same number of rows in both. I then run the benchmark and use sar to collect disk I/O and CPU usage.
From the sar report it appears that:
The plain database has faster transactions and uses less CPU. But when I look at the reads/writes, TDE shows lower figures than the plain database.
If TDE has to encrypt the data before storing it on disk, it should occupy more space than the plain data, so the I/O should be higher with TDE.
Note: the DB parameters are the same, the number of rows in the tables is the same, and the file system and its block size are the same. I run Swingbench separately for each of the two databases.
I am attaching the Excel sheet with the sar results. Let me know if you need more information.
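To cross-check the sar figures from inside the database, a query like the one below could be run in each database before and after a benchmark run and the deltas compared (just a sketch; the exact statistic names can differ slightly between versions):

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('physical reads', 'physical writes',
                'physical read total bytes', 'physical write total bytes');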
I am trying to export selective data from one of my production database tables, but without success. I have been trying for the past two hours.
OS: Solaris SPARC, Oracle 10g. Query: WHERE E3RECV_DT LIKE '201305%' (I need to export the data matching this condition.)
Below is the script I am using:
exp E3USER@SGEBAPU2 statistics=none consistent=n buffer=100000000 file=exp_pipe_file TABLES=IFDATA query="WHERE E3RECV_DT LIKE '201305\%'" log=PGTB_IFDATA_conditional.log
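One thing worth trying (an untested sketch; exp_ifdata.par is just a made-up file name) is to move the parameters into a parameter file, so the shell never sees the quotes or the percent sign and no backslash escaping is needed.

Contents of exp_ifdata.par:
userid=E3USER@SGEBAPU2
tables=IFDATA
query="WHERE E3RECV_DT LIKE '201305%'"
statistics=none
consistent=n
buffer=100000000
file=exp_pipe_file
log=PGTB_IFDATA_conditional.log

Then run: exp parfile=exp_ifdata.par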
Comparison of Oracle and Toad products. I know we have Oracle Enterprise Manager for application and enterprise management. I am listing the tools from Toad below.
Can you list the tools from Oracle that have or perform the same functionality?
Toad
Toad Development Suite for Oracle
Toad DBA Suite for Oracle
Toad for SQL Server Professional Edition
Toad for SQL Server Xpert Edition
Toad Data Point Pro Edition
Toad Data Point Pro Edition - with analytics
After querying the view v$db_transportable_platform, I cannot see any information about Solaris SPARC, only about Solaris x86. Can I migrate the database to Solaris SPARC? What do I need to do to perform an RMAN CONVERT?
I need to migrate a 10gR2 Enterprise Edition database from Solaris 5.9 on SPARC to SunOS 5.10 x86_64, and then upgrade the database to 11gR2 on the target platform. Given the output below, I would like to know whether I have any option besides imp/exp.
SQL> /
PLATFORM_ID PLATFORM_NAME                                      ENDIAN_FORMAT
----------- -------------------------------------------------- --------------
          1 Solaris[tm] OE (32-bit)                            Big
          2 Solaris[tm] OE (64-bit)                            Big
          6 AIX-Based Systems (64-bit)                         Big
          3 HP-UX (64-bit)                                     Big
          4 HP-UX IA (64-bit)                                  Big
          9 IBM zSeries Based Linux                            Big
         16 Apple Mac OS                                       Big
         18 IBM Power Based Linux                              Big
8 rows selected.
SQL> !uname -a
SunOS moloto 5.9 Generic_122300-17 sun4u sparc SUNW,Sun-Fire
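Since SPARC is big-endian and Solaris x86_64 is little-endian, a whole-database RMAN CONVERT DATABASE is not an option (it only works between platforms of the same endianness), which would explain why only big-endian platforms show up in the output above. Besides exp/imp (or Data Pump), the usual alternative on 10gR2 is cross-platform transportable tablespaces, converting the datafiles with RMAN. A rough sketch on the source side, with made-up tablespace and path names:

SQL> ALTER TABLESPACE users READ ONLY;

RMAN> CONVERT TABLESPACE users
        TO PLATFORM 'Solaris Operating System (x86-64)'
        FORMAT '/stage/%U';

The converted datafiles plus a transportable-tablespace metadata export are then copied to the x86_64 host and plugged in there.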
Currently I am working on a conversion project. We want to verify whether the database values updated by both applications are the same.
If you run the same transaction in both applications, values are updated in both databases. I want to compare the corresponding tables and check whether the same values were written or whether there are mismatches. Is there any tool available to compare row values between the two tables?
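If a database link between the two databases is available, a symmetric MINUS gives a quick first answer without any extra tool (a sketch; my_table and app2_db are placeholder names):

-- rows present in application 1 but not (or different) in application 2
SELECT * FROM my_table
MINUS
SELECT * FROM my_table@app2_db;

-- and the reverse direction
SELECT * FROM my_table@app2_db
MINUS
SELECT * FROM my_table;

Any rows returned by either query are mismatches between the two databases.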
Select to_char(to_date('10-02-2006 10:30:00 AM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH:MI:SS AM') as a1,
       to_char(to_date('10-02-2006 01:30:00 PM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH:MI:SS AM') as a2,
       Case when to_char(to_date('10-02-2006 10:30:00 AM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH:MI:SS AM') >
[code]...
From the above query I was expecting the value 2, but it returns 1. Since I am using TO_CHAR, it is comparing characters. Is there a way to compare just the time portions with less-than / greater-than?
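The comparison goes wrong because with 'HH:MI:SS AM' the strings are compared alphabetically, and '10:30:00 AM' sorts after '01:30:00 PM'. Converting the time portion to a 24-hour string (format 'HH24MISS', or 'SSSSS' for seconds past midnight) keeps string order and time order the same; a sketch using the literals above:

SELECT CASE
         WHEN TO_CHAR(TO_DATE('10-02-2006 10:30:00 AM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH24MISS')
            > TO_CHAR(TO_DATE('10-02-2006 01:30:00 PM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH24MISS')
         THEN 1
         ELSE 2
       END AS result
  FROM dual;

Here '103000' is not greater than '133000', so the query returns 2 as expected.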
We have a SQL statement in which some pieces are implemented as Oracle functions, and we run this SQL only once every day. I read an article saying that after the first run the function is located in the cache. Does this part of the cache (with the function) really keep consuming Oracle resources, or is it erased after a while?
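As far as I know, the parsed function does stay in the shared pool after the first execution, but it is subject to normal LRU aging, so an object called only once a day typically just gets reloaded when needed rather than holding memory permanently. Its current footprint can be checked with something like this (MY_FUNCTION is a placeholder name):

SELECT owner, name, type, sharable_mem, loads, kept
  FROM v$db_object_cache
 WHERE name = 'MY_FUNCTION';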
I need to populate a table based on the results of comparing sets of data. I decided to do this using MULTISET EXCEPT, but having created the structure, I do not know whether it is actually possible and, if so, what syntax to use.
I have created:
CREATE OR REPLACE TYPE NUMBER_TBL IS TABLE OF NUMBER;
/

CREATE OR REPLACE TYPE PACKAGE_OPTION_GROUP_OBJ AS OBJECT(
  ID       NUMBER,
  benefits NUMBER_TBL
)
/

CREATE OR REPLACE TYPE PACKAGE_OPTION_GROUP_TBL AS TABLE OF PACKAGE_OPTION_GROUP_OBJ
/
I am expecting the result to be something like:
***comparisonTable***
GROUP_ID : 123
BENEFITS : 165
BENEFITS : 167
----------------------------
However, I don't know if this is possible and if so, what the syntax would be.
At a later stage, I will need to compare the benefits between selectedTable and groupTable where the GroupIDs match, which is why I have structured the tables in this way.
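For reference, MULTISET EXCEPT does operate directly on two nested-table variables of the same type, so the benefits comparison itself is possible; a minimal PL/SQL sketch with made-up values:

DECLARE
  selected_benefits NUMBER_TBL := NUMBER_TBL(165, 167, 200);
  group_benefits    NUMBER_TBL := NUMBER_TBL(200);
  missing_benefits  NUMBER_TBL;
BEGIN
  -- elements in selected_benefits that are not in group_benefits
  missing_benefits := selected_benefits MULTISET EXCEPT group_benefits;
  FOR i IN 1 .. missing_benefits.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('BENEFITS : ' || missing_benefits(i));
  END LOOP;
END;
/

With those values it would print 165 and 167, matching the expected comparisonTable content.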
I ran SPA for a SQL workload of around 94 SQLs. In the comparison report generated by SPA there is a section, "Top 94 SQL Sorted by Absolute Value of Change Impact on the Workload".
In this section there is a column "Impact on Workload", which holds a percentage value. How is this calculated by SPA? What formula does SPA use to compute "Impact on Workload"?
I am trying to compare ranges using a low pair and a high pair: if the value is within the range, source_conn_id should remain the same; otherwise it should be updated to NULL, which I have written in the ELSE block. How can I implement the IF block, and what should I write in it so that source_conn_id remains the same?
SQL> CREATE OR REPLACE PROCEDURE fp_complements_src(p_id varchar2, ftr_con_id varchar2)
  2  AS
  3  BEGIN
  4  FOR i IN (SELECT SOURCE_CONN_ID, LOW_PAIR, HIGH_PAIR FROM COMP_TEMP1 WHERE SOURCE_CONN_ID = ftr_con_id)
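A sketch of one way the loop body could look, assuming p_id carries the value that has to fall between low_pair and high_pair (that assumption may not match the real requirement):

CREATE OR REPLACE PROCEDURE fp_complements_src(p_id VARCHAR2, ftr_con_id VARCHAR2)
AS
BEGIN
  FOR i IN (SELECT source_conn_id, low_pair, high_pair
              FROM comp_temp1
             WHERE source_conn_id = ftr_con_id)
  LOOP
    IF TO_NUMBER(p_id) BETWEEN i.low_pair AND i.high_pair THEN
      NULL;  -- within the range: leave source_conn_id unchanged
    ELSE
      UPDATE comp_temp1
         SET source_conn_id = NULL
       WHERE source_conn_id = ftr_con_id
         AND low_pair  = i.low_pair
         AND high_pair = i.high_pair;
    END IF;
  END LOOP;
END fp_complements_src;
/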
My primary objective is to compare objects in schemas in two different databases, find the differences, execute DDLs in the database where objects are missing, and keep the schemas in the two databases in sync.
So I need to compare schemas across databases. Which tool would be the most user-friendly for comparing the database objects that exist in schemas in two different databases?
I'd like to see a list of pros and cons of Toad versus SQL Developer for comparing schemas, and how to perform the comparison. I have some experience with Toad but am not familiar with SQL Developer.
Below is my requirement:
Connect to Source
Connect to Target
Compare schemas with different object types
Find out differences
Generate DDLs for the missing objects or for the objects in the difference report
Run them in the missing instance (Source/Target)
Make sure both are in sync
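Whichever tool is chosen, a quick first pass can also be done in plain SQL over a database link (a sketch; target_db and APP_OWNER are placeholder names):

-- objects present in the source schema but missing on the target
SELECT object_type, object_name
  FROM dba_objects
 WHERE owner = 'APP_OWNER'
MINUS
SELECT object_type, object_name
  FROM dba_objects@target_db
 WHERE owner = 'APP_OWNER';

-- DDL for a missing object can then be generated with DBMS_METADATA
SELECT DBMS_METADATA.GET_DDL('TABLE', 'SOME_TABLE', 'APP_OWNER') FROM dual;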
Could it be that it's impossible to change the date format in the default_where clause?
The table column PROPOSAL_END in the database that I want to compare against is in the format DD.MM.YYYY.
I tried:
set_block_property('Tours', default_where, 'Number_of_places > 0 AND PROPOSAL_END <= ' || to_char(to_date(sysdate, 'DD.MM.YYYY')));
set_block_property('Tours', default_where, 'Number_of_places > 0 AND PROPOSAL_END <= ' || to_char([-- a date item with the initial value $$date$$; the output is in format DD.MM.YYYY by default --]));
set_block_property('Tours', default_where, 'Number_of_places > 0 AND PROPOSAL_END <= ' || to_char(to_date([-- a date item with the initial value $$date$$; the output is in format DD.MM.YYYY by default --], 'DD.MM.YYYY')));
It does not matter: every time, the generated SELECT statement shows the format DD-MMM-YY. How can I change that?
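What usually avoids the DD-MMM-YY literal (a sketch, not tested against this form; :control.proposal_date is a made-up item name) is to put an explicit TO_DATE with a format mask inside the generated WHERE clause, so the session's default date format no longer matters:

set_block_property('Tours', default_where,
  'Number_of_places > 0 AND PROPOSAL_END <= TO_DATE(''' ||
  to_char(:control.proposal_date, 'DD.MM.YYYY') ||
  ''', ''DD.MM.YYYY'')');

The generated clause then contains, for example, PROPOSAL_END <= TO_DATE('15.07.2013', 'DD.MM.YYYY').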
I have a requirement to compare development and production objects. If any association_type or association_role does not exist in production, I need to return a message like "the Type Object found in Development, but not Production".
Below is the tree structure
development
  ProcessingSite (Association type1)
    TreatingSite (role1)
    MoodedActivity (role2)
    MaterialName (role3)

production
  ProcessingSite (Association type1)
    TreatingSite (role1)
    MaterialName (role2)
ProcessingSite is an association_type and it has three association_roles. We can see the same association_type in production, but MoodedActivity (an association_role) is not available there. In this case we need to return "Type Object found in Development, but not Production".
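Assuming the types and roles can be queried from a table in both environments (the table, column and database link names below are made up), a MINUS over a database link returns exactly the roles that exist in development but not in production:

SELECT association_type, association_role
  FROM association_roles            -- development
MINUS
SELECT association_type, association_role
  FROM association_roles@prod_db;   -- production

Any row returned would trigger the message "Type Object found in Development, but not Production".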
We have migrated database data from physical servers to virtual (Vblock) servers. I want to ensure all database parameters are set correctly on both the physical and Vblock servers. My question is: which parameters do we need to check and compare on both servers to ensure the databases on the physical and Vblock servers are in sync?
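For the initialization parameters themselves, one simple cross-check (a sketch; physical_db is a made-up database link name) is:

-- parameters that differ or exist only on the virtual server
SELECT name, value FROM v$parameter
MINUS
SELECT name, value FROM v$parameter@physical_db;

-- and the reverse direction
SELECT name, value FROM v$parameter@physical_db
MINUS
SELECT name, value FROM v$parameter;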
I have written a program with dynamic SQL; the relevant piece of code follows.
sql_stmt := 'SELECT '||CBID(i)||',BID,'||CBEID(i)||',''NA'',''NA'',''NA'' FROM DIM_ORGNISATION WHERE BID in (select PARENT_B_ID from ORG_DIM_LOD where CHILD_B_ID ='||CBID(i)||') and to_Date(start_Date,''DD/MM/YYYY'') = TO_DATE(trunc('||Cstart_date_type(i)||'),''DD-MON-YY'',''NLS_DATE_LANGUAGE=ENGLISH'')';
EXECUTE IMMEDIATE sql_stmt BULK COLLECT INTO tempBID, tempSBD, tempLBD, tempL3BD, tempL4BD, tempSABD;

And when I execute the dynamic SQL, it gives the following error:
ORA-00904: "JAN": invalid identifier
ORA-06512: at "LWNER.SHY_CREATE_MAPING", line 184
ORA-06512: at line 2
When I display the value using DBMS_OUTPUT:
DBMS_OUTPUT.PUT_LINE('Cstart_date_type(i) = ' || Cstart_date_type(i));
it displays it as "01-JAN-70".
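The ORA-00904 on "JAN" suggests the date value '01-JAN-70' is being concatenated into the statement without surrounding quotes, so the parser treats JAN as an identifier. One way around it (a sketch, assuming Cstart_date_type(i) holds a string like '01-JAN-70') is to pass the values as bind variables instead of concatenating them:

sql_stmt := 'SELECT ' || CBID(i) || ', BID, ' || CBEID(i) || ', ''NA'', ''NA'', ''NA'''
         || ' FROM DIM_ORGNISATION'
         || ' WHERE BID IN (SELECT PARENT_B_ID FROM ORG_DIM_LOD WHERE CHILD_B_ID = :b_id)'
         || ' AND TO_DATE(start_date, ''DD/MM/YYYY'') ='
         || ' TO_DATE(:start_dt, ''DD-MON-YY'', ''NLS_DATE_LANGUAGE=ENGLISH'')';

EXECUTE IMMEDIATE sql_stmt
  BULK COLLECT INTO tempBID, tempSBD, tempLBD, tempL3BD, tempL4BD, tempSABD
  USING CBID(i), Cstart_date_type(i);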
I have two tables, T1 and T2. T1 is the original backup snapshot of the records changed by the overnight batch in a big table, and T2 holds the overnight batch's changed records. Both tables have a similar number of rows (T2 might have more because of newly inserted rows), and you can find the differences by comparing the two according to the action column in T2 (C - Update, A - Insert, D - Delete).
How do I compare these two tables to generate something like the output below? I can join the two tables to produce the differences, but that gives one row per account.
CLIENT_NBR BRANCH_CD ACCOUNT_CD ACTION COLUMN_NAME      OLD_VALUE NEW_VALUE
8888       123       45678      C      account_clsfn_cd 004       005
8888       123       45678      C      buy_cd           98        99
8888       012       34546      A      sell_cd                    12
8888       321       98765      D      dividend_cd      1
I am using Oracle 10g so Unpivot cannot be used.
CREATE TABLE T1 (
  CLIENT_NBR       CHAR(4 BYTE) NOT NULL,
  BRANCH_CD        CHAR(3 BYTE) NOT NULL,
  ACCOUNT_CD       CHAR(5 BYTE) NOT NULL,
  ACCOUNT_CLSFN_CD CHAR(3 BYTE),
  SELL_CD          CHAR(2 BYTE),
  BUY_CD           CHAR(2 BYTE),
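Without UNPIVOT, the usual 10g workaround is one UNION ALL branch per compared column; a sketch for two of the columns (it assumes client_nbr, branch_cd and account_cd form the join key):

SELECT t2.client_nbr, t2.branch_cd, t2.account_cd, t2.action,
       'ACCOUNT_CLSFN_CD' AS column_name,
       t1.account_clsfn_cd AS old_value,
       t2.account_clsfn_cd AS new_value
  FROM t2 LEFT OUTER JOIN t1
    ON  t1.client_nbr = t2.client_nbr
   AND  t1.branch_cd  = t2.branch_cd
   AND  t1.account_cd = t2.account_cd
 WHERE DECODE(t1.account_clsfn_cd, t2.account_clsfn_cd, 0, 1) = 1
UNION ALL
SELECT t2.client_nbr, t2.branch_cd, t2.account_cd, t2.action,
       'BUY_CD', t1.buy_cd, t2.buy_cd
  FROM t2 LEFT OUTER JOIN t1
    ON  t1.client_nbr = t2.client_nbr
   AND  t1.branch_cd  = t2.branch_cd
   AND  t1.account_cd = t2.account_cd
 WHERE DECODE(t1.buy_cd, t2.buy_cd, 0, 1) = 1;

The DECODE(old, new, 0, 1) = 1 predicate treats two NULLs as equal, so only genuine changes come back.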
I have written Java code that reads two million values of a particular column from a CSV file and stores them in a set. There is a table in an Oracle database that contains ten million records for that column. Now I want to form a SQL query that selects the values which appear in the CSV file but not in the database table. For example:
If the CSV file is named employee.csv and it has a column called employee_name, the records under it are as follows
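Whatever the actual values are, one way to avoid shipping two million literals from Java (a sketch; the directory object, file layout and table names are assumptions) is to expose the CSV to the database as an external table and let SQL do the anti-join:

CREATE TABLE employee_csv_ext (
  employee_name VARCHAR2(200)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir          -- directory object pointing at the CSV location
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (employee_name)
  )
  LOCATION ('employee.csv')
);

-- names present in the CSV file but not in the database table
SELECT employee_name FROM employee_csv_ext
MINUS
SELECT employee_name FROM employee_table;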
After upgrading from 10g to 11g, the SQL below no longer performs. The issue is with CONNECT BY: when it is used with a subquery, the statement hangs.
select item_code
  from bom_list_pos
 where ln_id in (select ln_id
                   from bom_list_nodes
                  start with ln_id in (select ln_id from bom_used_work_pack where rownum = 1)
                connect by prior ln_id = parent_ln_id)
On 10g I was able to get the result in less than a minute, but on 11g it hangs. Below is the explain plan.
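One workaround that is sometimes tried when a CONNECT BY driven by a subquery regresses after an upgrade is to materialize the inner subquery first, for example with a WITH clause and the MATERIALIZE hint, so it is evaluated once instead of being merged into the hierarchy. This is only a sketch; whether it helps depends on the actual plans:

WITH start_nodes AS (
  SELECT /*+ MATERIALIZE */ ln_id
    FROM bom_used_work_pack
   WHERE ROWNUM = 1
)
SELECT item_code
  FROM bom_list_pos
 WHERE ln_id IN (SELECT ln_id
                   FROM bom_list_nodes
                  START WITH ln_id IN (SELECT ln_id FROM start_nodes)
                CONNECT BY PRIOR ln_id = parent_ln_id);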
I have installed a database on one server and would like to enable AWR on it. STATISTICS_LEVEL is set to TYPICAL. While running the script below to enable it, I get an error:
SQL> exec dbms_scheduler.enable('GATHER_STATS_JOBS');
BEGIN dbms_scheduler.enable('GATHER_STATS_JOBS'); END;

*
ERROR at line 1:
ORA-27476: "SYS.GATHER_STATS_JOBS" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 4343
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2802
ORA-06512: at line 1
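The ORA-27476 most likely comes from the extra S in the job name: on 10g the job is GATHER_STATS_JOB, and on 11g automatic statistics gathering is an autotask rather than a scheduler job. A sketch of both variants (check which release this instance actually is); note also that this job gathers optimizer statistics, while AWR snapshots are controlled by STATISTICS_LEVEL and DBMS_WORKLOAD_REPOSITORY:

-- 10g: note the singular JOB
EXEC DBMS_SCHEDULER.ENABLE('SYS.GATHER_STATS_JOB');

-- 11g: automatic optimizer statistics collection is an autotask
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/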
We've been battling with very slow performance for some time. Here is a detailed description of the problem:
Solaris-11/ZFS/Oracle problem
We purchased Oracle T4-2 servers, and are experiencing some weird performance problems.
Hardware:
T4-2
2 x 600GB HDD per server
128GB memory per server
2 x dual-port QLE2562 FO HBAs
IBM V7000 StorWyse data array
2 x Cisco MDS 9148 fibre optic fabric switches

Software:
Solaris 11.1
MPxIO
Solaris 10 branded local zones
Oracle 10g Enterprise Edition

Project for the oracle user:
user.oracle:100::oracle::process.max-file-descriptor=(basic,8192,deny);process.max-stack-size=(priv,32768,deny);project.max-shm-memory=(priv,21474836480,deny)
We received the first server and wanted to migrate our APPB application system and Oracle 10g Standard Edition database from our SUN T5240 to the T4-2.
T4-2 setup
Disk 0:
Global zone: Solaris 11.1 (ZFS - whole disk for the root pool)
Local zones: on the Solaris 11 environment we built two branded Solaris 10 zones using an Oracle template provided on the Oracle website - solaris-10u11-sparc.bin.
Our complete database resides on the IBM data array, in UFS LUNs. The UFS LUNs were mounted onto ZFS mount points in the root partition (/), and then LOFS-mounted into the zones.
We started with the Solaris 11.1 environment.
1) After a day or two the performance of the database starts deteriorating rapidly. We then stop the database and reboot the machine. After the reboot the performance level is restored.
2) Another huge deterioration in performance happens when we unmount the V7000 LUNs, reboot into the alternate Solaris, and re-mount the LUNs.
3) What further compounds the issue is that when we start another database in the second zone, we see another huge performance degradation.
4) We have logged a call with Oracle. They requested us to gather information, which they analysed. They did not find anything wrong with the way Oracle was installed or with the setup of the instances.
On Disk 1 we did a Solaris 10 8/11 (Update 10) installation, which we patched with the April 2013 CPU patchset. In this Solaris 10 global zone we built two native Solaris 10 local zones. The Oracle 10g databases were built in the zones (same configuration settings), not in the global zone, on UFS LUNs. The database in its entirety lives on the IBM V7000 data array. This works fine.
We then received our next T4-2 server. We loaded it again with Solaris 11.1 and went to Oracle 12c SE Release 12.1.0.1.0 64-bit, seeing that Oracle 10g is not certified on Solaris 11. To keep things simple, we built two small databases in the ZFS root pool. The complete system now resides on one disk: no UFS LUNs to consider, no fibre optic fabric, no Cisco switches, no IBM data array. BUT we get the same problem. The system will run for some time and then slow down drastically. Starting the second database slows the system down abnormally.
I have a problem related to the performance and responsiveness of a customized application.
Suppose this application runs on two clients. Both clients log on to the same database and use the same mapped application folder, and the two PCs have the same configuration.
The first client runs Windows XP, the second runs Windows 7. I noticed that the second client (Windows 7) performs worse than the first (Windows XP), and the application on the second client is slower.
I know that Oracle 6i on Windows 7 is unsupported, and I know I should upgrade to 10g or change the OS to XP. But is this the only way to improve performance, or is there something else I can do (a patch, etc.) without changing anything?
I am running Oracle 10.2.0.3 on Solaris 5.9. The front-end application is PeopleSoft v8.8. From my AWR report I have found the SQL below, which needs to be tuned: