Oracle 11g - Exception While Importing Statistics?
Jul 14, 2010
I am using Oracle 11g R2. I want to import the DB statistics, but I am getting an exception when I execute the command DBMS_STATS.IMPORT_SCHEMA_STATS('user1', 'STATS_INFO', '', '', TRUE, FALSE).
The error is:
ORA-20000: no statistics are imported
ORA-06512: at "SYS.DBMS_STATS", line 10603
ORA-06512: at line 1
The privileges ANALYZE ANY and ANALYZE ANY DICTIONARY have already been granted to the user. I also executed the command as SYS, but the error still occurs.
The same command executes successfully in Oracle 10g. Is there any difference in importing statistics between Oracle 10g and 11g?
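If the STATS_INFO staging table was originally created under 10g and then moved to the 11g database, one thing worth trying is upgrading the statistics table to the current format before importing. A minimal sketch, assuming the staging table is user1.STATS_INFO as above:

-- Upgrade a statistics table created by an older release to the 11g format
exec dbms_stats.upgrade_stat_table(ownname => 'user1', stattab => 'STATS_INFO');

-- Then retry the import
exec dbms_stats.import_schema_stats('user1', 'STATS_INFO', '', '', TRUE, FALSE);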
I have a table which has 300+ columns and 13 million rows. It is on a 32 KB block size. This is a table in a data warehouse environment. The number of rows in the table hasn't changed much, but I see that the time taken to collect statistics has increased significantly. Initially it took only 15 minutes (with the same 13M rows); now it runs for 4+ hours. The max parallel servers is 4 (which is unchanged). The table is not partitioned.
OS: HP-UX Itanium. Database: Oracle 11g (11.2.0.2).
Command is: exec dbms_stats.gather_table_stats(ownname => 'ABC', tabname => 'ABC_LOAD', estimate_percent => dbms_stats.auto_sample_size, cascade => TRUE, degree => dbms_stats.auto_degree);
I would like to understand:
1) What could have caused this change in elapsed time, from 15 minutes to 4+ hours?
2) How can we gather statistics on a huge table at a faster rate?
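With 300+ columns, histogram gathering is often the dominant cost of GATHER_TABLE_STATS. One experiment worth timing (a sketch reusing the owner/table names above; SIZE 1 and the fixed degree are assumptions for the test, not a recommendation) is to gather basic column stats without histograms and compare:

begin
  dbms_stats.gather_table_stats(
    ownname          => 'ABC',
    tabname          => 'ABC_LOAD',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'FOR ALL COLUMNS SIZE 1',  -- no histograms, to isolate their cost
    degree           => 4,                          -- fixed, matching the max parallel servers
    cascade          => TRUE);
end;
/

If this version is dramatically faster, the extra time is going into histogram (and possibly index) collection rather than the base table scan.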
I want to get the stale stats for a table that resides in the APPS schema. Is there any table or view, like a DBA_STALE_STATS, that gives these details? Currently I am checking the LAST_ANALYZED column of DBA_TABLES.
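As far as I know there is no DBA_STALE_STATS, but DBA_TAB_STATISTICS exposes a STALE_STATS column (populated when table monitoring is enabled). A minimal query along those lines:

SELECT owner, table_name, last_analyzed, stale_stats
  FROM dba_tab_statistics
 WHERE owner = 'APPS'
   AND stale_stats = 'YES';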
APEX 4.2, Oracle 11g database. We are using the standard exception handler that was introduced in APEX 4.1, and we have code in packages and procedures in the database (following the practice of keeping code in the database where possible). When an exception is raised in the procedures/packages/functions, should the APEX application-level exception handler catch the errors, or should they be handled in the package/procedure/function where they occurred? The reason I ask: right now we have exception-handling code in the PL/SQL bodies, BUT they write their errors to the same table that APEX's exception handler does, and the errors are NOT presented to the user through the APEX exception-handling mechanism.
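One common pattern (a sketch; app_error_log and the forced error are placeholders, not APEX APIs) is to log locally in the package and then re-raise, so the exception still propagates to the APEX error-handling function and is presented to the user:

create or replace procedure do_work is
begin
  -- ... business logic ...
  raise_application_error(-20001, 'demo failure');  -- stand-in for a real error
exception
  when others then
    -- local logging into your own table (placeholder name)
    insert into app_error_log (logged_at, message) values (systimestamp, sqlerrm);
    raise;  -- re-raise so the APEX error handler also receives it
end do_work;
/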
I am currently using Oracle 9i on the Windows operating system. Now I want to move my entire Oracle database to Oracle 11g on Linux. What are the correct steps to do this? I am not a DBA.
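Since this crosses both a version and a platform, one common route is a full export from the 9i database followed by an import into a freshly created 11g database (a sketch; passwords, file names, and paths are placeholders, and the 11g import utility can read dump files produced by older exp releases):

REM On the 9i/Windows source, with the 9i export utility:
exp system/password FULL=y FILE=full9i.dmp LOG=exp.log

REM Copy full9i.dmp to the Linux server, create the 11g database
REM and the needed tablespaces, then import with the 11g utility:
imp system/password FULL=y FILE=full9i.dmp LOG=imp.log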
I have to implement exception handling in the exception block of a trigger:

exception
  when ora_java.java_error then
    message('Unable to call out to java, ' || ora_java.last_error);
    ora_java.clear_exception;
  when ora_java.exception_thrown then
    ex := ora_java.last_exception;
    message(Exception_.toString(ex));
    -- lv_exception := Exception_.getMessage(ex);
I get an error on the line message(Exception_.toString(ex));. I have imported the Java classes FException and IObject with their methods.
I have to create a web service client, so I wonder whether the block when ORA_JAVA.EXCEPTION_THROWN then ex := ORA_JAVA.LAST_EXCEPTION; is mandatory.
I have a big problem that came up lately: importing XML files into an Oracle database. I have extracted a whole PostgreSQL database into XML files - 236 tables, one XML file per table - and now I'm about to import them into Oracle tables. First of all, I would like to point out that I already have the structure of all the tables in the Oracle database; the files only carry the data (records) that need to be imported.
I've been trying to get this running for over a week without success. I will show you an example:
This was one of my attempts to import data from the file "ps_sprawozdania.xml" into the table "ps_sprawozdania" in Oracle. Here are two records from the XML file to show you its structure:
<NewDataSet>
  <Cust>
    <miesiac>7</miesiac>
    <umowa_rok>2008</umowa_rok>
    <umowa_nr>051/210412/01/000/08</umowa_nr>
    <nr_korekty>0</nr_korekty>
    <nazwa>Sprawozdanie z realizacji umowy nr 051/210412/01/000/08 za miesiąc Lipiec</nazwa>
    ...
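One way to load such a file into the existing table (a sketch; the directory object XML_DIR, the character set, and the column datatypes are assumptions based on the fragment above) is to parse it with XMLTABLE and insert directly:

INSERT INTO ps_sprawozdania (miesiac, umowa_rok, umowa_nr, nr_korekty, nazwa)
SELECT x.miesiac, x.umowa_rok, x.umowa_nr, x.nr_korekty, x.nazwa
FROM   XMLTABLE('/NewDataSet/Cust'
         PASSING XMLTYPE(BFILENAME('XML_DIR', 'ps_sprawozdania.xml'),
                         NLS_CHARSET_ID('AL32UTF8'))
         COLUMNS miesiac    NUMBER        PATH 'miesiac',
                 umowa_rok  NUMBER        PATH 'umowa_rok',
                 umowa_nr   VARCHAR2(30)  PATH 'umowa_nr',
                 nr_korekty NUMBER        PATH 'nr_korekty',
                 nazwa      VARCHAR2(200) PATH 'nazwa') x;

The same statement shape can be generated for each of the 236 tables once the element-to-column mapping is known.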
I have received a dump which I need to load into a newly created schema. There is a particular table with more than 4 million rows, while the other tables have hardly a few thousand rows.
I want to import it in such a way that only 1000 rows get imported for this table while the other tables are not affected. Is there a way to do it?
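If the dump is a Data Pump export (that is an assumption here; the original imp utility has no equivalent feature), the QUERY parameter can cap the rows for a single table while everything else loads in full. A sketch with placeholder schema, table, directory, and file names, using a parameter file to avoid shell quoting issues:

# imp_limit.par
DIRECTORY=dp_dir
DUMPFILE=export.dmp
SCHEMAS=new_schema
QUERY=new_schema.big_table:"WHERE ROWNUM <= 1000"

impdp system/password PARFILE=imp_limit.par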
While trying to import a schema using Data Pump, I am facing the following issue: UDI-00018 - Import utility version can not be more recent than the Data Pump server. Following is the version information of the source and target DBs and the utilities:
Source DB server: 10.1.0.2.0 / Export utility: 10.1.0.2.0 / Import utility: 10.1.0.2.0
Target DB server: 10.1.0.2.0 / Export utility: 10.2.0.1.0 / Import utility: 10.2.0.1.0
v_error_code := SUBSTR(v_error_code || ':' || SQLCODE, 1, 30);
FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
  v_error_desc := SUBSTR(SQLERRM(SQL%BULK_EXCEPTIONS(i).ERROR_CODE), 1, 400);
  dbms_output.put_line('The value of Error is ' || v_error_code || ' ' || v_error_desc);
END LOOP;
...
By executing the above I am getting:

SQL> @pkg_rwc_migration_bdy.sql
130  /
Package body created.
SQL> begin
  2    pkg_rwc_migration.pro_rwc_mig;
  3  end;
  4  /
The value of Error is :-1843 -1843: non-ORACLE exception
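The "-1843: non-ORACLE exception" output is the clue here: SQL%BULK_EXCEPTIONS(i).ERROR_CODE stores the error code as a positive number, while SQLERRM only maps negative codes to Oracle messages. Negating the code should recover the real message (here presumably ORA-01843: not a valid month):

-- ERROR_CODE is unsigned; pass the negated value to SQLERRM
v_error_desc := SUBSTR(SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE), 1, 400);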
I am writing an AFTER trigger for one of my tables, firing on every insert, update and delete, for my data warehouse staging area. The process here is: whenever there is a change in the production database, we need to capture this change in our staging area through triggers.
I am able to create the triggers, but I am stuck with the exception-handling portion. I want to write an exception handler in the trigger so that when the staging area is locked, or is unable to accept the data for any other reason, the data goes to an error table instead.
How can I write this exception in the trigger, or is there any other method I can follow to handle this scenario?
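A common shape for this (a sketch; the table names, the error table's columns, and the logging procedure are all assumptions) is to catch the failure in the trigger and divert the row to an error table via an autonomous transaction, so the logging commit does not interfere with the triggering statement:

create or replace procedure log_stage_error(p_key number, p_msg varchar2) is
  pragma autonomous_transaction;  -- commits independently of the main transaction
begin
  insert into stage_errors (pk_value, error_msg, error_time)
  values (p_key, p_msg, systimestamp);
  commit;
end;
/

create or replace trigger trg_prod_to_stage
after insert or update or delete on prod_table
for each row
begin
  insert into stage_table (pk_value, change_type, change_time)
  values (nvl(:new.id, :old.id),
          case when inserting then 'I' when updating then 'U' else 'D' end,
          systimestamp);
exception
  when others then
    -- staging unavailable (locked, out of space, ...): divert to the error table
    log_stage_error(nvl(:new.id, :old.id), sqlerrm);
end;
/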
I am trying to find a way to get the SQL statement that caused an exception within an Oracle function. I tried:

SELECT sql_text
  FROM v$session ses, v$sql sql
 WHERE sql.sql_id = ses.prev_sql_id
   AND ses.sid = sys_context('userenv', 'SID')
   AND ROWNUM = 1;

but this doesn't always return the last statement that the function executed. If needed, I can send the complete script for the function and its tables and stored procedures for testing.
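PREV_SQL_ID doesn't reliably reflect statements executed inside PL/SQL, only what the client session last submitted, which would explain the inconsistent results. If the underlying goal is to pinpoint which statement failed, one alternative sketch is to capture the error backtrace in the handler; it reports the program unit and line where the exception was raised:

create or replace function risky_work return number is
begin
  -- ... SQL statements ...
  return 1;
exception
  when others then
    -- identifies the failing line in the source, and the full error stack
    dbms_output.put_line(dbms_utility.format_error_backtrace);
    dbms_output.put_line(dbms_utility.format_error_stack);
    raise;
end;
/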
If a user-defined function in Oracle may throw and catch exceptions depending on input, does Oracle log the exception? Fundamentally, we are asking whether Oracle logs random DML errors (not bulk operations). We hope not, as we do not want our client's DBA to get concerned. There might be many levels of settings in Oracle that control logging. So if Oracle may log it depending on a setting, how do we suppress it, either at the schema level or at the function level?
This is my user defined function:
create or replace FUNCTION ISNUMERIC(char_in CHAR) RETURN NUMBER IS
  n NUMBER;
BEGIN
  n := TO_NUMBER(char_in);  -- raises an exception for non-numeric input
  RETURN 1;
EXCEPTION
  WHEN OTHERS THEN
    RETURN 0;  -- the exception is fully handled here and never propagates
END;
/
I've created an alert with two buttons, and the code for the cancel button is as follows:

...
if alert_id = alert_button2 then
  raise form_trigger_failure;
end if;
...

This code is in a PLL, and when called from a form it displays a "non-Oracle exception" message. How can I do nothing when the user presses Cancel on the alert pop-up, apart from using NULL instead of raising the above exception?
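One explanation sometimes given is that FORM_TRIGGER_FAILURE raised from attached library code, rather than directly in a trigger, surfaces as a generic unhandled exception. An alternative sketch (the alert name is a placeholder) is to have the PLL routine return the user's choice and let the calling trigger decide whether to stop:

-- In the PLL: report the choice instead of raising
FUNCTION confirm_action RETURN BOOLEAN IS
  alert_id NUMBER;
BEGIN
  alert_id := SHOW_ALERT('MY_ALERT');  -- 'MY_ALERT' is a placeholder
  RETURN alert_id <> ALERT_BUTTON2;    -- FALSE means the user cancelled
END;

-- In the form trigger:
IF NOT confirm_action THEN
  RAISE FORM_TRIGGER_FAILURE;  -- valid here, inside the trigger itself
END IF;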
We are running an SAP application against an Oracle database. If I use brspace or brtools (on the SAP side) to shut down or start up the database or to collect stats, does this mean it is not recommended to use the Oracle commands to shutdown/startup and collect stats?
load data
infile 'trlc.csv'
replace
into table trlc
fields terminated by '|'
trailing nullcols
(est_no, right_no, maj_auth, weight, idm_ht, c_date, p_tkt)
The rows get inserted successfully, but the result sets are different. For example, when I run select len(weight) from trlc; in SQL Server, I get the length as 0, but when I run the equivalent select in the Oracle database, I get the length as 1. Also, the result set varies for the query below:
select * from trlc where weight=' ';
(SQL Server returns 1 row but Oracle returns no rows)
Do I need to specify any conversion code for the weight field so that it accepts a ' ' value?
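By default, SQL*Loader trims trailing blanks from delimiter-terminated character fields, so blank-only data can land differently than it does in SQL Server. One thing to experiment with (a sketch of the same control file) is PRESERVE BLANKS:

load data
infile 'trlc.csv'
preserve blanks
replace
into table trlc
fields terminated by '|'
trailing nullcols
(est_no, right_no, maj_auth, weight, idm_ht, c_date, p_tkt)

Alternatively, a per-column SQL expression such as weight "nvl(:weight, ' ')" can force a single space where the field would otherwise load as NULL. Note also that Oracle treats an empty string as NULL while SQL Server's LEN() ignores trailing blanks, so the two systems will never report identical lengths for blank-only values.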
I want to take the statistics of my production database and import them into my local database, to reproduce the production statistics. I used statistics=compute to export the statistics. In the log file, for some tables, there was a warning: EXP-00091 exporting questionable statistics.
Will this dump be useful for reproducing the production statistics? Which option do I have to use while importing: statistics=recalculate or statistics=safe?
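An alternative that sidesteps the EXP-00091 question entirely (a sketch, with placeholder schema and staging-table names) is to carry the statistics with DBMS_STATS instead of the export utility:

-- On production: stage the schema stats in a regular table, then export that table
exec dbms_stats.create_stat_table(ownname => 'PRODUSER', stattab => 'MY_STATS');
exec dbms_stats.export_schema_stats(ownname => 'PRODUSER', stattab => 'MY_STATS');
-- ...move the MY_STATS table to the local database (exp/imp or a DB link)...

-- On the local database: load the staged stats into the dictionary
exec dbms_stats.import_schema_stats(ownname => 'PRODUSER', stattab => 'MY_STATS');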
I have large partitioned tables (4 partitions are added every month). Is it possible to do incremental statistics gathering on these objects (9i)? If I collect stats with GRANULARITY => 'ALL' and ESTIMATE_PERCENT => 100, the stats are accurate, but it takes a very long time.
One approach may be to collect stats with GRANULARITY => 'PARTITION' for each new partition (this is quite fast), but what about the global table stats?
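A sketch of that approach (owner, table, and partition names are placeholders); as far as I recall, true incremental global statistics only arrived in 11g, so on 9i the global stats still need their own, less frequent gather, which can use a sample instead of COMPUTE:

begin
  -- fast, per-partition gather after adding a monthly partition
  dbms_stats.gather_table_stats(
    ownname          => 'OWNER1',
    tabname          => 'BIG_TAB',
    partname         => 'P_2010_07',   -- the newly added partition
    granularity      => 'PARTITION',
    estimate_percent => 100);

  -- periodically refresh the global stats with a sample
  dbms_stats.gather_table_stats(
    ownname          => 'OWNER1',
    tabname          => 'BIG_TAB',
    granularity      => 'GLOBAL',
    estimate_percent => 10);
end;
/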
In the article regarding gathering CBO statistics, it states: "When an Oracle database is created, a job will be scheduled that will generate the database statistics for you. You will still need to collect system statistics, however, as these are not collected by the automatic statistics gathering mechanism."
What is the difference between "database statistics" and "system statistics"? In other words, do I need to run this script for each schema owner in my 10g/11g instance?
variable whoami varchar2(20);
begin
  select user into :whoami from dual;
end;
/
exec dbms_stats.gather_schema_stats( -
  ownname          => :whoami, -
  options          => 'GATHER AUTO', -
  estimate_percent => 15, -
  cascade          => true)
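For contrast: system statistics describe the hardware (CPU speed, I/O throughput) rather than any schema's data, so they are gathered once per instance, not per owner. A minimal sketch of a workload capture:

-- Start capturing workload system statistics...
exec dbms_stats.gather_system_stats('START');
-- ...let a representative workload run, then:
exec dbms_stats.gather_system_stats('STOP');

-- Or capture over a fixed window (minutes):
exec dbms_stats.gather_system_stats('INTERVAL', interval => 60);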
I am updating the statistics for a table (with GATHER_TABLE_STATS) and then using NUM_ROWS. This works fine for me as long as I am the owner of the table, but when someone else is, I always get this error: ORA-20000: Table does not exist or insufficient privileges. What privileges do I need to use GATHER_TABLE_STATS on tables created by other users?
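If I remember correctly, the ANALYZE ANY system privilege is what allows gathering statistics on other schemas' tables; granted as a DBA (the grantee name here is a placeholder):

GRANT ANALYZE ANY TO scott;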
When I tried to use ANALYZE TABLE TEST_TABLE COMPUTE STATISTICS on a certain table, I got the following error: "a view is not appropriate here". The strange thing is, TEST_TABLE is not a view (at least it is not listed in ALL_VIEWS and it is listed in ALL_TABLES, so it can't be a view, right?).
Besides, is there another way to gather table statistics (other than ANALYZE TABLE or GATHER_TABLE_STATS)?
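One thing worth ruling out is a name-resolution issue: if TEST_TABLE resolves in your session to a synonym pointing at a view (or at another schema's object), ANALYZE would complain even though a table of that name exists somewhere. A quick check:

SELECT owner, object_type
  FROM all_objects
 WHERE object_name = 'TEST_TABLE';

SELECT owner, table_owner, table_name
  FROM all_synonyms
 WHERE synonym_name = 'TEST_TABLE';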
When I launch it from the command line, it stops with the error message "insufficient privileges" and asks me for the user, so I enter 'PDMUSER', which is my user; it also asks for the password, I enter it, and then it works.
I have to create some indexes in a production database. Do I need to compute statistics after creating the indexes, or are they computed automatically when I create them?
The version I'm using is:
Oracle Database 10g Release 10.2.0.5.0 - 64bit Production
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for 64-bit Windows: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
I have created a materialized view which holds a few million records. Do I have to analyze the view and compute statistics after I create the materialized view?
Also, in case I add further indexes later, do I have to gather statistics for the table again?
Is it possible to gather stats for a schema while it is in use? When I try to analyze the tables of the schema, it shows that the statistics for the table are locked. So, instead of analyzing tables one by one, can I gather the schema stats while the objects of that schema are still in use (with DML or SELECT statements being issued against them)?
DB version : 10.2.0.4 OS version : RHEL 5.8 DB type : RAC
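Gathering stats on objects that are concurrently queried or modified is normal; the "statistics locked" message is a separate issue and usually means DBMS_STATS.LOCK_TABLE_STATS (or LOCK_SCHEMA_STATS) was run earlier. A sketch for finding and releasing the locks (schema and table names are placeholders):

-- Which tables in the schema have locked stats?
SELECT table_name, stattype_locked
  FROM dba_tab_statistics
 WHERE owner = 'APPSCHEMA'
   AND stattype_locked IS NOT NULL;

-- Unlock one table, or the whole schema, then gather
exec dbms_stats.unlock_table_stats('APPSCHEMA', 'SOME_TABLE');
exec dbms_stats.unlock_schema_stats('APPSCHEMA');
exec dbms_stats.gather_schema_stats(ownname => 'APPSCHEMA');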
I have manually gathered a frequency histogram on one column of a table, to give the optimizer more information for a better cardinality estimate.
Now I have a weekend job that gathers stats at the schema level with method_opt 'FOR ALL COLUMNS SIZE REPEAT'. But I don't want the stats of the above column to be overridden by the stats job. I don't want to lock the statistics of the whole table; I just want to lock the column-level stats for this table.
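As far as I know there is no column-level stats lock. One workaround (a sketch; schema, table, column, and staging-table names are placeholders) is to stage the manually gathered column stats once, and re-import just that column after each schema-level run:

-- One-time: stage the manual column stats in a staging table
exec dbms_stats.create_stat_table('MYSCHEMA', 'COLSTATS_BK');
exec dbms_stats.export_column_stats(ownname => 'MYSCHEMA', tabname => 'MY_TAB', -
  colname => 'MY_COL', stattab => 'COLSTATS_BK');

-- After the weekend job: restore only that column's stats
exec dbms_stats.import_column_stats(ownname => 'MYSCHEMA', tabname => 'MY_TAB', -
  colname => 'MY_COL', stattab => 'COLSTATS_BK');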
I don't know if this is the intended behavior of Oracle or not, but I have noticed that my queries' execution plans change randomly after statistics collection. Several tables are truncated after the daily run at 8 AM, and statistics are then gathered for all the tables in that schema.
However, the execution plans for 2-3 SQL statements always change after this, and performance is brought back to normal by executing the procedure explicitly from the command line with literal arguments instead of bind variables.