I need to spool data from a remote server to a local machine using PuTTY (SQL*Plus). There are credentials I need to supply before accessing the remote databases, and I am able to do that. I tried the query below, but the spool file (CSV or TXT) is not created on the local machine.
set colsep ,
set pagesize 120
set trimspool on
set headsep off
set linesize 1000
set numw
spool D:\test\myfile.csv
select table_name, tablespace_name from all_tables;
spool off
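Note that SPOOL always writes on the machine where the SQL*Plus client itself runs, so when sqlplus runs inside the PuTTY session on the remote server the file is created on that server, not on the PC. A minimal sketch, assuming an Oracle client on the local Windows machine and a hypothetical TNS alias REMOTEDB, so that the spool file lands on the local D: drive:

REM started on the local Windows machine: sqlplus scott@REMOTEDB
set colsep ,
set pagesize 0
set trimspool on
set linesize 1000
spool D:\test\myfile.csv
select table_name, tablespace_name from all_tables;
spool off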
I have a huge table (about 60 GB) partitioned by range. The index on this table is a global index created on 4 columns together. I have a query which is running very slowly. The explain plan shows the use of this global index, and it does not show Pstart and Pstop because the index is global.
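If the range partitioning key can be made the leading column of the index, a LOCAL partitioned index is one thing worth testing, since the optimizer can then prune partitions and the plan will show Pstart/Pstop again. A minimal sketch with hypothetical table and column names:

-- range_key_col is the partitioning key; all names here are hypothetical
CREATE INDEX big_tab_local_ix
  ON big_tab (range_key_col, col2, col3, col4)
  LOCAL;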
We are running a stored procedure by calling it from a shell script. The shell script runs on Server1 and the stored procedure runs on Server2 (which is the database server).
Functionality of the stored procedure: generate a tab-delimited file and place it in a directory.
Requirement: the stored procedure should be modified so that it drops the output file on Server1 itself (the server from which it is called).
Attaching the stored procedure for reference. In the stored procedure we have a variable "lv_file_dir" which currently refers to a directory location on the database server (Server2). We have to change this SP so that it drops the generated file on Server1 itself (where the shell script is running).
Attached file: Procedure.txt (3.28 KB)
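One option that avoids touching lv_file_dir at all: UTL_FILE can only write to directories on the database server's own file system (Server2), so instead of the procedure writing the file, the shell script on Server1 can pull the rows itself and spool the tab-delimited file locally. A rough SQL*Plus sketch, with a hypothetical source view and output path:

set pagesize 0 linesize 4000 trimspool on feedback off
spool /home/user1/report.txt
select col1 || chr(9) || col2 || chr(9) || col3 from my_report_view;
spool off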
I have a requirement to read a flat text file (around 15,000 lines) residing at a client location from the DB server and write it into a table, in one cell.
I tried UTL_FILE and DBMS_LOB but I am not able to access the client location to read the file, as the path is read from an Oracle directory.
E.g. my client machine is 192.168.1.1 and my DB server is on Unix, say 192.168.1.10. The file location is \\192.168.1.1\share\abc.txt, so I created one Oracle directory MY_DIR with DIRECTORY_PATH '\\192.168.1.1\share'. But neither UTL_FILE nor DBMS_LOB is able to access the file.
Error message: Unable to process CLOB -22288 ~ ORA-22288: file or LOB operation FILEOPEN failed - No such file or directory
A few details for reference:
File location: \\192.168.1.1\share\abc.txt
Unix DB server: 192.168.1.10
Table: Test (filename VARCHAR2(30), content CLOB)
Oracle directory: MY_DIR
Directory path: \\192.168.1.1\share
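ORA-22288 here usually means the path is not visible to the operating-system user that runs the Oracle processes; on a Unix DB server a Windows share generally has to be mounted (NFS/CIFS) first, with the Oracle directory pointed at the mount. A minimal sketch, assuming the share has been mounted at a hypothetical /mnt/clientshare, that loads the file into the CLOB column:

CREATE OR REPLACE DIRECTORY my_dir AS '/mnt/clientshare';  -- hypothetical mount point on the DB server

DECLARE
  l_bfile  BFILE := BFILENAME('MY_DIR', 'abc.txt');
  l_clob   CLOB;
  l_dest   INTEGER := 1;
  l_src    INTEGER := 1;
  l_ctx    INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn   INTEGER;
BEGIN
  INSERT INTO test (filename, content) VALUES ('abc.txt', EMPTY_CLOB())
  RETURNING content INTO l_clob;

  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(
    dest_lob     => l_clob,
    src_bfile    => l_bfile,
    amount       => DBMS_LOB.LOBMAXSIZE,
    dest_offset  => l_dest,
    src_offset   => l_src,
    bfile_csid   => DBMS_LOB.DEFAULT_CSID,
    lang_context => l_ctx,
    warning      => l_warn);
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/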
I am working on Database 11g. I need to download a file to the local machine after the file is created in the database directory using UTL_FILE. I am able to generate the file but am not aware how to copy or download the file to the local machine using PL/SQL. I have done the same in Forms using the WebUtil PLL.
I need a way to FTP a file to a remote server by reading data from a table. I searched a couple of sites which asked me to use Chris's xutl_ftp package, but unfortunately the site is not accessible.
Here is the code
CREATE OR REPLACE PACKAGE UTL_FTP
AUTHID CURRENT_USER
AS
/**
 * LICENSE: GNU Lesser General Public License (LGPL)
 * Copyright (C) 2003-2006 Russ Johnson (john_2885@yahoo.com)
I need to take the database trace of each page of my web application. I am giving the following commands.
BEGIN
  dbms_monitor.session_trace_enable(session_id => 122, serial_num => NULL,
                                    waits => TRUE, binds => TRUE);
END;
/
Accessing the web application. Once it renders completely, I am executing the following command.
BEGIN
  dbms_monitor.session_trace_disable(session_id => 122);
END;
/
In the user_dump_dest folder, I expect to see a new trace file whenever I execute these commands, but the same trace file keeps getting updated. How do I make Oracle create a new trace file for each iteration? I am using Oracle 11g on CentOS 5.x.
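A session writes all of its trace into one file for the life of the session, which is why the same file keeps growing. To get a separate file per iteration, the traced session itself has to change TRACEFILE_IDENTIFIER before each run (dbms_monitor from another session cannot set it). A minimal sketch, assuming you can execute a statement in session 122 between iterations; the label is arbitrary:

-- run inside the traced session before each page request (label is hypothetical)
ALTER SESSION SET tracefile_identifier = 'page_home_run1';

-- 11g: confirm which trace file the current session will write to
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';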
The redo log file size is an important DB performance factor when the DB runs in ARCHIVELOG mode; if the DB runs in NOARCHIVELOG mode, the redo log file size does not impact DB performance.
I would like to make a change on the live system! I have read a book and found information saying that the redo log file size impacts DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor suggests an optimal log file size of 1845 MB. What redo log file size is best for my Oracle database?
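The advisor's figure can be read straight from the instance, together with how often the logs actually switch, which is usually the deciding factor (a common rule of thumb is a switch roughly every 15-20 minutes). A quick check:

SELECT optimal_logfile_size FROM v$instance_recovery;   -- advisor suggestion, in MB

SELECT TRUNC(first_time) AS day, COUNT(*) AS log_switches
  FROM v$log_history
 GROUP BY TRUNC(first_time)
 ORDER BY day;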
I am getting back into Oracle (after a long haul in an MS-only environment) and am now testing Oracle installs. I have been given the task of seeing the difference between 12c and 10.2g. I set up 2 VMs (exactly the same configuration) and used the same dmp file (in both environments) to restore the data and settings for our jobs to run. We have some aggregated data, and cubes with DIM tables, each run on the VM machines. We run nightly jobs to rebuild our cubes.
I am supposed to see/analyze the value of 12c, and I understand things might vary from company to company, but I am perplexed at my result: 12c is half the speed of 10.2g, even though both environments are the same out of the box, with the same dmp file and the same hardware.
I am using the same dmp file, with the same jobs on each machine, with both VMs having 10.2g or 12c installed out of the box as is. What default Oracle settings might have changed from 10.2g to 12c that could make the exact same environment run twice as slowly on 12c?
The expectation was that, out of the box, with both machines running the same jobs on the same data (from the dmp files), 10.2g would be slower than 12c; instead, 12c takes 2 times as long to run the jobs. I have reviewed every possibility, as I know the problem is usually the person sitting in the chair and not the PC, but I confirmed everything was identical from one VM environment to the other except the version of Oracle out of the box.
What could be done to bring that default setting back to at least equal time between the two? That would give me a great starting point. Otherwise, I would have to chalk this up to bloatware.
I read up a bit on the CBO and know this might have changed in 12c. Is there a way to bring it back to an earlier, backwards-compatible configuration, so as to at least match the execution plans in both environments?
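One switch that is often used for exactly this kind of comparison is OPTIMIZER_FEATURES_ENABLE, which asks the 12c optimizer to behave like an earlier release; it is a diagnostic experiment rather than a fix, and it also helps to confirm that statistics were gathered after the import on both VMs. A sketch:

-- test at session level first
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';

-- or instance-wide, so the nightly jobs pick it up
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.4' SCOPE = BOTH;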
I am trying to change the current redo log file location to a multiplexed redo log configuration, and I made this change on the DB. I imported a backup file into the DB after the change, but the import process runs much more slowly than with the old configuration, approximately 4 times slower.
When I restore the old redo log file configuration, the import process runs normally. Why does this change hit DB import performance? The old redo log file location is below:
I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.
I also took a trace (tkprof) for the same query. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds.
In a 3-node RAC setup, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days back, but from the 9th oldest day it jumped and has consistently stayed in double figures. I ran AWR reports on all three nodes and found one node with high CPU utilization, showing the top events below:
EVENT                           WAITS     TIME(S)   AVG WAIT(MS)   %TOTAL CALL TIME   WAIT CLASS
CPU time                                  5,802                    34.9
RFS ping                        15        5,118     33,671         30.8               Other
Log file sequential read        234,831   5,036     21             30.3               System I/O
SQL*Net more data from client   24,171    1,087     45             6.5                Network
db file sequential read         130,939   453       3              2.7                User I/O
Findings: in the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for 30% of the waits and the "log file sequential read" wait event accounts for another 30% of the waits occurring in the database.
1) Are these symptoms of an undersized log buffer? 2) I feel the network waits can be reduced by tweaking the SDU and TDU values based on the MTU.
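On point 1, the log-buffer theory can be checked directly from the instance statistics: if sessions rarely wait for redo buffer space, the buffer is not the problem (and the RFS ping event itself points at Data Guard redo shipping rather than the log buffer). A quick check:

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo buffer allocation retries',
                'redo log space requests',
                'redo log space wait time');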
I'm looking for a solution to hyperlink to a local file (e.g. C:/test.txt). I want the user to click on a "browse file" button like the file browse item; after the user has selected a file or just a folder, the path should be shown in a display-only item, and this item should be clickable.
Is it possible to create a "browse file" dialog? I know that linking to a file is possible with some HTML tags in the "pre element text" and "post element text" fields, something like <a href="...">, but I don't know how this works.
I am facing an issue with one of my programming applications. We need to transfer a file through PL/SQL programming from a Unix server to another destination (target) server.
How can we do it through PL/SQL programming? I have learnt that Oracle Streams can be used for this.
I do not know what code is required for this. Please send me the code for a sample text file.
I also want to encrypt it with a password and zip the file.
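Oracle Streams is not really a file-transfer tool; for moving a file from one database server to another purely in PL/SQL, DBMS_FILE_TRANSFER over a database link is the usual option (it copies binary content and requires the file size to be a multiple of 512 bytes), while password-encrypting and zipping would have to happen at the OS level or with other packages before or after the copy. A minimal sketch with hypothetical directory and database-link names:

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SRC_DIR',          -- directory object on the source server (hypothetical)
    source_file_name             => 'sample.txt',
    destination_directory_object => 'DEST_DIR',         -- directory object on the target database (hypothetical)
    destination_file_name        => 'sample.txt',
    destination_database         => 'TARGET_DB_LINK');  -- database link to the target (hypothetical)
END;
/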
I have a query that is optimized as to its indexes and so on, and it returns immediately when the answer is a few records, in SQL Server as well as in Oracle. However, when the result is large, e.g. 20,000 records, the data access times are very different. The query returns many fields (about 20), some of type VARCHAR(250) and some of VARCHAR(2000); I understand the problem may lie here, but it does not, because for a similar result (20,000 records) SQL Server runs in 2 seconds while Oracle takes around 30 seconds to return the full data. The problem really is in bringing back all these fields, since the same query returning only a single numeric field completes in 2 seconds. I have run the tests through ODBC, in Toad, and in the Oracle console on the server itself, so it is not a driver problem or the flow of data through the network. I would like to think it is some setting, as I do not think there should be that much difference between Oracle and SQL Server. The databases are Oracle 10g and SQL Server 2008.
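A pattern like this (fast for one numeric column, slow for 20 wide columns over 20,000 rows) is often just a small fetch array size on the Oracle client, which turns the result set into thousands of network round trips; it is worth re-testing with a bigger fetch size before comparing the engines. A SQL*Plus sketch:

set arraysize 500      -- rows fetched per round trip; the default is 15
set autotrace traceonly statistics
-- run the 20,000-row query here and compare elapsed time and
-- "SQL*Net roundtrips to/from client" against the default arraysize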
A few days ago my database server lost access to the storage box; I rebooted it and afterwards it worked fine. But now the DB import process is too slow. Before, a 100 GB DB import completed within 10 hours when the server was running normally; now it has been running for 2 days and is still not complete.
How do I investigate this issue? Maybe I missed increasing some parameters on the server or in Oracle?
Here is my server brief info:
RAM is 16GB, SWAP size is 16GB, CPU 12 cores
SQL> show sga;
Total System Global Area 4294967296 bytes
Fixed Size                  1984144 bytes
Variable Size             369105264 bytes
Database Buffers         3909091328 bytes
Redo Buffers               14786560 bytes
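To investigate, the first thing worth looking at is what the import session is actually waiting on, and whether any long-running operation is making progress. A sketch (the program filter is an assumption; adjust it to match your import client):

-- find the import session(s) and see what they are currently waiting on
SELECT sid, serial#, program, event, state, seconds_in_wait
  FROM v$session
 WHERE program LIKE '%imp%';

-- long-running operations (table loads, index builds) and their progress
SELECT sid, opname, sofar, totalwork, time_remaining
  FROM v$session_longops
 WHERE totalwork > 0 AND sofar < totalwork;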
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is the memory-based (logical) structures, whereas the database consists of the physical structures.
However, how does one tune the database, i.e. the physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
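For the instance (memory) side, the built-in advisor views give a concrete starting point, while the physical side usually starts from file and segment I/O statistics; a small sketch of each, assuming an 11g instance:

-- instance side: estimated effect of other buffer cache / shared pool sizes
SELECT size_for_estimate, estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT';

SELECT shared_pool_size_for_estimate, estd_lc_time_saved
  FROM v$shared_pool_advice;

-- physical side: which data files carry the most I/O
SELECT f.file_name, s.phyrds, s.phywrts
  FROM v$filestat s
  JOIN dba_data_files f ON f.file_id = s.file#
 ORDER BY s.phyrds DESC;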