I am looking to get the maximum value for every 24-hour period over a month. So, for example, my date range can be defined by...
select to_date('&date','mm yyyy')-1 + level as DateRange
from dual
connect by level <= to_number('&days')
...where I can provide the first date of the month and the number of days in the month, or a lesser value if less time is required. So, the results of the above query, plus 24 hours for the range. I thought some Googling would provide what I needed, but my search came up empty.
I was hoping to do something like this...
select utctime, max(value) from table where utctime between ...
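A sketch of one way to get there without spelling out each 24-hour window (hypothetical table name readings; same &date and &days substitution variables): truncating each timestamp to its day makes every 24-hour period collapse into one group.

select trunc(utctime) as day_start,
       max(value)     as max_value
  from readings
 where utctime >= to_date('&date', 'mm yyyy')
   and utctime <  to_date('&date', 'mm yyyy') + to_number('&days')
 group by trunc(utctime)
 order by day_start

Days with no rows drop out of this; if every day must appear, the CONNECT BY generator above can be outer-joined to these results.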
I want to run the salary posting procedure automatically at midnight every day. I have created a job in Toad and defined the required information, but it is not working.
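In case the Toad-defined job is the problem, a job created directly with DBMS_SCHEDULER is a useful cross-check (a sketch, assuming the procedure is named SALARY_POSTING; adjust the name to your schema):

begin
  dbms_scheduler.create_job(
    job_name        => 'SALARY_POSTING_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SALARY_POSTING',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=0;BYMINUTE=0;BYSECOND=0',
    enabled         => true);
end;
/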
I need to sum the daily totals below into weekly totals, with the week starting on Saturday, Jan 3 2009, for the 52 weeks in 2009. The query below gives me the weekly total with the week starting on MONDAY, but I need the week to start on SATURDAY instead of MONDAY.
select to_char(report_date, 'YYYYIW'), sum(total)
  from report_table
 where to_number(to_char(report_date, 'YYYYIW')) >=
       to_number(to_char(to_date('&one_year_ago'), 'YYYYIW'))
 group by to_char(report_date, 'YYYYIW')
Here is a query I used to generate the Daily Sample Data:
SELECT DISTINCT
       A.PRODUCT,
       TO_CHAR(B.BEGIN_DT, 'YYYY-MM-DD') AS post_date,
       'RCSL',
       A.DEPTID,
       A.ACCOUNT,
       SUM(A.POSTED_TOTAL_AMT) AS amount_posted
  FROM A, B, C
 WHERE B.BEGIN_DT BETWEEN TO_DATE('&StartDate', 'YYYY-MM-DD')
                      AND TO_DATE('&EndDate', 'YYYY-MM-DD')
 GROUP BY A.PRODUCT, TO_CHAR(B.BEGIN_DT, 'YYYY-MM-DD'), A.DEPTID, A.ACCOUNT
For the sample data below, I would need to add the data from 2010-01-02 through 2010-01-08 to get my weekly total.
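One way to move the week boundary (a sketch against the same report_table as above): shift each date forward two days before truncating to the ISO week, then shift back, so every date maps to the Saturday that starts its week.

select trunc(report_date + 2, 'IW') - 2 as week_start,
       sum(total)                       as weekly_total
  from report_table
 where report_date >= to_date('03-01-2009', 'DD-MM-YYYY')
 group by trunc(report_date + 2, 'IW') - 2
 order by week_start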
How do I run three scripts one after another automatically on a daily basis? Say I have three scripts A, B, and C, and I want to run A followed by B followed by C.
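If A, B and C are SQL scripts, one simple pattern (hypothetical file names a.sql, b.sql, c.sql) is a master script that runs them back to back; scheduling that one master, via cron or DBMS_SCHEDULER, then covers all three in order.

-- master.sql: run the three scripts one after another
@@a.sql
@@b.sql
@@c.sql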
I'm working on my test db, and what I thought would be an easy task is turning out to be very difficult. I am trying to set up a cron job to email me the alert log daily so I can check for errors. I tested the script and it is not working. This is what I have to email me the alert log:
mailx -s "Alert Log for Testdb" oracleexample@gmail.com > /testapp/oracle/diag/rdbms/testdb/testdb/trace/alert_testdb.log
The Oracle machine A (11gR1) is located in San Jose. The Oracle machine B (10gR2) is located in Sydney.
I need to develop a daily replica from A to B. I don't need the entire db, only a single schema.
The entire DB size is 1 TB. The gzipped Data Pump dump is 5+ GB. Tnsping from A to B takes 500+ ms. The regular route (Data Pump export on A -> ftp -> Data Pump import on B) will be too slow because of the poor network. Which way would be the best to implement it?
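Given that only one schema has to move and the link is slow, one option worth weighing (a sketch with hypothetical names: table app.orders, db link a_link from B back to A, one materialized view per table in the schema): materialized-view replication, which ships only changed rows after the first build instead of a full dump every day.

-- On A (11gR1): track changes so refreshes move only deltas
create materialized view log on app.orders;

-- On B (10gR2): a fast-refreshable copy, refreshed nightly
create materialized view app.orders_mv
  refresh fast
  start with trunc(sysdate) + 1
  next  trunc(sysdate) + 1
  as select * from app.orders@a_link;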
There are two databases, database A and database B. Database A is Oracle 11.2.0.2 running on Linux, and database B is Oracle 11.2.0.2 running on a Windows XP machine. In database A there are hundreds of tables, which are updated every 10 or 15 minutes. For reporting purposes, the developer wants to run reports against those tables, but since database A is constantly being updated, generating reports there takes 15 to 20 minutes. So the reports could be generated on database B instead. Once a day, database B should receive the updated data from database A, so that reports can be generated on database B in less time. What would be the best way for database B to get the updated data from database A on a daily basis in Oracle?
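A sketch of one common answer (hypothetical table name ORDERS and db link dblink_a from B to A): a materialized view on B, completely refreshed once a day, so reports run against a stable local copy. With hundreds of tables, the CREATE statements can be generated from the dictionary, and MV logs on A would allow fast (incremental) refreshes instead of complete ones.

create materialized view orders_rpt
  refresh complete
  start with trunc(sysdate) + 1
  next  trunc(sysdate) + 1
  as select * from orders@dblink_a;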
However, I need to refresh the group manually sometimes, so I cannot set the interval as sysdate+1. I have tried setting the interval as follows, but it is not correct:
trunc(sysdate+1) + 4/24 (the next interval then shows 9:25:20 pm)
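The interval expression is re-evaluated each time the group refreshes, so anchoring it to trunc(sysdate) keeps the schedule fixed even after a manual run; a sketch assuming the refresh group is named MY_GROUP (hypothetical name):

begin
  dbms_refresh.change(
    name      => 'MY_GROUP',
    next_date => trunc(sysdate + 1) + 4/24,     -- next run: 4:00 am tomorrow
    interval  => 'trunc(sysdate + 1) + 4/24');  -- re-evaluated after every refresh
end;
/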
We have a two-node RAC 11gR2 (11.2.0.3) installed on Linux 5.5.
My problem is that the private interconnect disconnects daily at 12:00 PM and 3:00 PM only, and comes back to life within 2 minutes. I am using a cross-over cable to connect the private interfaces. The ocssd.log states:
2012-09-18 15:06:43.735: [ CSSD][1096161600]clssnmPollingThread: node racmain1 (1) at 50% heartbeat fatal, removal in 14.340 seconds
2012-09-18 15:06:50.752: [ CSSD][1096161600]clssnmPollingThread: node racmain1 (1) at 75% heartbeat fatal, removal in 7.330 seconds
2012-09-18 15:07:19.821: [ CSSD][1096161600]clssnmPollingThread: node racmain1 (1) at 90% heartbeat fatal, removal in 2.480 seconds, seedhbimpd 1
I need a good suggestion. Basically, I have two different locations:
1- Factory
2- Head office
I have developed the PRODUCTION module at the factory and it is working fine. Now I want to send data on a daily basis to the head office, so I developed a form which creates a backup in export format (.dmp); the user then sends it via email to the head office.
The backup file should be saved in a pre-defined location. The user then uses a form I developed for loading the data into the head office. There are two different buttons on this form:
The first is used to load data: I first load the data into a temporary user, which is created whenever the user presses this button. The second is used to copy the data into the application user, but first it checks whether the data already exists; if so it updates, otherwise it inserts.
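The check-then-update-otherwise-insert step behind the second button collapses into a single MERGE; a minimal sketch, assuming a staging table STG_PRODUCTION in the temporary user and a target PRODUCTION keyed on PROD_ID (all hypothetical names):

merge into production t
using stg_production s
   on (t.prod_id = s.prod_id)
 when matched then
   update set t.qty = s.qty, t.prod_date = s.prod_date
 when not matched then
   insert (prod_id, qty, prod_date)
   values (s.prod_id, s.qty, s.prod_date);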
I have to create a function. I need to find the max last logout date for each agent, daily. For example, if an agent logged in for the first time at 9:00 and logged out at 12:00, then logged in again at 14:00 and logged out at 15:00, the time I need my report to show is 15:00. How can I do that? To make it easiest to understand, I am sending you this query:
select a.login as login2,
       to_char(max(s.endtime), 'dd/MM/yyyy, HH24:MI:SS') as lastLogout
  from cti.agent a
 inner join cti.agentsessionlog s
    on s.agentid = a.agentid
   and to_char(s.endtime) != '31-DEC-99 11.59.59.000000 PM'
 group by a.login;
This query returns the agent's login and the agent's last logout time. It works fine if I put a BETWEEN on the date, but I cannot do that. If I use this query as it is and try to export a report for 31/5, it shows as lastlogout the logout from 01/06 or 02/06. Is there a function I can use? I have a deadline.
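One way to get a last logout per agent per day rather than overall (a sketch over the same cti tables): group on the truncated endtime as well, so each calendar day gets its own row.

select a.login as login2,
       trunc(s.endtime) as logout_day,
       to_char(max(s.endtime), 'dd/MM/yyyy, HH24:MI:SS') as lastLogout
  from cti.agent a
 inner join cti.agentsessionlog s
    on s.agentid = a.agentid
   and to_char(s.endtime) != '31-DEC-99 11.59.59.000000 PM'
 group by a.login, trunc(s.endtime);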
I made small inventory software for a medical store. Now I want the daily data in a DMP file. How can I get only the current date's data into the DMP file? I don't need all of it.
I mean, I have 30 tables in Oracle. They are updated daily with new entries; some tables have a date column and some do not. Actually, I want to send the daily data via mail.
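For the tables that do have a date column, Data Pump's QUERY parameter can restrict the export to today's rows; a sketch with hypothetical names (table ORDERS, date column ORDER_DATE, directory object DUMP_DIR), using a parameter file to keep the quoting manageable:

# daily.par
directory=DUMP_DIR
dumpfile=daily.dmp
tables=ORDERS
query=ORDERS:"WHERE order_date >= trunc(sysdate)"

expdp scott/tiger parfile=daily.par

Tables without a date column have nothing to filter on, so those would need a full export or an added audit column.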
Oracle 11gR2 on HP-UX 11.23. I need to develop a shell script which will monitor the alert log daily.
It should check only for warnings/errors over the last 1 day, up to collection time, and output to a file. If there are no errors, it should output "No Errors/Warnings"; otherwise it should output the relevant lines.
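The filtering half can be prototyped in SQL before wiring the script around it. Since 11g the XML alert log is exposed through the fixed table X$DBGALERTEXT (queried as SYS), so the last day's errors reduce to one query; the shell wrapper then only has to spool this and substitute "No Errors/Warnings" when nothing comes back:

select to_char(originating_timestamp, 'YYYY-MM-DD HH24:MI:SS') as stamp,
       message_text
  from x$dbgalertext
 where originating_timestamp > systimestamp - interval '1' day
   and (message_text like '%ORA-%' or message_text like '%WARNING%');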
create directory asmexpdir as '+RECO/FILTDB/EXPDP';
grant read, write on directory asmexpdir to oraasfs;

expdp oraasfs/oraasfs2301 directory=asmexpdir dumpfile=SBSR_EXP.dmp tables=TM_SFS_CUST_01 logfile=EXPDP_LOG:SBSR_EXP.log
SUCCESS MESSAGE
. . exported "ORAASFS"."TM_SFS_CUST_01" 387.2 MB 817684 rows Master table "ORAASFS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded ****************************************************************************** Dump file set for ORAASFS.SYS_EXPORT_TABLE_01 is: +RECO/filtdb/expdp/sbsr_exp.dmp Job "ORAASFS"."SYS_EXPORT_TABLE_01" successfully completed at 03:34:59
And I would like to run this daily and delete dumps older than 14 days, but it shows an error. What can be the solution to run this script?
#!/bin/bash
# Script to perform Data Pump export backup every day
################################################################
# Change History
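A minimal sketch of the daily-run-plus-retention idea (hypothetical paths and directory object; note the +RECO location above lives in ASM, which find/rm cannot reach, so this assumes a filesystem directory object, with asmcmd rm as the ASM alternative):

#!/bin/bash
# Daily Data Pump export with a 14-day retention sweep (hypothetical names).
DUMPDIR=/u01/export    # filesystem path behind the FS_EXPDIR directory object
expdp oraasfs/oraasfs2301 directory=fs_expdir \
      dumpfile=SBSR_EXP_$(date +%Y%m%d).dmp \
      tables=TM_SFS_CUST_01 \
      logfile=SBSR_EXP_$(date +%Y%m%d).log
# expdp refuses to overwrite an existing dump file, hence the dated names.
find "$DUMPDIR" -name 'SBSR_EXP_*.dmp' -mtime +14 -exec rm -f {} \;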
I have inherited a query that UNION ALLs two SELECT statements. I added a further field (a date field) to one of the SELECT statements. However, I need to add another dummy field to the second SELECT statement so the union query marries up. I have tried to do this by simply adding
select 'date_on' to add a field called date_on, populated by 'date_on' (the name of the column in the first query);
however, when I run the union query I get the error ORA-01790: expression must have same datatype as corresponding expression.
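ORA-01790 is positional: each branch of a UNION ALL must supply the same datatype column by column, and the literal 'date_on' is a string where the first branch has a DATE. The usual fix is a typed NULL placeholder; a sketch with hypothetical tables t1 and t2:

select id, date_on
  from t1
union all
select id, cast(null as date) as date_on
  from t2;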
I have a dynamic query stored in a function that returns a customized SQL statement depending on the environment it is running in. I would like to create a Materialized View that uses this dynamic query.
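One way to bridge the two (a sketch assuming the function is named GET_REPORT_SQL and returns a complete SELECT; hypothetical name): build the DDL as a string and run it with EXECUTE IMMEDIATE. Note the view's defining query is frozen at creation time, so each environment must create its own copy.

begin
  execute immediate 'create materialized view report_mv as ' || get_report_sql;
end;
/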
I have data in a table and more in an XML file. I used a SQL query to retrieve the data placed in the table, and linked this query with an XML query that retrieves the data stored in the XML file. The table and the XML file share a key field, but the XML contents are fewer than what is in the table. I want to show only the data shared between the two queries. How can I do that?
e.g.:
Table emp:
e_id | e_name | e_sal
023  | John   | 6000
143  | Tom    | 9000
876  | Chi    | 4000
987  | Alen   | 7800
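Keeping only the shared rows is an inner join on the key; a sketch assuming the XML sits in a bind variable :xml_doc with a hypothetical emps/emp structure exposing e_id:

select e.e_id, e.e_name, e.e_sal
  from emp e
  join xmltable('/emps/emp'
                passing xmltype(:xml_doc)
                columns e_id varchar2(10) path 'e_id') x
    on x.e_id = e.e_id;

An inner join (rather than an outer one) is what drops the table rows that have no XML counterpart.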
I have the following four tables with the following structures. Table A:
ColA1 ColA2 ColA3 ColA4 ColA5
AA    100   CC    DD    EE
Table B
ColB1 ColB2 ColB3 ColB4 ColB5
AA    100   40452 A9    CDE
When these two tables were joined like the following:
select colA1, ColA2, ColA3, ColA4, ColB3, ColB4, ColB5
  from A
  left outer join (select ColB1, ColB2, ColB3, ColB4, ColB5
                     from B
                    where ColB3 = (select max(ColB3) from B)) b
    on (colA1 = b.ColB1 and ColA2 = b.ColB2)
I have a query that is pulling back more rows when I use the dblink than when I hit the linked database directly.
For example:
select x,y,z from mytable@dblink
returns 788,324 rows
while select x,y,z from mytable
returns 712,102 rows
It's the exact same query; the only difference is the dblink. It's not pulling the data into a cursor or array; it's a simple, straightforward query on a remote database.
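Two quick sanity checks before digging deeper (standard dictionary views, nothing assumed beyond the link name): confirm where the link actually points, and ask the remote side to identify itself. A link aimed at a different copy of the table is the usual culprit for a row-count mismatch.

select db_link, host from user_db_links;
select * from global_name@dblink;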
Is there a technique for getting a Top-N query to work as a sub-select in a larger query, or is there another way to generate Top-N-like results that works as a sub-select?
Background:
We have a large query that is being used to build an export from a legacy HR system to a new one. Among the data needed in the export is the employee's primary phone number.
The legacy HR system allows multiple phone numbers to be stored in a simple table structure:
SELECT emp_id, phone_type, phone_number FROM employee_phones
The new HR system does allow for multiple phone numbers; however, it needs a primary phone number identified and stored with the employee master information. (Subsequent phone numbers get stored in an alternate table.)
From a business perspective, we have decided that if they have a HOME phone in the legacy system, that should be the primary in the new system; if no HOME phone, then WORK; if no WORK, then CELL.
That can be represented as:
SELECT *
  FROM employee_people_phones
 WHERE emp_id = '46021'
 ORDER BY decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z')

SELECT *
  FROM (SELECT *
          FROM employee_people_phones
         WHERE emp_id = '46021'
         ORDER BY decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z')) results
 WHERE ROWNUM = 1

SELECT phone_number
  FROM (SELECT phone_number
          FROM employee_people_phones
         WHERE emp_id = '46021'
         ORDER BY decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z')) results
 WHERE ROWNUM = 1
phone_number
-------------------
1111111111
However, when the Top-N query is added as a sub-select in a larger query using the employee id from the larger query (WHERE emp_id = export.emp_id), it fails saying that "export.emp_id" is not a valid identifier.
(SELECT phone_number
   FROM (SELECT phone_number
           FROM employee_people_phones
          WHERE emp_id = export.emp_id
          ORDER BY decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z')) results
  WHERE ROWNUM = 1)
1. Any way around this? Is it possible to put a Top-N (with a WHERE clause using data from the main query) in a sub-select?
2. Any alternatives (other than Top-N) to delivering a ROWNUM=1 result with a "custom" ORDER BY statement?
Other notes: yes, we know we could do two passes in the data conversion: first deliver the bulk data to the target table, and then update it with the phone numbers. However, for multiple reasons, that is less than desirable.
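The failure is a correlation-depth limit: a correlated reference like export.emp_id is only visible one subquery level down, and the Top-N form needs it two levels down. One pattern that sidesteps this entirely (same employee_people_phones table): rank every employee's phones once with an analytic function, then join the rank-1 rows into the larger query on emp_id.

select emp_id, phone_number
  from (select emp_id,
               phone_number,
               row_number() over (
                 partition by emp_id
                 order by decode(phone_type, 'HOME', 'a', 'WORK', 'b', 'CELL', 'c', 'z')
               ) as rn
          from employee_people_phones)
 where rn = 1

This delivers the same HOME-then-WORK-then-CELL choice without any correlated sub-select.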
I have a SELECT query (Query 1 below), and I want one column (sum(col4)) from this query to be displayed in another SELECT query (Query 2). How can I display this?
Query 1 :-
select a.col1, a.col2, b.col3, sum(b.col4)
  from tab a, tab b
 where a.key1 = b.key1
   and a.key2 = b.key2
 group by a.col1, a.col2, b.col3

Query 2 :-
select a.col1, a.col2, b.col3, sum(b.col6)
  from tab a, tab b
 where a.key1 = b.key1
   and a.key2 = b.key2
 group by a.col1, a.col2, b.col3, b.col5
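One common way to carry the aggregate across (a sketch over the same tables): factor both queries into WITH clauses and join them on the shared grouping columns.

with q1 as (
  select a.col1, a.col2, b.col3, sum(b.col4) as sum_col4
    from tab a, tab b
   where a.key1 = b.key1 and a.key2 = b.key2
   group by a.col1, a.col2, b.col3
),
q2 as (
  select a.col1, a.col2, b.col3, b.col5, sum(b.col6) as sum_col6
    from tab a, tab b
   where a.key1 = b.key1 and a.key2 = b.key2
   group by a.col1, a.col2, b.col3, b.col5
)
select q2.col1, q2.col2, q2.col3, q2.col5, q2.sum_col6, q1.sum_col4
  from q2
  join q1
    on q1.col1 = q2.col1
   and q1.col2 = q2.col2
   and q1.col3 = q2.col3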