TimesTen In-Memory :: Manual Refresh Cache Groups From Oracle DB?
May 3, 2013
We want to truncate a table in the Oracle DB. After the truncate, the fact table will be loaded again. Once the fact table has been reloaded, we want to tell the TimesTen DB to refresh the cache table. The cache table belongs to a user-owned read-only cache group with no autorefresh. We want a PL/SQL block in the Oracle DB that tells TimesTen to start the refresh of the cache group. The refresh must not be an autorefresh, because it should start only after the fact table has been reloaded following the truncate.
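For reference, a minimal sketch of the TimesTen-side statement (the cache group name is hypothetical); it has to be executed against the TimesTen datastore, for example in ttIsql, since an Oracle PL/SQL block cannot issue TimesTen SQL directly:

    -- Run in ttIsql against the TimesTen DSN after the Oracle reload completes.
    -- Cache group name is hypothetical; COMMIT EVERY bounds transaction size.
    REFRESH CACHE GROUP cacheadm.fact_cg COMMIT EVERY 256 ROWS;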
Can I know the internal process of loading the database into memory in TimesTen when a new connection is established? Will TimesTen create tables and indexes in RAM when the first connection is established, if the RAM policy is the default?
I want to know the internal functional flow of TimesTen when any command is issued against it.
We are inserting data using JDBC (a Java program) into an Oracle 11gR2 DB directly, and into TimesTen with propagation to Oracle (using an AWT cache group). In practice, insertion into Oracle is faster than into the TimesTen cache DB.
TimesTen version: TimesTen Release 11.2.1.7.0 (64-bit Linux/x86_64)
1. It is the client/server model.
2. The CPU has 4 cores and we are using 3 cores for inserting the data.
3. Perm and temp sizes are big enough compared to the data size.
4. AutoCommit = 0
5. DurableCommits = 0
6. PassThrough = 1
7. LogBufParallelism = 3
8. LogPurge = 1
9. LockWait = 0.1
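For reference, a hedged ttIsql sketch (the DSN name is hypothetical) expressing these settings as a connect string; autocommit is set as a ttIsql command rather than a connection attribute:

    -- DSN name is hypothetical; attribute values taken from the list above.
    connect "DSN=awt_ds;DurableCommits=0;PassThrough=1;LogBufParallelism=3;LogPurge=1;LockWait=0.1";
    autocommit 0;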
I have 5 MViews that I want to refresh on two schedules: every Sunday, and on the 1st of the month. I created a refresh group for the weekly schedule and that works fine. But when I tried to create the second refresh group for the monthly schedule, I got a "materialized view is already in a refresh group" error.
Can a materialized view only be in one refresh group? What options do I have to refresh it at different intervals?
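One approach, as a hedged sketch (MV names and job name are hypothetical): keep the weekly refresh group, and drive the monthly refresh with a direct DBMS_MVIEW.REFRESH call from a scheduler job, since no refresh group is needed for that path:

    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name        => 'MV_MONTHLY_REFRESH',  -- hypothetical name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV1,MV2,MV3,MV4,MV5''); END;',
        repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1',
        enabled         => TRUE);
    END;
    /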
We recently migrated a database from 9i to 10g (overdue, we know!) and discovered that the default behavior of dbms_mview.refresh was turned upside down, meaning that 10g no longer truncates the MV first when refreshing it. We're trying to unwind a lot of legacy issues, and it turns out that we also have 100 REFRESH GROUPs and 100 MATERIALIZED VIEWs: a 1-to-1 relationship, with exactly one MV defined in each RG.
These are my questions:
1) Does a 1-to-1 relationship between RGs and MVs make sense to anybody? The original implementors are gone and we can't fathom the reason for it.
2) Is there any reason why I shouldn't convert these 100 groups to 100 plain and simple MVs? I don't want the delete/insert refresh behavior of dbms_refresh.refresh; I do want the truncate behavior of dbms_mview.refresh with ATOMIC_REFRESH=FALSE for refreshing a standard MView.
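A hedged sketch of that conversion for one MV (owner and names are hypothetical): dismantle the single-MV refresh group, then refresh the MV directly with the non-atomic (truncate-based) path:

    BEGIN
      -- Removes the refresh group but leaves the MV itself in place.
      DBMS_REFRESH.destroy(name => 'SCOTT.RG_SALES_MV');  -- hypothetical group
      -- Non-atomic complete refresh: truncate + insert instead of delete + insert.
      DBMS_MVIEW.refresh('SCOTT.SALES_MV', method => 'C', atomic_refresh => FALSE);
    END;
    /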
As per my understanding, TimesTen or IMDB Cache can be connected to through a DSN by any external client. I want to know whether Radware can be integrated with TimesTen or IMDB Cache.
I wonder whether this JDBC connection is a TCP/IP connection on the backend. If it is not a TCP/IP connection, how can I make a connection to TimesTen via the TCP/IP protocol?
I'm using TimesTen Release 11.2.1.9.8 (64-bit Linux/x86_64).
1. Is there any limit on the size of a single table? How much can a table grow?
2. Is there any limit on the number of records in a table?
I have the following replication scheme: M1, M2 and M3 are multimaster replicated datastores. These three datastores replicate their data to a node that acts as a propagator. The propagator then replicates the data to a set of subscribers.
Question: Can we configure the propagator to be redundant? That is, can we configure an additional propagator that acts as a standby and replicates to the same set of subscribers?
We cache Oracle 11gR2 data in TimesTen 11.2.2.5.0 (IMDB). We can't find any trigonometric functions such as COS(x) or SIN(x) in TimesTen. Does that mean TimesTen doesn't support trigonometric functions?
The document says "Propagators are also useful for distributing replication loads in configurations that involve a master database that must replicate to a large number of subscribers".
Link: [URL]
My question is how we define this "large number". Is 5 a large number, or is 10? I have a bidirectional legacy replication scheme wherein a node replicates to 10 other nodes. Should I introduce a propagator between these nodes?
We have been using XLA to capture events on TimesTen for a while now without any issues. We were on TimesTen 7.0.5, then moved to 11.2.2.3.0 and now 11.2.2.4.0. The XLA processes were built with 64-bit Java 6 and work well with the 64-bit TimesTen 11.2.2.4.0 and 11.2.2.3.0 releases.
However, we recently upgraded to 64-bit Java 7, both for building our XLA processes and at runtime. The problem we see now is that the XLA process, upon startup, processes events for a while and then stops receiving any events. The process doesn't throw any errors or exceptions. If we restart the XLA process, all the unprocessed events are received and it behaves normally for a while, after which it again fails to receive any further events.
Is there any issue with 64-bit Java 7 and the TimesTen XLA API? I read the TimesTen 11.2.2.4.0 manual and it says that Java 7 and ttjdbc7.jar have been certified to work well. I was just wondering if there were any other known issues.
In addition, to debug the problem we ran XLA against a single node and against a two-safe setup. The same behavior is observed in both. On Java 6 the process runs just fine, capturing all events without any issues, but with Java 7 the issue persists.
In addition, we are only performing updates and inserts on TimesTen, no deletes. The OS on which TimesTen and XLA run is Red Hat Enterprise Linux Server release 5.7 (Tikanga).
I would like to have a query that fetches the previous day's records from a column of TIMESTAMP data type.
select mdn from user_table where updatetimestamp > trunc(sysdate) - INTERVAL '24' HOUR;
But this does not return only the previous day's records; it returns every record with a timestamp after midnight of the previous day, including today's. How do I get just the previous day's records based on a TIMESTAMP column?
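A hedged sketch of what seems to be wanted: bound the column to yesterday's calendar day on both sides (table and column names taken from the post):

    SELECT mdn
    FROM   user_table
    WHERE  updatetimestamp >= TRUNC(SYSDATE) - 1   -- midnight at the start of yesterday
    AND    updatetimestamp <  TRUNC(SYSDATE);      -- midnight at the start of today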
We are trying to execute the statement SELECT CURRENT_DATE FROM DUAL on TimesTen 11.2.2. It throws an "unknown referenced column" error:

Command> select current_date from dual;
2211: Referenced column CURRENT_DATE not found
The command failed.

But the following doc shows that it is supported.
The TimesTen PL/SQL Support: Reference Summary lists the CURRENT_DATE function as supported ("Y"): "Returns the current date in the session time zone. In TimesTen this returns the current date in UTC (universal time). TimesTen does not support local time zones."
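As a workaround sketch (assuming only that SYSDATE and DUAL are available, which TimesTen does provide), the current date can be fetched like this in ttIsql:

    -- SYSDATE works in TimesTen and sidesteps the CURRENT_DATE parsing issue.
    SELECT SYSDATE FROM dual;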
I want to increase the speed of importing data using ttIsql. My script contains about 12k similar MERGEs. Can I prepare this statement once and later substitute the parameters from the script?
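A hedged ttIsql sketch (table and columns are hypothetical, and this assumes I recall ttIsql's prepare/exec built-ins correctly): prepare the statement with ? placeholders once and re-execute it, supplying new bind values at each exec prompt, instead of hard-parsing 12k separate MERGE statements:

    -- Prepared once; ttIsql prompts for the ? parameter values on each exec.
    prepare MERGE INTO t USING dual ON (t.id = ?)
      WHEN MATCHED THEN UPDATE SET t.val = ?
      WHEN NOT MATCHED THEN INSERT (id, val) VALUES (?, ?);
    exec;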
For applications and TimesTen databases on the same server, we can use direct mode to get the best performance. But I want to know how big the JVM heap should be for my Java application when enabling direct mode.
Does my application need more heap memory in direct mode than with other local communication protocols, such as Unix domain sockets or IPC? Supposing my TimesTen database takes 12 GB of memory from the OS, does that mean I need to specify the same size for the JVM heap (-Xmx12G)?
I installed the TimesTen server and client on different machines in the LAN, but with the same user and group, ttadmin:ttgroup. When I tried to connect to the server with ttisqlcs -connStr "DSN=sampleCS", the output gave me the error messages below:

connect "DSN=sampleCS";
S1000: Failed to retrieve IP address of the system. System error: -2
The command failed.
Done.

The related part in sys.ttconnect.ini is set as below.
sys.odbc.ini:

[ODBC Data Sources]
sampleCS=TimesTen 11.2.2 Client Driver

[sampleCS]
TTC_SERVER=192.168.0.206
TTC_SERVER_DSN=nredb_ds
What does "retrieve IP address" mean here? Does it try to resolve the IP from the hostname even when I have already given it the IP address? By the way, I can ping 192.168.0.206 and telnet to port 53397 with no problem.
I am trying to evaluate TimesTen as an IMDB cache, and I am facing this error repeatedly. I have increased the perm size from 6.25 GB to 10 GB. After inserting about 460,000 rows I get the error again. Is it possible that 460,000 rows need 3.75 GB of space?
In the Oracle database these rows occupy about 200 MB of space.
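A diagnostic sketch, assuming only the standard SYS.MONITOR system table: comparing allocated versus in-use permanent memory shows how much space the rows actually consume:

    -- Run in ttIsql against the TimesTen datastore; see the docs for the units.
    SELECT perm_allocated_size, perm_in_use_size, perm_in_use_high_water
    FROM   sys.monitor;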
I have a problem executing the oracleCommand.ExecuteReader() method. Whatever I try, it always returns null and won't create the OracleDataReader object. I'm using ODAC 1120320_x64 for .NET 4.0 and timesten112241.win64, and I am not sure what to do. The debugger shows something strange in the OracleConnection object: ConnectionState = Closed, yet the output of ttStatus shows a connection to the TimesTen data store, and ExecuteNonQuery() works just fine with or without (in or out) parameters. But when I try to execute a query with multi-row output, such as SELECT *, I can't get any result.
I also have a strange problem with connection.Open(): when I execute Open(), it throws an AccessViolationException that can be handled with the [HandleProcessCorruptedStateExceptions] attribute, but the connection is established after that and my application works fine until I try to instantiate an OracleDataReader object.
Here is the code:

OracleCommand select = null;
OracleDataReader reader = null;

select = new OracleCommand(selectStmt, connection);
select.CommandType = CommandType.Text;
try
{
    reader = select.ExecuteReader(); // this line throws NullReferenceException
    if (reader.HasRows)
    {
[code]....
Just to mention, I tried it with different queries (PL/SQL, plain SQL, stored procedures) and all of them work fine in SQL Developer, but not in the app.
I want to create a group xyz, add some users to the xyz group, and grant/revoke permissions on xyz, so that all the users in that group have the same permissions as the xyz group. That way, instead of granting permissions to each user individually, I can grant them all at once.
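A hedged sketch of the usual way to do this in Oracle, using a role as the "group" (object and user names are hypothetical):

    CREATE ROLE xyz;
    GRANT SELECT, INSERT ON app.orders TO xyz;   -- grant privileges to the role
    GRANT xyz TO alice, bob;                     -- add users to the "group"
    REVOKE INSERT ON app.orders FROM xyz;        -- change applies to all members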
I am using Oracle 10.2.4.0. I have a requirement to divide a set of records into certain groups, so that they can be processed in parts rather than in one run.
So in the SELECT clause itself I want to assign a particular value (say 1) to the first 50,000 records, then another value (say 2) to the next 10,000, and so on. The total record count also varies from time to time; if the total is less than 10,000, it should assign 1 to all the records. I will expose the group value (1, 2, 3, ...) as another column.
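A hedged sketch (table name and ordering column are hypothetical, bucket size 10,000 as described): number the rows and divide them into fixed-size buckets, which automatically yields group 1 for everything when fewer than 10,000 rows exist:

    SELECT t.*,
           CEIL(ROW_NUMBER() OVER (ORDER BY t.id) / 10000) AS grp
    FROM   my_table t;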
I have a situation where very little redo is generated, let us say 10 MB. Which solution will be better?
1. Create one redo log group, about 12 MB in size.
2. Create two redo log groups, about 5 MB each in size, as recommended by Oracle.
Solution 1 also seems appropriate for me because less redo is generated than the redo log group size: my whole redo will fit in it, and I can force a checkpoint after a certain period of time, let us say every 3 seconds.
In one of our DBs I found that scenario 1 is implemented. So I want to know the pros and cons of both of these practices.
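For reference, a sketch of option 2 (file paths, group numbers and sizes are placeholders); note that Oracle requires at least two redo log groups per thread, which already argues against option 1:

    ALTER DATABASE ADD LOGFILE GROUP 1 ('/u01/oradata/db/redo01.log') SIZE 5M;
    ALTER DATABASE ADD LOGFILE GROUP 2 ('/u01/oradata/db/redo02.log') SIZE 5M;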
Physical memory: 420 GB. My database version: 11.2.0.3, running on a Linux machine.
MEMORY_TARGET = 200G. I would like to allocate this amount across the individual SGA components; I don't want automatic memory management enabled. How should I split the 200 GB among the components? Is there a recommended percentage for each component?
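A hedged sketch of manual sizing (the values are placeholders to adapt, not recommendations): disable AMM by zeroing MEMORY_TARGET and set the components explicitly, then restart the instance:

    ALTER SYSTEM SET memory_target = 0 SCOPE = SPFILE;
    ALTER SYSTEM SET sga_target = 150G SCOPE = SPFILE;            -- overall SGA
    ALTER SYSTEM SET pga_aggregate_target = 50G SCOPE = SPFILE;
    ALTER SYSTEM SET db_cache_size = 100G SCOPE = SPFILE;         -- minimum floor
    ALTER SYSTEM SET shared_pool_size = 30G SCOPE = SPFILE;       -- minimum floor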
In Oracle 11gR2, I created a replica of the HR.Employees table and executed the following statement (although using the SUM() function is not meaningful in this case, it is just to test the result):
STEP - 1
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
FROM HR.Employees_copy
WHERE department_id = 20
GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME  SUM(SALARY)
----------- ---------- ---------- -----------
        202 Pat        Fay               6000
        201 Michael    Hartstein        13000
Elapsed: 00:00:00.01
Execution Plan
----------------------------------------------------------
Plan hash value: 3837552314

---------------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     2 |   130 |     4  (25)| 00:00:01 |
|   1 |  RESULT CACHE       | 3acbj133x8qkq8f8m7zm0br3mu |       |       |            |          |
|   2 |   HASH GROUP BY     |                            |     2 |   130 |     4  (25)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| EMPLOYEES_COPY             |     2 |   130 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        690  bytes sent via SQL*Net to client
        416  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          2  rows processed
STEP - 2
INSERT INTO HR.employees_copy VALUES(200, 'Dummy', 'User','Dummy.User@email.com',NULL, sysdate, 'MANAGER',5000, NULL,NULL,20);
STEP - 3
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary) FROM HR.Employees_copy WHERE department_id = 20 GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME  SUM(SALARY)
----------- ---------- ---------- -----------
        202 Pat        Fay               6000
        201 Michael    Hartstein        13000
        200 Dummy      User              5000
Elapsed: 00:00:00.03
Execution Plan
----------------------------------------------------------
Plan hash value: 3837552314
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        714  bytes sent via SQL*Net to client
        416  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          3  rows processed
In the execution plan of STEP-3, the RESULT CACHE operation is shown against Id 1, which indicates the result was retrieved directly from the result cache. Does this mean that the Oracle server has incrementally refreshed the result set?
Before the execution of STEP-2 the cache contained only 2 records; then 1 record was inserted, and after STEP-3 a total of 3 records were returned from the cache. Does this mean that the newly inserted row was retrieved from the database and merged into the cached result of STEP-1?
If the Oracle server has incrementally retrieved and merged the newly inserted record, what mechanism does Oracle use to do so?
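One way to investigate, as a hedged sketch using the standard V$RESULT_CACHE_OBJECTS view: check whether the STEP-1 result was invalidated by the DML, which would mean STEP-3 rebuilt the cache entry rather than merging the new row into it:

    SELECT id, type, status, name
    FROM   v$result_cache_objects
    WHERE  name LIKE '%EMPLOYEES_COPY%';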