SQL & PL/SQL :: Implement Word Wrap Feature In Code
Jan 24, 2012
I want to implement a word-wrap feature in code.
The requirement is to format the given multi-line text into a specified number of lines, each line holding a specified number of characters. Words should not be broken; instead, a word that does not fit should move to the next line, provided lines are still available.
The function could be of the sort:
CREATE OR REPLACE FUNCTION wrap_text(p_text VARCHAR2
                                    ,p_len  NUMBER  -- no. of lines
                                    ,p_chr  NUMBER  -- no. of characters per line
                                    )
RETURN VARCHAR2;
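For what it's worth, here is a minimal sketch of such a function. It assumes words are separated by whitespace and that any text beyond p_len lines is simply dropped (an assumption, since the requirement does not say what to do with overflow):

CREATE OR REPLACE FUNCTION wrap_text(p_text VARCHAR2
                                    ,p_len  NUMBER  -- no. of lines
                                    ,p_chr  NUMBER  -- no. of characters per line
                                    )
  RETURN VARCHAR2
IS
  -- normalize line breaks and tabs to spaces, then consume the text word by word
  l_rest   VARCHAR2(32767) := LTRIM(REPLACE(REPLACE(REPLACE(p_text, CHR(13), ' '), CHR(10), ' '), CHR(9), ' '));
  l_word   VARCHAR2(32767);
  l_line   VARCHAR2(32767);
  l_result VARCHAR2(32767);
  l_lines  PLS_INTEGER := 1;
BEGIN
  WHILE l_rest IS NOT NULL AND l_lines <= p_len LOOP
    l_word := REGEXP_SUBSTR(l_rest, '^\S+');            -- next word
    l_rest := LTRIM(SUBSTR(l_rest, LENGTH(l_word) + 1)); -- remainder of the text
    IF l_line IS NULL THEN
      l_line := l_word;                                  -- first word on the current line
    ELSIF LENGTH(l_line) + 1 + LENGTH(l_word) <= p_chr THEN
      l_line := l_line || ' ' || l_word;                 -- word still fits on this line
    ELSE
      l_result := l_result || l_line || CHR(10);         -- close this line, start the next
      l_lines  := l_lines + 1;
      l_line   := l_word;
    END IF;
  END LOOP;
  IF l_lines <= p_len THEN
    l_result := l_result || l_line;                      -- append the last (partial) line
  ELSE
    l_result := RTRIM(l_result, CHR(10));                -- line limit reached: drop the rest
  END IF;
  RETURN l_result;
END wrap_text;
/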
Is there a way to wrap PL/SQL code so that other people cannot unwrap it? I came across a site that successfully unwrapped my wrapped PL/SQL code. If wrapped PL/SQL code can simply be unwrapped, then what is the use of wrapping?
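For context, the built-in facilities are the wrap command-line utility and DBMS_DDL; a minimal sketch of the latter follows (the procedure text is a placeholder). Wrapping is obfuscation rather than encryption, so it does not by itself prevent determined unwrapping:

BEGIN
  -- compile the unit directly from its source so only the wrapped form is stored
  DBMS_DDL.CREATE_WRAPPED(
    'CREATE OR REPLACE PROCEDURE demo_proc IS BEGIN NULL; END demo_proc;'
  );
END;
/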
I have no knowledge about barcodes. The problem concerns loyalty cards issued by a hotel and restaurant to various customers; these cards will be presented by the customers from time to time at the hotel as well as the restaurant. The owner wants to generate a separate barcode for each card so that when a card is presented, the barcode reader reads the code and the system calculates the amount of discount/rebate. If the data entry operator instead entered the card code through the keyboard, there would be a chance of leakage or misuse of that card.
I have to compare my SVN source code (packages, views, etc.) with the production code in the database. Actually, we are not sure that what we have in SVN is the final version of the production code: the objects exist in the production database, but we don't have the latest scripts for them, and we have to deploy the SVN code on the UNIX box.
So here the comparison is between the OS files and the database objects.
I thought I would extract the scripts for all the packages, views, etc. from the production database using DBMS_METADATA or some other utility, save the code to OS files, and then compare each SVN file with the corresponding OS file manually using a comparison tool (e.g. Toad provides one).
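A minimal sketch of the DBMS_METADATA extraction step in SQL*Plus (the spool file name and the object-type list are illustrative, not from the original post):

SET LONG 1000000
SET PAGESIZE 0
SET LINESIZE 200
SET TRIMSPOOL ON
SPOOL prod_ddl.sql

-- REPLACE maps dictionary type names such as PACKAGE BODY to the
-- PACKAGE_BODY form that DBMS_METADATA.GET_DDL expects
SELECT DBMS_METADATA.GET_DDL(REPLACE(object_type, ' ', '_'), object_name)
  FROM user_objects
 WHERE object_type IN ('PACKAGE', 'PACKAGE BODY', 'VIEW')
 ORDER BY object_type, object_name;

SPOOL OFF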
I downloaded Oracle SQL Developer and typed my code into a worksheet, but when I use the Run Statement option it asks me to make a connection. I don't want to make a connection, just test the data locally. However, even if I do try to make a connection, I get an ORA-12560 error (local connection).
I just want to type in some data, create some tables, and test retrieving or manipulating the data. I'll use any program, command line or GUI.
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
Is anyone using Oracle 11g's compression feature in production? I haven't read anything negative yet, but that doesn't mean there isn't anything that could have an adverse effect. I wanted to check whether there are any effects on performance or any other disadvantages of using this compression feature. I have tested it on one of my major tablespaces and did see a big reduction in the tablespace size, but I am still hesitant to put it into production.
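For context, a minimal sketch of enabling the feature under discussion (11gR2 syntax; the table, index, and tablespace names are placeholders):

-- enable OLTP compression on an existing table
ALTER TABLE sales_history MOVE COMPRESS FOR OLTP;

-- MOVE leaves the table's indexes UNUSABLE, so rebuild them afterwards
ALTER INDEX sales_history_pk REBUILD;

-- or make compression the default for new segments in a tablespace
ALTER TABLESPACE reporting_data DEFAULT COMPRESS FOR OLTP;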
I used the Exchange Partition feature to swap segments between 2 tables - one partitioned and one non-partitioned. The exchange went well. However, all the data in the partitioned table has gone into the partition that holds the MAXVALUE bound.
/** actual table names changed due to client confidentiality issues */
-- Drop the 2 intermediate tables if they already exist
drop table ordered_inv_bkp cascade constraints;
drop table ordered_inv_t cascade constraints;
/**
1st create a Non-Partitioned Table from ORDERED_INV and then add the primary key and unique index(s):
*/
create table ordered_inv_bkp as select * from ordered_inv;

alter table ordered_inv_bkp add constraint ordinvb_pk primary key (ordinv_id);

-- create unique index ordinv_scinv_uix on ordered_inv_bkp( SCP_ID ASC,
[code]....
-- Next, we have to create a partitioned table ORDERED_INV_T with a similar
-- structure as ORDERED_INV.
-- This is a bit tricky and involves some PL/SQL code
-- Add section to set default values for the intermediate table OL_ORDERED_INV_T
FOR crec_cols IN ( SELECT u.column_name, u.nullable, u.data_default, u.table_name
                     FROM user_tab_columns u
                    WHERE u.table_name = 'ORDERED_INV'
                      AND u.data_default IS NOT NULL )
LOOP
[code]....
-- Next, use exchange partition for the actual swap
-- Between ordered_inv_t and ordered_inv_bkp
-- Analyze both tables : ordered_inv_t and ordered_inv_bkp
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME => 'ORDERED_INV_T');
  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME => 'ORDERED_INV_BKP');
END;
/
SET TIMING ON;
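For reference, a minimal sketch of the exchange syntax under discussion (the partition name is a placeholder, and the optional clauses shown are not necessarily the ones the original script used):

ALTER TABLE ordered_inv_t
  EXCHANGE PARTITION ordinv_p1          -- placeholder partition name
  WITH TABLE ordered_inv_bkp
  INCLUDING INDEXES
  WITHOUT VALIDATION;

Note that an exchange swaps the whole non-partitioned segment into the single partition named in the statement, so if that partition is the one bounded by MAXVALUE (or WITHOUT VALIDATION is used against a partition whose bounds the rows do not satisfy), all of the exchanged rows will end up there.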
I need to exclude a single schema from the autostats gathering feature in 11g. The tables in this schema are analyzed at the appropriate time via the application code. The autostats gathering job sometimes kicks in at a time when the tables are being updated or loaded, which can skew explain plans during the updates/inserts.
I've searched through the Oracle documentation and cannot find a way to simply "exclude" the schema without locking it. I see it is possible to disable autostats at the entire database level, but not at the schema level.
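One commonly suggested workaround (a sketch only, and it does involve locking statistics, which may or may not be acceptable here) is to lock the schema's stats so the maintenance job skips its tables, and have the application's own gathering override the lock with FORCE => TRUE; the schema and table names are placeholders:

-- stop the automatic job from touching this schema's objects
BEGIN
  DBMS_STATS.LOCK_SCHEMA_STATS('APP_SCHEMA');   -- placeholder schema name
END;
/

-- the application's own gathering can override the lock with FORCE => TRUE
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'APP_SCHEMA',
                                TABNAME => 'APP_TABLE',   -- placeholder table name
                                FORCE   => TRUE);
END;
/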
I have Oracle 9i Release 2 on my DB server. I am getting the following error every time I try to create a bitmap index:
ORA-00439: feature not enabled: Bit-mapped indexes
I have queried the v$option view. There, the value of the parameter "Bit-mapped indexes" is FALSE.
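A quick sketch of that check:

SELECT parameter, value
  FROM v$option
 WHERE parameter = 'Bit-mapped indexes';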
The result of v$version is:
Oracle9i Release 9.2.0.1.0 - 64bit Production
PL/SQL Release 9.2.0.1.0 - Production
CORE 9.2.0.1.0 Production
TNS for Solaris: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production
Actually, when we created the database our installation was halted, so we created the database manually using the CREATE DATABASE command.
We have a 10g physical standby set up in our environment and we are migrating to 11g now. We want to use the Active Data Guard feature of 11g to run live reports on the standby rather than on production. The questions I have are:
1) In our current 10g standby environment we use db_name=cusms, which exactly matches the production database name, and I don't see us using db_unique_name on the standby. But I have read several blogs where everyone talks about using db_unique_name on the standby while db_name exactly matches production on 11g. I wanted to know: is db_unique_name a new requirement on 11g? Can I go ahead without db_unique_name and just have db_name exactly matching production? What are the implications of doing so? The reason we want to stick to this is that, in case of failover, we want the database name to stay the same. But I want to hear your thoughts on this.
2) While building the standby, I noticed a few things and want your clarification:
a. On the standby database, should I mount the instance using a pfile or an spfile, or does it not matter?
b. Let's say I use either an spfile or a pfile: can I just have db_unique_name in that file, start the instance in NOMOUNT, and run the duplicate from RMAN?
c. As soon as my DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE finishes, I usually exit the RMAN session, go to SQL*Plus, and shut down the standby instance. (Is this OK to do?)
d. Then I start the standby instance with STARTUP (mount and open the database); this should open the standby database in read-only mode. Following that, I issue ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT to put the database in recovery mode. (Any steps missing here?)
e. Then I go to the primary, do a few log switches, and come back to the standby to see whether the primary changes have moved to the standby or not.
But what I have observed is:
a. The duplicate runs successfully, but during the course of the duplicate the primary generates a few archives that are not shipped to or applied on the standby. When I go to the standby to recover the database, it says media recovery is needed and asks for the archive files, and I have to move those files from primary to standby manually to apply them. Isn't this taken care of automatically?
b. I also noticed that I cannot open the standby database in read-only mode after the duplicate command. When I try to open it, it says database media recovery is needed. What is the best procedure to open the database in read-only mode immediately?
c. In my standby init.ora, if I use db_unique_name, where would my control file be placed? Will Oracle create a control file from the primary, put it on my standby database, and update an entry in my pfile or spfile?
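For reference, a minimal sketch of the standby-side parameters being discussed; all names, paths, and TNS aliases below are placeholders, and the exact parameter set depends on the environment. In particular, FAL_SERVER together with a running managed recovery process is what lets the standby fetch archive-log gaps automatically (point a above):

# minimal standby init.ora sketch -- all names and paths are placeholders
db_name                 = cusms
db_unique_name          = cusms_stby
control_files           = '/u01/oradata/cusms_stby/control01.ctl'
log_archive_config      = 'DG_CONFIG=(cusms,cusms_stby)'
fal_server              = cusms_prim     # TNS alias of the primary, used for gap fetching
standby_file_management = AUTO

and then, from RMAN connected to both the target (primary) and auxiliary (standby) instances:

DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;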
What is the best practice for indexing: global indexes or local indexes? I would like to implement one of them on an object that has been partitioned horizontally, and I don't know exactly what to make of it.
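For illustration, a minimal sketch of the two kinds of index on a range-partitioned table (the table, column, index, and partition names are placeholders):

-- a LOCAL index is equipartitioned with the table: one index partition per table partition
CREATE INDEX sales_cust_lix ON sales (customer_id) LOCAL;

-- a GLOBAL index has its own partitioning (or none), independent of the table's partitions
CREATE INDEX sales_amt_gix ON sales (amount_sold)
  GLOBAL PARTITION BY RANGE (amount_sold)
  ( PARTITION p_low  VALUES LESS THAN (1000),
    PARTITION p_rest VALUES LESS THAN (MAXVALUE) );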
We have an implementation of non-RAC (a single instance using an existing RAC ASM as storage); the details are below.
The client has a Real Application Clusters configuration on an AIX server in their data center, and they want to implement a single-instance database that will use ASM as storage. The disk they want to use is the same disk, or a mirror copy of the disk, used by their RAC database.
Scenario:
- The AIX server they have uses one-way hardware mirroring (PPRC) only, and it is not designed to run 24/7.
- Data Guard is not an option.
CREATE OR REPLACE TRIGGER TRI_ABOVE_JOINTBOX
  BEFORE UPDATE ON JOINT_BOX
  FOR EACH ROW
DECLARE
  PRAGMA autonomous_transaction;
BEGIN
  IF (:NEW.SCALE_X <> :OLD.SCALE_X) OR
     (:NEW.SCALE_Y <> :OLD.SCALE_Y) THEN
[code]....
The above code is for a GIS project. When I implement the above trigger, the joint box (which is a point feature in the designed database) has its scale fixed to 1, as written in the code, but it cannot be moved in the DGN (front end); this is because the trigger fires before the update.
The actual intention is that the feature (joint box) should first move in the DGN, and then the trigger should fire so that the scale is fixed back to one even after the change. For that I changed the above code to an AFTER UPDATE trigger, but then it throws the error
ORA-04084: cannot change NEW values for this trigger type. I guess this is because an AFTER UPDATE trigger cannot modify the :OLD and :NEW bind variables.
The requirements are:
1. The joint box can move in the DGN (this would happen automatically with an AFTER UPDATE trigger).
2. After dragging in the DGN, the scale should be fixed back to 1 (see the sketch after this list).
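A minimal sketch of one way to meet both requirements with a BEFORE UPDATE row trigger (which, unlike an AFTER trigger, may modify :NEW): leave the position columns alone and only force the scale columns back to 1. The autonomous transaction from the original code is dropped here on the assumption it is not needed for this logic:

CREATE OR REPLACE TRIGGER tri_above_jointbox
  BEFORE UPDATE ON joint_box
  FOR EACH ROW
BEGIN
  -- positional columns in :NEW are left untouched, so the feature can still move in the DGN;
  -- only the scale is forced back to 1 whenever it changes
  IF :NEW.scale_x <> :OLD.scale_x THEN
    :NEW.scale_x := 1;
  END IF;
  IF :NEW.scale_y <> :OLD.scale_y THEN
    :NEW.scale_y := 1;
  END IF;
END;
/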
The input file layout is:
- position 1 - M or P: indicates which table to insert into: MASTER or PARENT;
- position 2 - M, A or B: indicates MASTER, PARENT_A or PARENT_B;
- positions 3:18 - DATA.
Based on the values above, what I need to do is:
1. Load a line into MASTER_TABLE;
2. Load a line into PARENT_TABLE_A pointing to its corresponding line in MASTER_TABLE;
3. Load a line into PARENT_TABLE_B pointing to its corresponding line in MASTER_TABLE;
4. In the original file line, there is nothing I can use to join a MASTER line with a PARENT line.
The result would be:
MASTER_ID  PARENT_DATA
1          PARENT_TABLE_A1
1          PARENT_TABLE_B1
2          PARENT_TABLE_A2
2          PARENT_TABLE_B2
I tried to use both SEQUENCE and sequence.NEXTVAL (CURRVAL), but they only work when using ROWS=1, and the file I need to load has millions of rows, so I need direct path loading. Also, I read about external tables, but they do not suit my needs because the application server is not the same machine as the database server, which external tables require.
Is it better in this case to load the data into a staging table and then insert into the other tables? I found almost the same question in the topic pointed to by the link below: URL....
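A minimal sketch of the staging-table approach (all table and column names are placeholders; it assumes each master line in the file is immediately followed by its parent lines, so the file line number can be used to associate them):

-- staging table filled by SQL*Loader direct path; LINE_NO would come from RECNUM in the control file
CREATE TABLE load_stage (
  line_no   NUMBER,        -- physical line number in the file
  rec_type  VARCHAR2(1),   -- position 1: M or P
  sub_type  VARCHAR2(1),   -- position 2: M, A or B
  data_txt  VARCHAR2(16)   -- positions 3:18
);

-- carry the line number of the most recent 'M' line down to the 'P' lines that follow it,
-- derive a surrogate MASTER_ID from it, and fan the rows out to the three target tables
INSERT ALL
  WHEN rec_type = 'M' THEN
    INTO master_table   (master_id, master_data) VALUES (master_id, data_txt)
  WHEN rec_type = 'P' AND sub_type = 'A' THEN
    INTO parent_table_a (master_id, parent_data) VALUES (master_id, data_txt)
  WHEN rec_type = 'P' AND sub_type = 'B' THEN
    INTO parent_table_b (master_id, parent_data) VALUES (master_id, data_txt)
SELECT rec_type, sub_type, data_txt,
       DENSE_RANK() OVER (ORDER BY master_line) AS master_id
  FROM (SELECT rec_type, sub_type, data_txt,
               LAST_VALUE(CASE WHEN rec_type = 'M' THEN line_no END IGNORE NULLS)
                 OVER (ORDER BY line_no) AS master_line
          FROM load_stage);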
I have tried to implement an Oracle RLS policy. I have two schemas, X1 and X1_DBA.
I created the EMP table in X1_DBA (create table emp(empid number, ename varchar2(10), deptno number)) and inserted some rows into it. I then created the function below in the X1_DBA schema and granted SELECT privilege to X1.
CREATE OR REPLACE FUNCTION no_dept10( p_schema IN VARCHAR2, p_object IN VARCHAR2) RETURN VARCHAR2
[code]...
When I add the policy in the X1_DBA schema, I get the error "Table does not exist".
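For reference, a minimal sketch of the ADD_POLICY call being attempted (the policy name and statement types are illustrative). Note that object_schema and function_schema must point at the schema that actually owns EMP and the function; a mismatch there is one common cause of a "table does not exist" error at this step:

BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'X1_DBA',          -- schema that owns the table
    object_name     => 'EMP',
    policy_name     => 'NO_DEPT10_POLICY',
    function_schema => 'X1_DBA',          -- schema that owns the policy function
    policy_function => 'NO_DEPT10',
    statement_types => 'SELECT');
END;
/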
Is there a way to implement the Kerberos authentication protocol with PL/SQL? I am consuming a web service with UTL_HTTP, which implements only basic authentication, and I was able to find a PL/SQL implementation for NTLM. So I am wondering whether there is a Kerberos implementation.