I'm supposed to create a database for an application. The server where this will be running previously had a database for a fairly similar app. I don't know much about Oracle, so I reused the ZFS filesystems and left them as they were created (honestly, I didn't know why they were created that way, but I'm fairly sure it was for a good reason).
Since I reused the filesystems, I created my database and placed the control files in the same places where the old database files were (/oradata/SMART/ora1, /oradata/SMART/ora2, /oradata/SMART/ora3). Thinking in terms of how MySQL works, I created a dedicated filesystem specifically to store the database there:

app/oradata_smart_ora4  60.6K  400G  60.6K  /oradata/SMART/ora4
The database starts up and mounts with no problem. Note that this server will manage millions or billions of records over its lifetime.
1. Now that my database is created, what's the next step? Do I create the schema or the tablespace first?
2. Tablespace questions: are the tablespace's datafile(s) where the actual data from tables is stored? How many are needed? Default or temporary? How much space will I need? Autoextend?
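In case it helps, here is a minimal sketch of the usual order: tablespace first, then the user (which in Oracle IS the schema) with that tablespace as its default. All names, paths and sizes below are invented, and the autoextend figures are only a starting point:

CREATE TABLESPACE smart_data
  DATAFILE '/oradata/SMART/ora4/smart_data01.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE 30G;  -- cap growth explicitly

CREATE TEMPORARY TABLESPACE smart_temp
  TEMPFILE '/oradata/SMART/ora4/smart_temp01.dbf'
  SIZE 512M AUTOEXTEND ON NEXT 128M MAXSIZE 8G;

CREATE USER smart_app IDENTIFIED BY changeme
  DEFAULT TABLESPACE smart_data
  TEMPORARY TABLESPACE smart_temp
  QUOTA UNLIMITED ON smart_data;

GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE TO smart_app;

Given millions/billions of rows, spreading datafiles across the ora1..ora4 filesystems and deciding on a partitioning strategy early are both worth considering.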
I am trying to restore to a backup instance on a backup server. When I try to recreate the tables I keep getting ORA-01659: unable to allocate MINEXTENTS. The tablespaces and datafiles on both servers show as the same size in OEM.
I have dropped all the tables, and OEM shows the tablespaces as empty. Then I run a script to recreate all the tables. Most of the tables don't get created because their tablespace is full. After the recreate script runs, the main tablespaces are full, fuller than on the production machine. I have also tried ALTER TABLESPACE xxx COALESCE; on each tablespace, right after dropping all tables and before recreating them, to reclaim free space. Why is it full? I've only dropped and created the tables; there shouldn't be any data in them yet.
ORA-01659: unable to allocate MINEXTENTS beyond 2 in tablespace PLUS_T... The backup instance was already there; all I did was drop the tables. Here's what I ran on prod to build a script to recreate the tables on the backup server. I got it off Burleson somewhere.
SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name) ||'; ' FROM USER_TABLES u;
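One thing worth checking: DDL extracted with DBMS_METADATA includes STORAGE (INITIAL ...) clauses sized from the production segments, so every CREATE TABLE tries to grab a production-sized initial extent up front, which can exhaust a same-sized tablespace even with zero rows. A sketch of how to make the extractor omit those clauses, run in the same session before the GET_DDL query above:

BEGIN
  -- Omit the STORAGE clause from generated DDL
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,
                                    'STORAGE', FALSE);
END;
/

Setting the 'SEGMENT_ATTRIBUTES' transform to FALSE instead would also drop tablespace and other physical attributes, if the defaults on the backup side are acceptable.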
We have a vendor-supplied 11g database where records are split between two schemas -- an ACTIVE schema and an ARCHIVE schema. Each object in the ACTIVE schema has a corresponding object in the ARCHIVE schema.
The vendor also has a third schema where each object is merely a UNION ALL of the associated ACTIVE and ARCHIVE schema objects. For the sake of example, I'll call that schema COMBO. Over the years, we've created queries and reports that reference both the COMBO and ARCHIVE schemas, and that has worked just fine.
The vendor has now set up a secondary database for us that we can use when the primary database is offline for patching/upgrades/etc. The trouble is, this secondary database only has the ACTIVE schema and records. The vendor will not be writing any ARCHIVE records to it.
Primary DB: ACTIVE, ARCHIVE, and COMBO schemas
Secondary DB: only the ACTIVE schema
Is there a way to set up the missing ARCHIVE and COMBO schemas on the secondary DB so that we won't have to rewrite our SQL to accommodate the lack of an ARCHIVE schema when we move reports over to the backup database?
Of course, no records would need to be returned from the virtual ARCHIVE schema; I'd just love for the untouched SQL to run without error.
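One approach that might satisfy this, sketched under the assumption that object names match one-for-one (the schema and table names here are invented): create ARCHIVE and COMBO as ordinary users on the secondary DB, stub each ARCHIVE object out as a view over its ACTIVE counterpart that can never return rows, and let each COMBO view pass ACTIVE straight through:

-- Empty stand-in: same columns as ACTIVE, guaranteed zero rows
CREATE VIEW archive.orders AS
  SELECT * FROM active.orders WHERE 1 = 0;

-- COMBO degenerates to ACTIVE alone, since there is no archive data
CREATE VIEW combo.orders AS
  SELECT * FROM active.orders;

Repeated per table (the statements could be generated from DBA_TABLES), this should let the existing SQL parse and run unchanged.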
I need to create a structure DATABASE => SCHEMA => TABLE, as in DB => SC => EMPLOYEE. But after connecting to the database I could only create tables in my own user schema. I want to create a new schema called SC, make it publicly accessible, and create a table in it.
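In Oracle a schema exists only as the set of objects owned by a user, so the usual pattern is to create a user named SC and create the table inside it. A sketch (password, tablespace, and column definitions are placeholders):

-- Run as a privileged account (e.g. SYSTEM)
CREATE USER sc IDENTIFIED BY some_password
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE TO sc;

CREATE TABLE sc.employee (
  emp_id   NUMBER PRIMARY KEY,
  emp_name VARCHAR2(100)
);

-- Optional: let everyone reference it without the SC. prefix
GRANT SELECT ON sc.employee TO PUBLIC;
CREATE PUBLIC SYNONYM employee FOR sc.employee;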
I have a SQL Server database I would like to migrate to Oracle. The database supports a large application and is around 10 GB. I requested a new instance but was advised I would have to pay for that, whereas if I asked for a new schema it could go in our current company instance. I am fine with that, since it won't cost more money to just add a new schema to our company Oracle instance. Just curious: what is the advantage of getting a new instance compared to creating a new schema for 10 GB of data? I assume the advantage of a new instance is that our schema and work would have its own space/house and could grow in size without any issues?
I need to create a shell script that finds the free space of autoextensible tablespaces and sends an alert when the free space is < 700 MB. I tried checking dba_free_space, but I did not get the exact free space.
What is the logic to find the exact free space of autoextend-enabled tablespaces?
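The catch is that dba_free_space only counts free space inside the currently allocated datafile size; for autoextensible files, the headroom up to MAXSIZE has to be added from dba_data_files. A sketch of a query the script could wrap (the 700 MB threshold is hard-coded here):

SELECT d.tablespace_name,
       ROUND((SUM(d.maxbytes - d.bytes)        -- room left to autoextend
            + NVL(SUM(f.free_bytes), 0))       -- free space already allocated
            / 1024 / 1024) AS free_mb
FROM   dba_data_files d
LEFT JOIN (SELECT file_id, SUM(bytes) AS free_bytes
           FROM   dba_free_space
           GROUP  BY file_id) f
  ON   f.file_id = d.file_id
WHERE  d.autoextensible = 'YES'
GROUP  BY d.tablespace_name
HAVING (SUM(d.maxbytes - d.bytes) + NVL(SUM(f.free_bytes), 0))
       < 700 * 1024 * 1024;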
I have a list of materialized views in schema A. I want to create a refresh group and then refresh it from schema B (dynamically, at run time, based on some criteria). What grants are necessary on schema B for it to be able to create and refresh groups on the materialized views in schema A?
I know that one of the options is GRANT ALTER ANY MATERIALIZED VIEW as a SYS user, but I do not have any SYS privileges.
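One workaround that avoids the ANY privilege entirely, sketched with invented object names: have schema A expose a definer-rights procedure that performs the refresh, and grant schema B only EXECUTE on it. B then drives the schedule while the privileges stay with A:

-- As schema A (the MV owner); PL/SQL procedures are definer-rights by default
CREATE OR REPLACE PROCEDURE refresh_a_mviews IS
BEGIN
  DBMS_MVIEW.REFRESH(list => 'A.MV_ONE,A.MV_TWO', method => 'C');
END refresh_a_mviews;
/
GRANT EXECUTE ON refresh_a_mviews TO b;

-- As schema B, whenever the criteria are met:
BEGIN
  a.refresh_a_mviews;
END;
/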
I am creating a table from an existing table in another schema. The existing table contains data. When I use the query CREATE TABLE m_voucher AS SELECT * FROM ipm.m_voucher, I get the whole data of m_voucher, but I want an empty m_voucher table. What would the query be to get an empty m_voucher table?
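A common way to do this is to keep the CTAS but add a predicate that can never be true, so the structure is copied without rows:

-- Copies column definitions only; WHERE 1 = 0 filters out every row
CREATE TABLE m_voucher AS
SELECT * FROM ipm.m_voucher
WHERE 1 = 0;

Note that CTAS does not carry over indexes, triggers, or constraints other than NOT NULL; those would need to be created separately.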
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01": mvanmannekes/******** schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
On my Oracle 11gR2 (11.2.0.3.0) instance I have two tablespaces that I want to "bundle" into a single tablespace. Here is the problem: some of the tables in the two tablespaces have the same name, but some rows of those tables may not be the same.
Is it possible, with some kind of migration assistant, to merge the two tablespaces into one in such a way that the assistant only writes into the new tablespace those rows that are not already there?
Another way I was thinking about is an insert statement coupled with a select statement: the select statement selects all the rows that are not yet in the target table, and the insert statement puts those selected rows into the new table, roughly as sketched below.
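A sketch of that insert/select idea, assuming the two same-named tables have identical column lists (schema and table names are invented). MINUS compares entire rows, so only rows absent from the target get inserted:

INSERT INTO new_schema.t1
SELECT * FROM old_schema.t1
MINUS
SELECT * FROM new_schema.t1;

Caveats: set operators treat NULLs as equal for this comparison, they cannot handle LOB columns, and this will not help if "duplicate" rows differ only in a surrogate key.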
I've got one database which was initially upgraded from Oracle 8i to 10.2.0.4, running on Windows. Most of the tablespaces are dictionary managed. Do you think moving them to locally managed tablespaces would give me better performance?
If yes, what approach should I apply to move them to locally managed? I would like to do this with minimum/no downtime.
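For reference, there is an in-place migration API, though a common alternative is to create new locally managed tablespaces and move/rebuild segments into them. A minimal sketch of the in-place route (tablespace name invented; SYSTEM itself has extra prerequisites and deserves separate reading):

-- Run as a suitably privileged account
BEGIN
  DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('APP_DATA');
END;
/

Note that a migrated tablespace becomes locally managed with user-managed extent sizes rather than autoallocate, so it does not gain all the benefits of a freshly created LMT.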
I have a Physical Standby database on server#2 which has its UNDO and TEMP tablespaces in a mount point that is a bit small. I discovered yesterday that this mount point was completely filled. This is not an immediate problem, because even though the TEMP and UNDO tablespaces are large, there is almost no usage of them.
I logged into my Primary and created a new temp tablespace (TEMP2), made it the database default, dropped the TEMP tablespace, and then re-created the TEMP tablespace in a different mount point (with much higher capacity!). I then made the new TEMP the database default temp tablespace again. I expected these changes to be propagated to the Physical Standby via log application, but they didn't come over.
I have standby_file_management set to AUTO, and there are no unapplied logs. Everything OTHER than the temp tablespaces seems to be in sync.
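If it helps anyone: tempfiles are not written through redo, so standby_file_management=AUTO does not create them on the standby; only the tablespace definition comes across. A sketch of the usual manual fix on the standby (path and sizes invented):

-- On the standby, after the CREATE TABLESPACE has arrived via redo apply
ALTER TABLESPACE temp2
  ADD TEMPFILE '/u02/oradata/STDBY/temp2_01.dbf'
  SIZE 4G AUTOEXTEND ON NEXT 256M MAXSIZE 16G;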
PARTITION t1p1 VALUES LESS THAN (TO_DATE('2011-11-01', 'YYYY-MM-DD')),
PARTITION t1p2 VALUES LESS THAN (TO_DATE('2011-11-02', 'YYYY-MM-DD')),
....
PARTITION t1p4 VALUES LESS THAN (MAXVALUE)
Partitions will be added every year for the next 12 months. A table partition will be dropped every month (I have to keep data from the last six months, so in July I could drop partition t1p1, in August t1p2, and so on). How many tablespaces should I create for this table, and how should I place partitions in them, to keep the last six months of data while using the minimum disk space?
I was thinking about one tablespace for the whole table, because the space of each dropped partition will be reused. What do you think about that? The monthly maintenance would look like the sketch below.
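For concreteness, under the one-tablespace idea the monthly job is just a drop (table and partition names follow the example above); in a locally managed tablespace the freed extents are immediately reusable by the next partition:

-- Drops the oldest month; UPDATE INDEXES keeps global indexes usable
ALTER TABLE t1 DROP PARTITION t1p1 UPDATE INDEXES;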
Is the tablespace actually offline when doing a user-managed hot backup? I know the changed data blocks are written into the redo stream, but I am not sure whether that means the tablespace is actually online or offline during the backup.
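For context, the user-managed hot backup sequence keeps the tablespace fully online and writable the whole time; BEGIN BACKUP freezes the datafile checkpoint and triggers extra block logging, nothing more. A sketch (tablespace name and paths invented):

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the datafiles with OS tools while the tablespace stays online, e.g.
--   cp /oradata/DB/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;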
I just want to know what precautionary measures to take when tablespaces in a database are in autoextend mode. I'm wondering what happens when these tablespaces reach their maximum sizes.
In our case, we are administering a database (turned over by our outsourcer after two years of maintenance) with an SAP interface, and we noticed that most of its tablespaces were created with an initial size of 2 GB up to a maximum size of 10 GB, all of them autoextensible.
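A simple precaution is to monitor how close each autoextensible tablespace is to its hard ceiling rather than to its current allocation. A sketch of such a check (alert thresholds are a matter of taste):

SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS allocated_mb,
       ROUND(SUM(CASE WHEN autoextensible = 'YES'
                      THEN maxbytes ELSE bytes END) / 1024 / 1024) AS ceiling_mb
FROM   dba_data_files
GROUP  BY tablespace_name
ORDER  BY tablespace_name;

Once allocated_mb approaches ceiling_mb, the options are the usual ones: raise MAXSIZE, add a datafile, or clean up data.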
I am using EXPDP to export a schema (Oracle 11g R2), and I need to exclude all the tablespaces that the schema is using. I have seen how to exclude Oracle objects like functions, tables, packages, indexes, etc., but I have not seen how to exclude tablespaces. Is it possible to exclude tablespaces while creating the export dump?
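If the concern is just that the dump might recreate tablespaces on import, a schema-mode export does not carry tablespace definitions in the first place; the EXCLUDE=TABLESPACE filter is relevant to full-mode exports. A sketch with invented credentials, directory, and file names:

expdp system/manager full=y exclude=tablespace directory=DP_DIR dumpfile=full_no_ts.dmp logfile=full_no_ts.log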
I have a 10g Express system running with 2 tablespaces in production. When taking a backup, it terminates unsuccessfully, saying system01.dbf is damaged. The application works fine and no data loss is visible through the application interface.
So can I shift the data to a new server using the dbf files of the tablespaces in use?
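Before shifting anything, it may be worth confirming the corruption with DBVERIFY, which can scan a datafile while the database is running (the path below is invented):

dbv file=/usr/lib/oracle/xe/oradata/XE/system01.dbf blocksize=8192

Also note that copying .dbf files to a new server only yields a consistent database if it includes ALL datafiles (SYSTEM cannot be left behind), the control files, and a clean shutdown first; with a damaged system01.dbf, a logical export (exp or expdp) of the application schemas is likely the safer route.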
I am trying to find the space occupied on disk by the tablespaces of the database that contain tables, some (and not all) of whose columns are encrypted. My query is like this:
SELECT DISTINCT a.tablespace_name,
       file_name,
       bytes / (1024 * 1024 * 1024) AS file_size_in_gb
FROM   dba_data_files a,
       dba_tables b,
       (SELECT DISTINCT owner, table_name FROM dba_encrypted_columns) c
WHERE  a.tablespace_name = b.tablespace_name
AND    b.owner = c.owner
AND    b.table_name = c.table_name
ORDER  BY a.tablespace_name;
The output of the query is as shown in the attached file.
Since the output (under the heading "Total Size of the tablespace") is probably the sum over all the datafiles returned by the query, and is obviously incorrect, I have not included the rest of it. I also tried the following:
SELECT DISTINCT a.tablespace_name,
       file_name,
       bytes / (1024 * 1024 * 1024) AS file_size_in_gb,
       SUM(bytes / (1024 * 1024 * 1024))
         OVER (PARTITION BY a.tablespace_name ORDER BY file_name)
         AS "Total Size of the tablespace"
FROM   dba_data_files a,
       dba_tables b,
       (SELECT DISTINCT owner, table_name FROM dba_encrypted_columns) c
WHERE  a.tablespace_name = b.tablespace_name
AND    b.owner = c.owner
AND    b.table_name = c.table_name
ORDER  BY a.tablespace_name;
Here, the figures under the heading "Total Size of the tablespace" are probably the sums over all the rows the query would return if DISTINCT were not used, i.e. over all the datafile sizes returned by the query.
How can I tune my query to get the desired results? I think this can be achieved with GROUP BY plus ROLLUP, CUBE, ORDER BY and the grouping functions, but I am not sure how to proceed. I know that I can get the results from the Enterprise Manager Console in two minutes, but I would still like to get them with queries.
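The root cause is that joining dba_data_files to dba_tables multiplies each datafile row by the number of matching tables, so any sum over the join is inflated. One sketch that avoids the fan-out by reducing the table side to a set of tablespace names first:

SELECT f.tablespace_name,
       ROUND(SUM(f.bytes) / (1024 * 1024 * 1024), 2) AS total_size_gb
FROM   dba_data_files f
WHERE  f.tablespace_name IN (
         SELECT t.tablespace_name
         FROM   dba_tables t
         JOIN   dba_encrypted_columns e
           ON   e.owner = t.owner
          AND   e.table_name = t.table_name)
GROUP  BY f.tablespace_name
ORDER  BY f.tablespace_name;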
We have a single master schema that many developers access, all sharing the same password.
Now I would like to trace all the changes made by each user, so I created individual users for everyone and granted them permission to access that schema. Is there a possibility of auditing the changes each user makes to that particular schema?
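With individual accounts in place, standard auditing can attribute DML to the logged-in user. A sketch using object-level audit options (schema and table names invented; the AUDIT_TRAIL parameter must be enabled for records to be written):

-- Repeat per table of interest in the master schema
AUDIT INSERT, UPDATE, DELETE ON master.orders BY ACCESS;

-- Later, review who did what
SELECT username, obj_name, action_name, timestamp
FROM   dba_audit_trail
WHERE  owner = 'MASTER'
ORDER  BY timestamp;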
We have an application with many separate databases (one per customer). Given that they share the same business requirements (service hours, change management, etc.), we're interested in potentially consolidating the separate DBs (which are relatively small) into separate schemas within a smaller number of databases to reduce the overhead.
Our issue is that the application is hard-coded to use a specific administrator and application connection user name. Changing this is unfortunately not an option.
Given this limitation, is there any possibility to map a generic user into a customer-specific schema based on the database service that they connect to? Each customer connects to different database services but may use the same user name. We considered using private synonyms, but this seems to achieve the opposite (i.e. many different users could connect and map to a single user's schema). One thing to point out is that where there is a single user name, it is acceptable for a single password to be used across the different customer DBs, as it will be a single admin/user.
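One pattern that may fit is an AFTER LOGON trigger that switches the session's CURRENT_SCHEMA based on the service the connection came in through; the user name stays fixed while unqualified name resolution is redirected. A sketch with invented user, service, and schema names:

CREATE OR REPLACE TRIGGER app_user_set_schema
AFTER LOGON ON DATABASE
DECLARE
  v_service VARCHAR2(64);
BEGIN
  IF USER = 'APP_USER' THEN
    v_service := SYS_CONTEXT('USERENV', 'SERVICE_NAME');
    IF v_service = 'CUST_A_SVC' THEN
      EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = CUST_A';
    ELSIF v_service = 'CUST_B_SVC' THEN
      EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = CUST_B';
    END IF;
  END IF;
END;
/

Note that CURRENT_SCHEMA only changes how unqualified names resolve; it grants no privileges, so APP_USER still needs the appropriate object grants on each customer schema.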