Basic Concepts of Amazon S3

Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.
Access control defines who can access objects and buckets within Amazon S3, and the type of access (e.g., READ and WRITE). The authentication process verifies the identity of a user who is trying to access Amazon Web Services (AWS).

Advantages of Amazon S3:

Amazon S3 is intentionally built with a minimal feature set that focuses on simplicity and robustness. The following are some of the advantages of the Amazon S3 service:
• Create buckets – Create and name a bucket that stores data. Buckets are the fundamental containers in Amazon S3 for data storage.
• Store data in buckets – Store a virtually unlimited amount of data in a bucket. Upload as many objects as you like into an Amazon S3 bucket; each object can contain up to 5 TB of data. Each object is stored and retrieved using a unique developer-assigned key (see the SDK sketch below).
• Download data – Download your data at any time, or enable others to do the same.
• Permissions – Grant or deny access to others who want to upload or download data into your Amazon S3 bucket. Grant upload and download permissions to three types of users. Authentication mechanisms can help keep data secure from unauthorized access.
• Standard interfaces – Use standards-based REST and SOAP interfaces designed to work with any Internet-development toolkit.
Note: SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.
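The operations listed above map directly onto the AWS SDKs. Below is a minimal sketch using the AWS SDK for Python (boto3); the bucket name and file names are illustrative assumptions, and it presumes AWS credentials are already configured.

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Create a bucket -- the fundamental container for data.
s3.create_bucket(Bucket="nitanshi")  # hypothetical bucket name, reused in the Buckets example below

# Store an object under a developer-assigned key.
s3.upload_file("jigi.jpg", "nitanshi", "photos/jigi.jpg")

# Download the object again.
s3.download_file("nitanshi", "photos/jigi.jpg", "jigi-copy.jpg")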

Key Concepts and Terminology:

Buckets: A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket. For example, if the object named photos/jigi.jpg is stored in the nitanshi bucket, then it is addressable using the URL http://nitanshi.s3.amazonaws.com/photos/jigi.jpg.


Buckets serve several purposes: they organize the Amazon S3 namespace at the highest level, they identify the account responsible for storage and data transfer charges, they play a role in access control, and they serve as the unit of aggregation for usage reporting. You can configure buckets so that they are created in a specific region.
You can also configure a bucket so that every time an object is added to it, Amazon S3 generates a unique version ID and assigns it to the object. 
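For example, versioning can be switched on through the SDK. A small sketch with boto3, reusing the assumed bucket name from above:

import boto3

s3 = boto3.client("s3")
# After this call, every PUT of an object into the bucket receives a unique version ID.
s3.put_bucket_versioning(
    Bucket="nitanshi",
    VersioningConfiguration={"Status": "Enabled"},
)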

Objects

Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time the object is stored. An object is uniquely identified within a bucket by a key (name) and a version ID.
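As a hedged boto3 sketch (bucket and key reused from the earlier example; the custom metadata name is an assumption), storing an object with both standard and custom metadata and reading the metadata back looks roughly like this:

import boto3

s3 = boto3.client("s3")

# Store the object with standard HTTP metadata (Content-Type) and a custom name-value pair.
with open("jigi.jpg", "rb") as f:
    s3.put_object(
        Bucket="nitanshi",
        Key="photos/jigi.jpg",
        Body=f,
        ContentType="image/jpeg",
        Metadata={"camera": "example-cam"},  # hypothetical custom metadata
    )

# Read the metadata back without downloading the data portion.
head = s3.head_object(Bucket="nitanshi", Key="photos/jigi.jpg")
print(head["ContentType"], head["LastModified"], head["Metadata"])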

Keys

A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. Because the combination of a bucket, key, and version ID uniquely identifies each object, Amazon S3 can be thought of as a basic data map between "bucket + key + version" and the object itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version.

Regions

You can choose the geographical region where Amazon S3 will store the buckets you create. You might choose a region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in a region never leave the region unless you explicitly transfer them to another region. For example, objects stored in the EU (Ireland) region never leave it.
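For example, a bucket can be pinned to the EU (Ireland) Region at creation time. A minimal boto3 sketch (the bucket name is an assumption, since bucket names are globally unique):

import boto3

# Create the client in the target Region and place the new bucket there.
s3 = boto3.client("s3", region_name="eu-west-1")
s3.create_bucket(
    Bucket="nitanshi-eu",  # hypothetical name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)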

Amazon S3 Data Consistency Model

Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to find if the object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write.
Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions.
Updates to a single key are atomic. For example, if you PUT to an existing key, a subsequent read might return the old data or the updated data, but it will never return corrupted or partial data.
Amazon S3 achieves high availability by replicating data across multiple servers within Amazon's data centers. If a PUT request is successful, your data is safely stored. However, information about the changes must replicate across Amazon S3, which can take some time, and so you might observe the following behaviors:
  • A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the change is fully propagated, the object might not appear in the list.
  • A process replaces an existing object and immediately attempts to read it. Until the change is fully propagated, Amazon S3 might return the prior data.
  • A process deletes an existing object and immediately attempts to read it. Until the deletion is fully propagated, Amazon S3 might return the deleted data.
  • A process deletes an existing object and immediately lists keys within its bucket. Until the deletion is fully propagated, Amazon S3 might list the deleted object.
Note:
1. Amazon S3 does not currently support object locking. If two PUT requests are simultaneously made to the same key, the request with the latest time stamp wins. If this is an issue, you will need to build an object-locking mechanism into your application.
2. Updates are key-based; there is no way to make atomic updates across keys. For example, you cannot make the update of one key dependent on the update of another key unless you design this functionality into your application.
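If your application needs to read an object immediately after writing it, one defensive pattern (a sketch, not the only approach; bucket and key are assumptions) is to poll with the SDK's built-in waiter before reading:

import boto3

s3 = boto3.client("s3")
s3.put_object(Bucket="nitanshi", Key="reports/today.csv", Body=b"a,b,c\n")

# Retry HEAD on the key until it succeeds, i.e. until the new object is visible.
s3.get_waiter("object_exists").wait(Bucket="nitanshi", Key="reports/today.csv")

data = s3.get_object(Bucket="nitanshi", Key="reports/today.csv")["Body"].read()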

Amazon S3 Features:

Storage Classes: Amazon S3 offers a range of storage classes designed for different use cases. These include Amazon S3 STANDARD for general-purpose storage of frequently accessed data, Amazon S3 STANDARD_IA for long-lived but less frequently accessed data, and GLACIER for long-term archive.
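The storage class is chosen per object at write time. A hedged boto3 sketch (bucket and key are assumptions):

import boto3

s3 = boto3.client("s3")
# Write a long-lived, infrequently accessed object directly to STANDARD_IA.
s3.put_object(
    Bucket="nitanshi",
    Key="archive/2017-logs.gz",
    Body=b"example payload",
    StorageClass="STANDARD_IA",
)

Objects are typically moved into GLACIER through a bucket lifecycle rule rather than written there directly.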


Bucket Policies: Bucket policies provide centralized access control to buckets and objects based on a variety of conditions, including Amazon S3 operations, requesters, resources, and aspects of the request (e.g., IP address). The policies are expressed in our access policy language and enable centralized management of permissions. The permissions attached to a bucket apply to all of the objects in that bucket.

Individuals as well as companies can use bucket policies. When companies register with Amazon S3 they create an account. Thereafter, the company becomes synonymous with the account. Accounts are financially responsible for the Amazon resources they (and their employees) create. Accounts have the power to grant bucket policy permissions and assign employees permissions based on a variety of conditions. For example, an account could create a policy that gives a user write access:
  • To a particular S3 bucket
  • From an account's corporate network
  • During business hours
An account can grant one user limited read and write access, but allow another to create and delete buckets as well. An account could allow several field offices to store their daily reports in a single bucket, allowing each office to write only to a certain set of names (e.g., "Noida/*" or "Kolka/*") and only from the office's IP address range.
Unlike access control lists, which can add (grant) permissions only on individual objects, policies can either add or deny permissions across all (or a subset) of objects within a bucket. With one request an account can set the permissions of any number of objects in a bucket. An account can use wildcards (similar to regular expression operators) on Amazon resource names (ARNs) and other values, so that an account can control access to groups of objects that begin with a common prefix or end with a given extension such as .html.
Only the bucket owner is allowed to associate a policy with a bucket. Policies, written in the access policy language, allow or deny requests based on:
  • Amazon S3 bucket operations and object operations
  • Requester
  • Conditions specified in the policy
An account can control access based on specific Amazon S3 operations, such as GetObject, GetObjectVersion, DeleteObject, or DeleteBucket.
The conditions can be such things as IP addresses, IP address ranges in CIDR notation, dates, user agents, HTTP referrer and transports (HTTP and HTTPS).
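As an illustration (the bucket name and IP range are assumptions, not part of the original example), a policy that allows reads of every object in a bucket, but only from a corporate address range, could look like the following; it is attached with the SDK's put_bucket_policy call:

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ReadOnlyFromCorpNetwork",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::nitanshi/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="nitanshi", Policy=json.dumps(policy))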

AWS Identity and Access Management

You can use IAM with Amazon S3 to control the type of access a user or group of users has to specific parts of an Amazon S3 bucket your AWS account owns.
Common Operations:
  • Create a Bucket – Create and name your own bucket in which to store your objects.
  • Write an Object – Store data by creating or overwriting an object. When you write an object, you specify a unique key in the namespace of your bucket. This is also a good time to specify any access control you want on the object.
  • Read an Object – Read data back. You can download the data via HTTP or BitTorrent.
  • Delete an Object – Delete some of your data.
  • List Keys – List the keys contained in one of your buckets. You can filter the key list based on a prefix (see the sketch below).
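The last two operations map onto boto3 roughly as follows (the prefix and key names are assumptions):

import boto3

s3 = boto3.client("s3")

# List the keys in a bucket that share a common prefix.
resp = s3.list_objects_v2(Bucket="nitanshi", Prefix="photos/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Delete a single object by key.
s3.delete_object(Bucket="nitanshi", Key="photos/jigi.jpg")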

Amazon S3 Application Programming Interfaces (API)

The Amazon S3 architecture is designed to be programming language-neutral, using Amazon-supported interfaces to store and retrieve objects.
Amazon S3 provides a REST and a SOAP interface. They are similar, but there are some differences. For example, in the REST interface, metadata is returned in HTTP headers. Because we only support HTTP requests of up to 4 KB (not including the body), the amount of metadata you can supply is restricted.

The REST Interface

The REST API is an HTTP interface to Amazon S3. Using REST, you use standard HTTP requests to create, fetch, and delete buckets and objects.
You can use any toolkit that supports HTTP to use the REST API. You can even use a browser to fetch objects, as long as they are anonymously readable.
The REST API uses the standard HTTP headers and status codes, so that standard browsers and toolkits work as expected. In some areas, we have added functionality to HTTP (for example, we added headers to support access control). In these cases, we have done our best to add the new functionality in a way that matches the style of standard HTTP usage.
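Because the REST API is plain HTTP, an anonymously readable object can be fetched with nothing more than the standard library; the URL below reuses the earlier example and assumes the object has been made publicly readable:

import urllib.request

url = "https://nitanshi.s3.amazonaws.com/photos/jigi.jpg"
with urllib.request.urlopen(url) as resp:
    body = resp.read()
    print(resp.status, resp.headers.get("Content-Type"), len(body), "bytes")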

The SOAP Interface

SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features will not be supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.
The SOAP API provides a SOAP 1.1 interface using document literal encoding.
Use a SOAP toolkit such as Apache Axis or Microsoft .NET to create bindings, and then write code that uses the bindings to call Amazon S3.

Paying for Amazon S3

Pricing for Amazon S3 is designed so that you don't have to plan for the storage requirements of your application. Most storage providers force you to purchase a predetermined amount of storage and network transfer capacity: If you exceed that capacity, your service is shut off or you are charged high overage fees. If you do not exceed that capacity, you pay as though you used it all.
Amazon S3 charges you only for what you actually use, with no hidden fees and no overage charges. This gives developers a variable-cost service that can grow with their business while enjoying the cost advantages of Amazon's infrastructure.
Before storing anything in Amazon S3, you need to register with the service and provide a payment instrument that will be charged at the end of each month. There are no set-up fees to begin using the service. At the end of the month, your payment instrument is automatically charged for that month's usage.

Step-by-Step Migration of a SQL Server Database from Windows to Linux


SQL Server's backup and restore feature is the recommended way to migrate a database from SQL Server on Windows to SQL Server 2017 on Linux.
Prerequisites:
The following prerequisites are required to complete the migration.
A Windows machine with the following:
  • SQL Server installed.
  • SQL Server Management Studio installed.
  • A target database to migrate.
A Linux machine with the following installed:
  • SQL Server 2017 (RHEL, SLES, or Ubuntu) with the command-line tools.
Create a backup on the Windows box:
There are several ways to create a backup file of a database on Windows. The following steps use SQL Server Management Studio (SSMS).
  • Start SQL Server Management Studio on your Windows machine.
  • In the connection dialog, enter localhost.
  • In Object Explorer, expand Databases.
  • Right-click your target database, select Tasks, and then click Back Up....
  • In the Back Up Database dialog, verify that Backup type is Full and Back up to is Disk. Note the name and location of the file. For example, a database named DB on SQL Server 2016 has a default backup path of:
C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\DB.bak
  • Click OK to back up your database.
Another option is to run a Transact-SQL query to create the backup file. The following Transact-SQL command performs the same actions as the previous steps for a database called DB:
BACKUP DATABASE [DB] TO  DISK =
N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\DB.bak'
WITH NOFORMAT, NOINIT, NAME = N'DB-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
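Optionally, before copying the file to Linux, you can check that the backup is readable. A short sketch using the same path as above:

RESTORE VERIFYONLY FROM DISK =
N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\DB.bak'
GO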

Copy the backup file to Linux:

To restore the database, you must first transfer the backup file from the Windows machine to the target Linux machine. There are several ways to move the file to Linux; one option is to use a Bash shell (terminal window) running on Windows:

  • Install a Bash shell on your Windows machine that supports the scp (secure copy) and ssh (remote login) commands.
  • Open a Bash session on Windows and navigate to the directory containing your backup file:

cd 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\'

Then use the scp command to transfer the file to the target Linux machine. The following example transfers DB.bak to the home directory of user on the Linux server with an IP address of 192.168.0.0:

scp DB.bak user@192.168.0.0:./

Alternatively, you can use FTP or a graphical tool such as WinSCP to transfer the backup file to your Linux machine.

Move the backup file to your backup directory before restoring on Linux:

At this point, the backup file is on your Linux server in your user's home directory. Before restoring the database to SQL Server, you must place the backup in a subdirectory of /var/opt/mssql.

Connect to the Linux server:

ssh user@192.168.0.0

Enter super user mode.

sudo su

Create a new backup directory. The -p parameter does nothing if the directory already exists.

mkdir -p /var/opt/mssql/backup

Move the backup file to that directory. In the following example, the backup file resides in the home directory of user. Change the command to match the location and file name of your backup file.

mv /home/user/DB.bak /var/opt/mssql/backup/

Exit super user mode.

exit

Restore your database on Linux:
To restore the database backup, you can use the RESTORE DATABASE Transact-SQL (T-SQL) command.
In the same terminal, launch sqlcmd. The following example connects to the local SQL Server instance with the SA user. Enter the password when prompted, or specify the password by adding the -P parameter.
sqlcmd -S localhost -U SA
At the sqlcmd prompt (1>), enter the following RESTORE DATABASE command, pressing ENTER after each line (you cannot copy and paste the entire multi-line command at once). Replace all occurrences of DB with the name of your database, and make sure the logical file names used with MOVE match the names recorded in the backup.
RESTORE DATABASE DB
FROM DISK = '/var/opt/mssql/backup/DB.bak'
WITH MOVE 'DB' TO '/var/opt/mssql/data/DB.mdf',
MOVE 'DB_log' TO '/var/opt/mssql/data/DB_log.ldf'
GO
You should get a message that the database was successfully restored.
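If instead the restore fails with an error about logical file names, you can list the names recorded in the backup (at the same sqlcmd prompt) and adjust the MOVE clauses accordingly:

RESTORE FILELISTONLY
FROM DISK = '/var/opt/mssql/backup/DB.bak'
GO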
Verify the restoration by listing all of the databases on the server. The restored database should be listed.
SELECT Name FROM sys.Databases
GO

Run other queries on your migrated database. The following command switches context to the DB database and selects rows from one of its tables.
USE DB
SELECT * FROM Table
GO

When you are done using sqlcmd, type exit.
Once your work is complete, exit the ssh session as well.

==================Happy Learning================

Step-by-Step Configuration of a Logical Standby Database Using RMAN


Primary Database Name = PRIMDB
Secondary Database Name = SECODB

1.     Make sure that the primary database is in archive log mode:
Shut immediate;
Startup mount;
Alter database archivelog;
Alter database open;

2.     Make sure that force logging is enabled on the primary database:
Alter database force logging;

3.     Configure TNSNAMES.ora on the Primary Server :
PRIMDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.X.X)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = PRIMDB)
    )
  )
SECODB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.X.X)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = SECODB)
      (UR=A)
    )
  )

4.     The following parameters need to be added to the primary database (PRIMDB) pfile:
PRIMDB.__db_cache_size=251658240
PRIMDB.__java_pool_size=4194304
PRIMDB.__large_pool_size=4194304
PRIMDB.__shared_pool_size=142606336
PRIMDB.__streams_pool_size=8388608
*.audit_file_dest='D:\oracle\product\10.2.0/admin/PRIMDB/adump'
*.background_dump_dest='D:\oracle\product\10.2.0/admin/PRIMDB/bdump'
*.compatible='10.2.0.3.0'
*.control_files='D:\oracle\product\10.2.0/oradata/PRIMDB/control01.ctl','D:\oracle\product\10.2.0/oradata/PRIMDB/control02.ctl','D:\oracle\product\10.2.0/oradata/PRIMDB/control03.ctl'
*.core_dump_dest='D:\oracle\product\10.2.0/admin/PRIMDB/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='PRIMDB'
*.db_unique_name='PRIMDB'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=PRIMDBXDB)'
*.FAL_CLIENT='PRIMDB'   # deprecated parameter in 11g
*.FAL_SERVER='SECODB'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMDB,SECODB)'
*.log_archive_dest_1='location=D:\oracle\product\10.2.0\archive\PRIMDB VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMDB'
*.LOG_ARCHIVE_DEST_2='SERVICE=SECODB LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=SECODB'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.log_archive_format='Arc_%d_%s_%t_%r'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.LOG_FILE_NAME_CONVERT='D:\ORACLE\PRODUCT\10.2.0\ORADATA\SECODB','D:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMDB'
*.DB_FILE_NAME_CONVERT='D:\ORACLE\PRODUCT\10.2.0\ORADATA\SECODB','D:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMDB'
*.nls_date_format='dd/MM/yyyy'
*.open_cursors=300
*.pga_aggregate_target=148897792
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_max_size=419430400
*.sga_target=419430400
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='D:\oracle\product\10.2.0/admin/PRIMDB/udump'

5.     Create an spfile from the newly created pfile and start the database using the spfile:
Shut immediate;
Startup nomount;
Create spfile from pfile;
Shut immediate;
Startup

6.     If you have no password file, then create one using the orapwd utility:
orapwd file=PWDPRIMDB.ora password=oracle entries=5 ignorecase=y

7.     Now connect to the database using RMAN and back up the database using the following script:
rman target /
configure default device type to disk;
configure device type disk parallelism 1;
configure retention policy to redundancy 1;
configure channel device type disk maxpiecesize 1G format 'D:\RMAN\PRIMDB_FULL_%t_%s_%p.rec';
configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to 'D:\RMAN\PRIMDB_C_%F.rec';
crosscheck archivelog all;
delete noprompt expired archivelog all;
crosscheck backup;
delete noprompt obsolete;
delete noprompt expired backup;
run {
       allocate channel c1 type disk;
       backup  database plus archivelog;
       backup current controlfile for standby ;
       sql 'alter system archive log current';
       backup  archivelog all;
       release channel c1;
}

8.     Now copy the backup files generated in the D:\RMAN folder and place them in the same directory on the secondary server (SECODB). The directory structure on both the primary and standby servers should be the same.




9.     Configure TNSNAMES.ora on the Secondary Server (SECODB) :
SECODB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.X.X)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = SECODB)
    )
  )

PRIMDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.X.X)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = PRIMDB)
      (UR=A)
    )
  )

10.   Now set up the secondary database (SECODB) parameter file:
SECODB.__db_cache_size=251658240
SECODB.__java_pool_size=4194304
SECODB.__large_pool_size=4194304
SECODB.__shared_pool_size=142606336
SECODB.__streams_pool_size=8388608
*.audit_file_dest='D:\oracle\product\10.2.0/admin/SECODB/adump'
*.background_dump_dest='D:\oracle\product\10.2.0/admin/SECODB/bdump'
*.compatible='10.2.0.3.0'
*.control_files='D:\oracle\product\10.2.0/oradata/SECODB/control01.ctl','D:\oracle\product\10.2.0/oradata/SECODB/control02.ctl','D:\oracle\product\10.2.0/oradata/SECODB/control03.ctl'
*.core_dump_dest='D:\oracle\product\10.2.0/admin/SECODB/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='PRIMDB'
*.db_unique_name='SECODB'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=SECODBXDB)'
*.FAL_CLIENT='SECODB'   # deprecated parameter in 11g
*.FAL_SERVER='PRIMDB'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(SECODB,PRIMDB)'
*.log_archive_dest_1='location=D:\oracle\product\10.2.0\archive\SECODB VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=SECODB'
*.LOG_ARCHIVE_DEST_2='SERVICE=PRIMDB LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMDB'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.log_archive_format='Arc_%d_%s_%t_%r'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.LOG_FILE_NAME_CONVERT='D:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMDB','D:\ORACLE\PRODUCT\10.2.0\ORADATA\SECODB'
*.DB_FILE_NAME_CONVERT='D:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIMDB','D:\ORACLE\PRODUCT\10.2.0\ORADATA\SECODB'
*.nls_date_format='dd/MM/yyyy'
*.open_cursors=300
*.pga_aggregate_target=148897792
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_max_size=419430400
*.sga_target=419430400
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='D:\oracle\product\10.2.0/admin/SECODB/udump'

11.   At the standby server, create an Oracle service using oradim:
Windows:
oradim -new -sid SECODB -intpwd oracle -startmode manual -pfile D:\oracle\product\10.2.0\db_1\database\initSECODB.ora

        Linux
        export ORACLE_SID=SECODB

12.   Create the directory structure for the secondary database (i.e., bdump, udump, etc.) and start the secondary database in NOMOUNT mode:
startup nomount;

13.   At the secondary server, run the following script from the command prompt:
rman TARGET sys/oracle@PRIMDB  AUXILIARY /
LIST BACKUP OF CONTROLFILE;
LIST COPY OF CONTROLFILE;
RUN
{
 DUPLICATE TARGET DATABASE FOR STANDBY
 NOFILENAMECHECK
 DORECOVER;
}

In 11gR2, you can restore the database without taking the backup (as shown in Step 7) with the following commands:
rman TARGET sys/oracle@PRIMDB AUXILIARY sys/oracle@SECODB

       Duplicate target database for standby from active database;


*********************************************************************************************************************************************************
Note :
                As of 11.2.0.2.0 you can connect to the target with “connect target”; however, if you don’t specify the username a duplication to standby will fail later with “invalid username/password”.
While this command is running, I like to tail the standby alert log to see what is going on and watch for errors. Note that it is normal and OK to get "ORA-27037: unable to obtain file status" on the online and standby log files. To perform the duplication in parallel and improve performance, you can allocate primary and standby channels and then run the duplicate command.
run
{
allocate channel chan1 type disk;
allocate channel chan2 type disk;
allocate channel chan3 type disk;
allocate channel chan4 type disk;
allocate auxiliary channel aux1 type disk;
allocate auxiliary channel aux2 type disk;
allocate auxiliary channel aux3 type disk;
allocate auxiliary channel aux4 type disk;
duplicate target database for standby from active database;
}

***********************************************************************************************************************************************************





14.   If the script completes without any error, then the standby database (physical standby) is created and is in mount mode. Check whether the log sequence numbers of both servers match (it might take some time):
archive log list;
select NAME, OPEN_MODE, DB_UNIQUE_NAME, DATABASE_ROLE from v$database;

NAME         OPEN_MODE            DB_UNIQUE_NAME            DATABASE_ROLE
-------      ------------         ----------------          -----------------------
PRIMDB       MOUNTED              SECODB                    PHYSICAL STANDBY


15.   If the log sequences match, then add standby redo log files (SECODB). You need to calculate the number of standby logfiles as (number of logfiles in primary + 1). I have 3 logfiles in the primary, so I need to add 4 standby logfiles in the secondary; the first group is shown here and the remaining groups are sketched below. You can also add standby logfiles on the primary server.
ALTER DATABASE ADD STANDBY LOGFILE group 4 ('D:\oracle\product\10.2.0\oradata\SECODB\REDO04.log') SIZE 50M;
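The remaining groups follow the same pattern; a sketch assuming 50 MB online redo logs and the same directory (standby logfiles should be at least as large as the largest online redo log):

ALTER DATABASE ADD STANDBY LOGFILE group 5 ('D:\oracle\product\10.2.0\oradata\SECODB\REDO05.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE group 6 ('D:\oracle\product\10.2.0\oradata\SECODB\REDO06.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE group 7 ('D:\oracle\product\10.2.0\oradata\SECODB\REDO07.log') SIZE 50M;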

16.   Add the following entries on the primary server:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE SCOPE=BOTH;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='LOCATION=D:\ORACLE\PRODUCT\10.2.0\ARCHIVE\STDBY_PRIMDB VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PRIMDB' SCOPE=BOTH;

17.   On the standby server, add the following entries:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE SCOPE=BOTH;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='LOCATION=D:\ORACLE\PRODUCT\10.2.0\ARCHIVE\STDBY_SECODB VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=SECODB' SCOPE=BOTH;

18.   Now, on the primary server (PRIMDB), execute the following commands:
alter database add supplemental log data;  
execute dbms_logstdby.build;
alter system switch logfile;
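Optionally, you can confirm that supplemental logging is now enabled on the primary with a quick query against V$DATABASE:

select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;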

19.   Shut down the standby database and start it in NOMOUNT stage:
shut immediate;
startup nomount;
20.   Now mount the standby database using the following command :
alter database mount standby database;
alter database recover to logical standby SECODB;

21.   Wait for the command to complete; after it completes, shut down the database and start the standby database in mount mode:
shut immediate;
startup mount;

22.   Open the database with RESETLOGS:
alter database open resetlogs;
select NAME, OPEN_MODE, DB_UNIQUE_NAME, DATABASE_ROLE from V$database ;

NAME          OPEN_MODE            DB_UNIQUE_NAME                    DATABASE_ROLE
---------     ----------           -----------------------           ------------------------
SECODB        READ WRITE           SECODB                            LOGICAL STANDBY

23.   Start Log apply process (SECODB):
alter database start logical standby apply immediate;








IMPORTANT VIEWS:

SELECT * FROM DBA_LOGSTDBY_UNSUPPORTED;          
SELECT * FROM DBA_LOGSTDBY_NOT_UNIQUE;           
SELECT * FROM DBA_LOGSTDBY_PARAMETERS;           
SELECT * FROM DBA_LOGSTDBY_PROGRESS;             
SELECT * FROM DBA_LOGSTDBY_LOG;                      
SELECT * FROM DBA_LOGSTDBY_SKIP_TRANSACTION;
SELECT * FROM DBA_LOGSTDBY_SKIP;                     
SELECT * FROM DBA_LOGSTDBY_EVENTS;                   
SELECT * FROM DBA_LOGSTDBY_HISTORY; 
SELECT * FROM V$LOGSTDBY;
SELECT * FROM V$LOGSTDBY_STATS;

SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
(SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1) LOCAL
WHERE LOCAL.SEQUENCE# NOT IN (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND THREAD# = LOCAL.THREAD#);


SQL> select applied_thread#, applied_scn, to_char(applied_time,'dd/mm/yyyy hh12:mi:ss'), newest_scn, applied_sequence#, newest_sequence# from DBA_LOGSTDBY_PROGRESS;

APPLIED_THREAD# APPLIED_SCN TO_CHAR(APPLIED_TIM NEWEST_SCN APPLIED_SEQUENCE# NEWEST_SEQUENCE#
--------------- ----------- ------------------- ---------- ----------------- ----------------
              1     2159531 16/01/2013 12:47:09    2185744               206              211

Add an Object to the Skip List:

Stop the log apply process:
alter database stop logical standby apply;

execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'INS', object_name => 'SERVICE_TRANSACTION_LOG');

Start the apply process again:
alter database start logical standby apply immediate;

Disable/Enable Guard:
Alter database guard standby; (standby mode)

Alter database guard none; (None)

Alter database guard all; (enable)


Alter session disable guard;

Alter session enable guard;