What is the best approach to decoupled database design in terms of data sharing?


I have a series of Oracle databases that need to access each other's data. The most efficient way to do this is to use database links: by setting up a few database links, I can get data from A to B with a minimum of fuss. The problem for me is that you end up with a tightly coupled design, and if one database goes down it can bring the coupled databases down with it (or at least part of an application on those databases).

What alternative approaches have you tried for sharing data between Oracle databases?

Update after a couple of responses...

I wasn't thinking so much about replication as about accessing "master data". For example, suppose I have a central database with currency conversion rates and I want to pull a rate into a separate database (application). For such a small dataset, igor-db's suggestion of materialized views over DB links would work beautifully. However, when you are dynamically sampling from a very large dataset, the option of caching locally starts to become trickier. What options would you go for in these circumstances? I wondered about an XML service, but tuinstoel (in a comment on le dorfier's reply) rightly questioned the overhead involved.

Summary of responses...

On the whole I think igor-db is closest, which is why I've accepted that answer, but I thought I'd add a little to bring out some of the other answers.

For my purposes, where I'm looking at data replication only, it looks like Oracle BASIC replication (as opposed to ADVANCED replication) is the one for me. Using materialized view logs on the master site and materialized views on the snapshot site looks like an excellent way forward.

Where this isn't an option, perhaps where the data volumes make full table replication an issue, then a messaging solution seems the most appropriate Oracle solution. Oracle Advanced Queueing seems the quickest and easiest way to set up a messaging solution.

The least preferable approach seems to be rolling your own XML web services, to be used only where the relative ease of Advanced Queueing isn't an option.


Streams is the Oracle replication technology. You can also use MVs over database links, so that database 'A' has a materialized view of the data from database 'B'. If 'B' goes down, the MV can't be refreshed, but the data is still available in 'A'.
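As a minimal sketch of that approach (the table name `currency_rates` and link name `b_link` are hypothetical, borrowed from the OP's currency-rate example):

```sql
-- On the master site (database B): record row changes so fast refresh works
CREATE MATERIALIZED VIEW LOG ON currency_rates
  WITH PRIMARY KEY;

-- On the snapshot site (database A): pull over a DB link, refreshing hourly.
-- Only the changes in the MV log are shipped, not the whole table.
CREATE MATERIALIZED VIEW currency_rates_mv
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1/24
  AS SELECT * FROM currency_rates@b_link;
```

Queries in 'A' hit `currency_rates_mv` locally, so they keep working even while 'B' (and the link) is down; only the refresh fails until 'B' comes back.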

Mileage may depend on DB volumes, change volumes...


It looks to me like it's by definition tightly coupled if you need simultaneous synchronous access to multiple databases.

If this is about transferring data, for instance, and it can be asynchronous, you can install a message queue between the two and have two processes, with one reading from the source and the other writing to the sink.


The OP has provided more information. He states that the dataset is very large. Well, how large is large? And how often are the master tables changed?

With materialized view logs, Oracle will propagate only the changes made in the master table; a complete refresh of the data isn't necessary. Oracle Streams likewise communicates only the modifications to the other side.

Buying storage is cheap, so why not cache locally? It's much cheaper than programming your own solution.

An XML service doesn't help you when its database is unavailable, so I don't see how it would help. Oracle has many options for replication; explore them.


I've built XML services. They provide interoperability between different systems with a clear interface (contract). You can build an XML service in C# and consume it from Java. However, XML services are not fast.


Why not use Advanced Queuing? Why roll your own XML service to move messages (DML) between Oracle instances when it's already there? You can have propagation move messages from one instance to another when both are up, and process them as needed on the destination servers. AQ is really rather simple to set up and use.
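A minimal AQ setup looks something like the following (the queue, queue table, payload type, and link names are hypothetical; the payload object type would need to be created first):

```sql
BEGIN
  -- Create a queue table for a user-defined payload type and start a queue on it
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'rate_qt',
    queue_payload_type => 'rate_update_t');
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'rate_q',
    queue_table => 'rate_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'rate_q');

  -- Schedule propagation to the remote instance over a database link;
  -- messages queue up locally while the remote side is down and are
  -- delivered once it is reachable again.
  DBMS_AQADM.SCHEDULE_PROPAGATION(
    queue_name  => 'rate_q',
    destination => 'remote_db_link');
END;
/
```

Producers enqueue with `DBMS_AQ.ENQUEUE` and the destination dequeues with `DBMS_AQ.DEQUEUE` at its own pace, which is exactly the loose coupling the question is after.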


Why do they need to be separate databases?

Having a single database/instance with multiple schemas might be easier.

Keeping one database up (with appropriate standby databases etc) will be easier than keeping N up.


What kind of immediacy do you need, and how much bi-directionality? If the data can be a little older and can be pulled from one "master source", create a series of simple ETL scripts run on a schedule to pull the data from the "source" database into the others.

You can then tailor the structure of the data to feed the needs of the client database(s) more precisely and you can change the structure of the source data until you're blue in the face.
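One hedged sketch of such a scheduled pull, using `DBMS_SCHEDULER` and a `MERGE` over a database link (all table, column, link, and job names here are hypothetical):

```sql
-- Nightly job: merge the last day's changed rows from the source DB into a
-- local copy, shaped however the client database needs it.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'pull_rates_job',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[
      MERGE INTO local_rates l
      USING (SELECT currency, rate, updated_at
               FROM rates@source_link
              WHERE updated_at > SYSDATE - 1) s
         ON (l.currency = s.currency)
       WHEN MATCHED THEN
         UPDATE SET l.rate = s.rate, l.updated_at = s.updated_at
       WHEN NOT MATCHED THEN
         INSERT (currency, rate, updated_at)
         VALUES (s.currency, s.rate, s.updated_at);
    ]',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```

Because the pull runs on a schedule, a source outage only delays the next refresh; the client database keeps serving its (slightly stale) local copy.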
