How scalable really is a web-services-based architecture?


Whenever someone talks about a services-based architecture, scalability is usually mentioned in the same breath. However, it seems that using services adds overhead rather than reducing it, since there is now a protocol, like SOAP or REST, involved. So, does a web-services-based architecture really add performance benefits as the number of users of, say, a web application scales by perhaps an order of magnitude? Or are the scalability requirements simply offloaded onto the services, rather than the core application?


Scalability and performance are two separate things. Yes, a service-based approach does add the overhead of a network protocol, but this is a minimal sacrifice for the benefit of being able to rapidly adopt well-tested services in any application on the domain.

If the overhead of the network is a deal-breaker for the system you want to build, then clearly SOA is the wrong choice for you. Remember that not every service must be accessed over HTTP. I think you would be surprised how fast some protocols (like net.tcp) can be.


Remember that you can scale out a web service to run on multiple servers, without affecting the clients. That doesn't work so well with a tightly-coupled system.


A properly-designed SOA allows each component in the system to work independently of all the others and run asynchronously in parallel, so both performance and scalability (two different things) become limited only by the slowest/least scalable piece in your system, rather than the total time it takes for all components to execute in serial.

SOA is not appropriate for all solutions though, so if you don't see any clear benefit for your particular case, then there may be none.


Web services don't give you scalability for free. In fact it's pretty easy to build a service that won't scale.

What they do give you is opportunities to build in scalability. And, by having well defined service interfaces, you can swap out a quick-and-dirty non-scalable implementation of a service with a better implementation when you need it.
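A tiny Python sketch of that swap, with hypothetical names (`UserLookup`, `InMemoryLookup`, `CachedLookup`): callers depend only on the interface, so a quick-and-dirty implementation can be replaced by a better one without touching them.

```python
from abc import ABC, abstractmethod

# The well-defined service interface. Everything below it is illustrative.
class UserLookup(ABC):
    @abstractmethod
    def find(self, user_id: int) -> str: ...

class InMemoryLookup(UserLookup):        # quick-and-dirty first cut
    def __init__(self):
        self._users = {1: "alice", 2: "bob"}
    def find(self, user_id):
        return self._users[user_id]

class CachedLookup(UserLookup):          # drop-in "better" implementation
    def __init__(self, backend: UserLookup):
        self._backend, self._cache = backend, {}
    def find(self, user_id):
        if user_id not in self._cache:
            self._cache[user_id] = self._backend.find(user_id)
        return self._cache[user_id]

def greet(service: UserLookup, user_id: int) -> str:
    # The caller sees only the interface, never the implementation.
    return "hello, " + service.find(user_id)

assert greet(InMemoryLookup(), 1) == greet(CachedLookup(InMemoryLookup()), 1)
```

The same idea applies whether the boundary is an in-process class or a remote web service: it is the stable interface that makes the implementation swappable.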

The important thing is not to forget the 'A' in 'SOA'. You can make a huge mess by just wantonly creating a bunch of services. Make sure you have an architecture.

One huge step towards scalability is moving away from basic, synchronous, query/response-type services (such as SOAP RPC) towards asynchronous services. See Hohpe and Woolf's 'Enterprise Integration Patterns'.
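As a rough illustration of that synchronous-to-asynchronous shift (this is my own sketch, not an example from the book), here is a minimal Python fragment using an in-process queue as a stand-in for a message channel; all names are made up.

```python
import queue
import threading

# Instead of blocking on each call, the caller drops messages on a queue
# and a worker consumes them independently. Adding throughput then means
# adding workers, not making callers wait.
requests = queue.Queue()
results = {}

def worker():
    while True:
        msg = requests.get()
        if msg is None:                 # shutdown sentinel
            break
        order_id, amount = msg
        results[order_id] = amount + 20  # stand-in for real processing
        requests.task_done()

t = threading.Thread(target=worker)
t.start()

# The caller fires messages and moves on -- no per-call round trip.
for order_id in range(5):
    requests.put((order_id, 100))

requests.join()        # synchronize only when the results are needed
requests.put(None)
t.join()
```

With real message-oriented middleware the queue lives outside the process, but the decoupling of caller from processor is the same.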


RESTafarians will remind you that REST isn't a protocol--it's an architectural style. REST is a way to use HTTP directly, without the wrappers that SOAP uses, to provide a services model. REST is much closer to the wire than SOAP. That alone doesn't make one better than the other; they both have their place.

The scalability of a services model isn't so much directly related to "services" (as in Web Services with a capital W and a capital S) as it is to the stateless nature of those services. Well-built Web apps are also scalable and could be argued to be as scalable as any services-driven architecture.

One of the differences between the two models is that a Web app without "services" interacts with referenced modules living in the same process at a binary level--no serialization necessary. Web services (SOAP or REST) introduce some level of serialization that adds overhead. This overhead, though, is often deemed worthwhile given the reuse it provides (i.e., the same Web services that drive your apps internally can also be used to make business partners happy).
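That serialization overhead can be made concrete with a small Python sketch; `total_price` is a made-up stand-in for real service logic, and JSON stands in for whatever wire format the service uses.

```python
import json

# Hypothetical service logic shared by both call paths.
def total_price(items):
    return sum(i["qty"] * i["unit_price"] for i in items)

order = [{"qty": 2, "unit_price": 9.5}, {"qty": 1, "unit_price": 20.0}]

# In-process call: the argument is passed by reference, no copying.
direct = total_price(order)

# Web-service-style call: the payload is serialized to text, "sent",
# and parsed again on the other side -- the overhead described above.
wire = json.dumps(order)
remote = total_price(json.loads(wire))

assert direct == remote == 39.0
```

The results are identical; the difference is purely the encode/decode work on every call, which is the price paid for reuse across process and organization boundaries.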

One good architecture is to expose the service classes (not Web services--the terminology gets confusing quickly; "services" in this context means the classes that implement low-level business processes) to your local apps directly, in process, exploiting the ability to talk to these services at a binary level. Then, for business partners and other external uses, put a thin Web-services layer over these already-tested service classes, making the same services layer available as Web services.
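A minimal Python sketch of that layering, with hypothetical names: `PricingService` is the tested service class, and `quote_endpoint` is the thin serialization facade an external caller would hit.

```python
import json

# The tested, low-level "service class" used directly by local apps.
class PricingService:
    def quote(self, qty, unit_price):
        return round(qty * unit_price * 0.9, 2)   # 10% discount, illustrative

# Thin Web-services layer: same class, but requests and responses
# cross a serialization boundary.
def quote_endpoint(request_body: str) -> str:
    req = json.loads(request_body)
    result = PricingService().quote(req["qty"], req["unit_price"])
    return json.dumps({"quote": result})

# Local caller: in-process, no serialization.
local = PricingService().quote(3, 10.0)

# Partner caller: goes through the Web-services layer.
remote = json.loads(quote_endpoint('{"qty": 3, "unit_price": 10.0}'))["quote"]

assert local == remote == 27.0
```

All the business logic lives once, in the service class; the Web-services layer adds only transport and serialization, so it stays thin and easy to test.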


Okay, so let's define "scalability" first. Most anything will scale: if you need more capacity, to a first approximation, you can simply throw more hardware at it. But some architectures are "more scalable"--what does that mean? It has to do with cost: an architecture is "more scalable" if the cost per unit of added capacity is less.

Scalability in general has three ranges in any architecture: there's a fixed cost for the first part, so over that interval cost is flat (linear with a slope of 0); past that point the slope increases; and at some point adding capacity usually becomes very expensive.

We say a system is "linearly scalable" if the function describing cost per unit of added capacity is roughly linear over a large range. Obviously, it's desirable for that slope to be small.
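A toy Python model of those three ranges (all the numbers are invented) makes the "slope" concrete:

```python
# Toy cost model with the three ranges described above: a flat fixed-cost
# range, a roughly linear middle range, and a steeply priced tail.
def cost(capacity_units):
    fixed = 1000
    if capacity_units <= 10:
        return fixed                                      # flat range
    if capacity_units <= 100:
        return fixed + 50 * (capacity_units - 10)         # linear range
    return fixed + 50 * 90 + 500 * (capacity_units - 100) # expensive tail

def marginal_cost(units):
    # Cost of the *next* unit of capacity -- the "slope" at this point.
    return cost(units + 1) - cost(units)

print(marginal_cost(5))    # 0   -- within the fixed-cost range
print(marginal_cost(50))   # 50  -- linear range: constant slope
print(marginal_cost(200))  # 500 -- past the knee, each unit is pricey
```

"More scalable" then just means the linear range is wide and its slope (here, 50 per unit) is small.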

So, now, when someone asks about the "scalability" of an SOA, SOAP or REST or whatever, that's what they're talking about.

Service-oriented architectures are said to be "more scalable" because adding more capacity is relatively inexpensive, even though, as you say, there may be overhead that makes you need more capacity to service a single user. This is because a big part of the cost and complexity of managing a big system comes from the need to maintain session state, that is, to keep track of what's going on between calls. Service-oriented architectures avoid that by making each call relatively independent of the next, with RESTful architectures being the limiting case -- all the session state is encoded in the representation, so the rest of the system doesn't need to know about it. When you want to add capacity, all you need to do is add another server; simple load balancing will be sufficient to route messages to the new one.

(Notice this is also inherently more fault-tolerant: if a server dies, the others more or less automagically get the requests, and there's no session to lose.)
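Here is a minimal Python sketch of the state-in-the-representation idea (the names are invented; no real framework is involved): because every request carries the full session state, any server instance can handle it.

```python
import json

# Each request carries its own state (a cart, here), so nothing is
# remembered between calls and no server holds a session.
def handle_add_item(request: str) -> str:
    state = json.loads(request)
    state["cart"].append(state["item"])
    del state["item"]
    return json.dumps(state)   # full state goes back in the representation

def any_server(request):
    # Stand-in for "simple load balancing": which server handles the
    # request doesn't matter, because none of them keeps session state.
    return handle_add_item(request)

r1 = any_server('{"cart": [], "item": "book"}')
r2 = any_server(json.dumps({**json.loads(r1), "item": "pen"}))
assert json.loads(r2)["cart"] == ["book", "pen"]
```

Contrast this with a server-side session store: there, the second request must reach the machine that remembers the cart, which is exactly the routing complexity described below.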

Architectures that require lots of session state, like some J2EE systems, tend to scale less well, as you need lots of extra complexity to manage session state.

The sort of limiting case, which you saw in old CGI-based architectures, is the one where each user session requires a full heavyweight process; I've seen systems in the field where fork/exec was 40-50 percent of the load, where there needed to be a complicated software load-balancing rig to make sure that requests always got to the machine that held the session, and where simply running out of process slots was a major issue. One such system required buying a whole new high-end Starfire server to add capacity.
