Tutorial: Image retrieval optimisation with LINQ



Question:

I'm working on optimising the image retrieval from the database with LINQ.

The approximate image size is 4000 × 1000 pixels, which weighs about 400-600 KB.

The images are retrieved through a controller that is called by a web service; the web service itself is invoked through jQuery.

The first image is retrieved in about 0.7-1.5 seconds, while subsequent images take between 3 and 4 seconds.

The code that reads images from the database isn't wrapped in a using { } block, and I'm struggling to see how to use this construct with methods that simply return a byte array.

Is there any way to improve the performance here?

Thank you

Edit: I'm going to run a profiler and post the code once I have the results. Thank you.


Solution:1

The way I understand your question, the image is retrieved through a web service, which means the image is fetched from the database, Base64-encoded into the SOAP message, sent over the wire, and decoded on the client side (where your LINQ query is). This will always be slow: the data actually sent is at least a third larger than the raw image BLOB, since Base64 inflates binary data by a factor of 4/3 before the SOAP envelope adds its own overhead.
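The 4/3 inflation is easy to verify. This minimal sketch (the 600 KB size is taken from the question; everything else is illustrative) encodes a blob the way a SOAP serializer would:

```csharp
using System;

class Program
{
    static void Main()
    {
        // A 600 KB blob, the upper end of the image sizes in the question.
        byte[] blob = new byte[600 * 1024];

        // SOAP carries binary payloads as Base64 text.
        string encoded = Convert.ToBase64String(blob);

        // 614400 bytes -> 819200 Base64 characters: exactly 4/3 the size,
        // before any XML envelope is wrapped around it.
        Console.WriteLine(encoded.Length);
    }
}
```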

Getting huge BLOBs from a database this way is always a performance nightmare. If you need to query them (which is what you're using LINQ for), consider querying only the metadata first. Once you have the ID of the image you need, retrieve the full image in a separate call.
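A minimal sketch of that split, assuming a hypothetical table with Id, FileName, and Data columns (the names ImageRow and ImageMeta are illustrative, not from the original post):

```csharp
using System.Collections.Generic;
using System.Linq;

public class ImageRow
{
    public int Id { get; set; }
    public string FileName { get; set; }
    public byte[] Data { get; set; }   // the heavy BLOB column
}

public class ImageMeta
{
    public int Id { get; set; }
    public string FileName { get; set; }
}

public static class ImageQueries
{
    // Project only the lightweight columns. Against LINQ to SQL or
    // Entity Framework this becomes a SELECT that skips the BLOB entirely.
    public static List<ImageMeta> FindMetadata(IQueryable<ImageRow> images,
                                               string pattern)
    {
        return images
            .Where(i => i.FileName.Contains(pattern))
            .Select(i => new ImageMeta { Id = i.Id, FileName = i.FileName })
            .ToList();
    }

    // Fetch the BLOB only for the single image you actually need.
    public static byte[] GetData(IQueryable<ImageRow> images, int id)
    {
        return images.Single(i => i.Id == id).Data;
    }
}
```

The key point is the Select projection: without it, a query like `images.Where(...)` materialises every BLOB in the result set.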

However, this alone will not make it fast. For better performance, each retrieved image should be cached on your server (after the web service fetches it) so that no image is retrieved from the database more than once. This kind of caching is trivial to implement. Alternatively, you could avoid storing images as BLOBs at all, but that design decision is probably too late to change now.
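A minimal sketch of such a cache, assuming a loader delegate that stands in for your real database call (in production you would likely prefer MemoryCache so entries can expire):

```csharp
using System;
using System.Collections.Concurrent;

// Each image is loaded from the database at most once per server lifetime;
// all later requests for the same ID are served from memory.
public class ImageCache
{
    private readonly ConcurrentDictionary<int, byte[]> cache =
        new ConcurrentDictionary<int, byte[]>();
    private readonly Func<int, byte[]> loadFromDb;

    public ImageCache(Func<int, byte[]> loadFromDb)
    {
        this.loadFromDb = loadFromDb;
    }

    public byte[] Get(int imageId)
    {
        // GetOrAdd invokes the loader only on a cache miss.
        return cache.GetOrAdd(imageId, loadFromDb);
    }
}
```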

Note on using: simply apply it to any variable whose type implements IDisposable. That said, the missing using is unlikely to be the cause of your performance drain.
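To address the asker's specific worry: returning a byte array from inside a using block is perfectly fine, because the array is plain data and needs no disposal; only the connection, command, and reader do. A sketch, with placeholder connection string, table, and column names:

```csharp
using System.Data.SqlClient;

public static class ImageReader
{
    // The disposables are cleaned up when the method returns;
    // the byte[] result survives the using blocks untouched.
    public static byte[] GetImageBytes(string connectionString, int id)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Data FROM Images WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            return (byte[])cmd.ExecuteScalar();
        }
    }
}
```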


Solution:2

It's very hard to say anything without the actual source code. What's worse, it's nearly as hard to say anything sensible even with the code in hand: programmers are notoriously bad at finding bottlenecks by guesswork. What you need is a decent server-side performance profiler (dotTrace, for example) and a client-side HTTP analyzer (like Fiddler).

With the server-side profiler you'll see which lines of code take the most time to execute, and you can then optimise based on that knowledge. Perhaps you're pulling excessive amounts of data from the DB, or a table lacks proper indexes.

The client-side analyzer will show you which HTTP headers are sent with your responses (remember that caching is king in HTTP) and how long your requests actually take.
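If the analyzer shows no caching headers, a sketch like the following (ASP.NET MVC; the action, route, and LoadImage helper are hypothetical) lets the browser reuse an image instead of re-downloading it on every request:

```csharp
using System;
using System.Web;
using System.Web.Mvc;

public class ImagesController : Controller
{
    public ActionResult Show(int id)
    {
        byte[] data = LoadImage(id); // stand-in for your existing retrieval code

        // Allow browsers and proxies to cache the response for an hour.
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetMaxAge(TimeSpan.FromHours(1));

        return File(data, "image/png");
    }

    private byte[] LoadImage(int id)
    {
        // Placeholder for the real database call.
        return new byte[0];
    }
}
```

With these headers in place, Fiddler should show conditional or fully cached requests instead of repeated full downloads.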

