In a previous post I talked about how using an ORM would have saved development time. Having made that statement, I thought I would share how the UI service layer we introduced still benefited considerably from the introduction of an in-memory cache.
The constructor for the service is shown below; pretty standard really, the interesting interface being ICacheProvider. As I said in the previous post, this service interface and implementation would not exist if an ORM had been used.
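The original constructor snippet isn't reproduced here, so as a rough sketch only (the service and repository names are my assumptions, not the actual code):

```csharp
using System;

// Hypothetical stand-ins: WidgetService and IWidgetRepository represent
// the real UI service and its data access dependency.
public class WidgetService
{
    private readonly IWidgetRepository _repository;
    private readonly ICacheProvider _cacheProvider;

    // The cache is injected alongside the data access dependency,
    // so the caching strategy can be swapped via the IoC container.
    public WidgetService(IWidgetRepository repository, ICacheProvider cacheProvider)
    {
        if (repository == null) throw new ArgumentNullException("repository");
        if (cacheProvider == null) throw new ArgumentNullException("cacheProvider");

        _repository = repository;
        _cacheProvider = cacheProvider;
    }
}
```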
The ICacheProvider interface is essentially a thin abstraction over the MemoryCache provided by the .NET Framework; this allowed a null caching provider to be defined, as well as isolation for testing.
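A sketch of what such a null provider looks like against the interface shown further down (the class name NullCacheProvider is my assumption): every Add reports success without storing anything, and every Get is a miss, so the service always falls through to the database.

```csharp
using System;
using System.Runtime.Caching;

// Null object implementation of the post's ICacheProvider: nothing is
// ever cached, so every lookup misses and the service hits the database
// on each call. Useful as a baseline and for test isolation.
public class NullCacheProvider : ICacheProvider
{
    public bool Add(CacheItem item, CacheItemPolicy policy) { return true; }

    public bool Add(string key, object value, DateTimeOffset absoluteExpiration, string regionName) { return true; }

    public bool Add(string key, object value, CacheItemPolicy policy, string regionName) { return true; }

    public object AddOrGetExisting(string key, object value, CacheItemPolicy policy, string regionName) { return null; }

    public object AddOrGetExisting(string key, object value, DateTimeOffset absoluteExpiration, string regionName) { return null; }

    public CacheItem AddOrGetExisting(CacheItem item, CacheItemPolicy policy) { return null; }

    public object Get(string key, string regionName) { return null; }

    public CacheItem GetCacheItem(string key, string regionName) { return null; }
}
```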
The DI binding was changed to the null implementation for the first part of the test:
The test involved loading some average-sized data (300-odd widgets) and monitoring how many SQL statements were executed against the database. The test was performed first with the null cache provider and then with the in-memory cache provider. I monitored the performance using SQL Server Profiler and the application log file. The profiler was configured with the following filter criteria:
Using the null cache provider produced the following results in SQL Server Profiler; the highlighted area shows there were over 3,200 individual SQL statements executed!
Using the in-memory cache provider produced the following results; the highlighted area shows only around 1,700 individual SQL statements executed: still high, but a lot better.
So when caching is enabled for the service, the number of calls to the database drops by almost half (from over 3,200 statements down to around 1,700).
This translates to a time saving of approximately 25% according to the log file:
The log file gives the impression that the data load times aren't that bad, but this is on a dev machine hosting the database locally, so the times are still high for the amount of data being loaded. What's interesting is the performance on the client network, where they are seeing average load times of 20 seconds - WTF!
This is simply down to the remote nature of the database and the quality of the network infrastructure; the client is happy with the current performance, so the code is 'good enough'.
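The in-memory implementation can be little more than a pass-through to MemoryCache.Default; a minimal sketch, assuming the class name MemoryCacheProvider:

```csharp
using System;
using System.Runtime.Caching;

// Thin pass-through to the framework's default MemoryCache instance.
// Note: MemoryCache does not support cache regions, so callers must
// pass null for regionName or the framework will throw.
public class MemoryCacheProvider : ICacheProvider
{
    private readonly MemoryCache _cache = MemoryCache.Default;

    public bool Add(CacheItem item, CacheItemPolicy policy)
    {
        return _cache.Add(item, policy);
    }

    public bool Add(string key, object value, DateTimeOffset absoluteExpiration, string regionName)
    {
        return _cache.Add(key, value, absoluteExpiration, regionName);
    }

    public bool Add(string key, object value, CacheItemPolicy policy, string regionName)
    {
        return _cache.Add(key, value, policy, regionName);
    }

    public object AddOrGetExisting(string key, object value, CacheItemPolicy policy, string regionName)
    {
        return _cache.AddOrGetExisting(key, value, policy, regionName);
    }

    public object AddOrGetExisting(string key, object value, DateTimeOffset absoluteExpiration, string regionName)
    {
        return _cache.AddOrGetExisting(key, value, absoluteExpiration, regionName);
    }

    public CacheItem AddOrGetExisting(CacheItem item, CacheItemPolicy policy)
    {
        return _cache.AddOrGetExisting(item, policy);
    }

    public object Get(string key, string regionName)
    {
        return _cache.Get(key, regionName);
    }

    public CacheItem GetCacheItem(string key, string regionName)
    {
        return _cache.GetCacheItem(key, regionName);
    }
}
```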
For completeness, the ICacheProvider interface in full:

public interface ICacheProvider
{
    bool Add(CacheItem item, CacheItemPolicy policy);
    object AddOrGetExisting(string key, object value, CacheItemPolicy policy, string regionName);
    bool Add(string key, object value, DateTimeOffset absoluteExpiration, string regionName);
    bool Add(string key, object value, CacheItemPolicy policy, string regionName);
    CacheItem AddOrGetExisting(CacheItem item, CacheItemPolicy policy);
    object AddOrGetExisting(string key, object value, DateTimeOffset absoluteExpiration, string regionName);
    object Get(string key, string regionName);
    CacheItem GetCacheItem(string key, string regionName);
}

Normally the application is configured to use the in-memory cache provider via the DI setting in the boot-strapper (we used NInject for the IoC):
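The original binding snippet isn't shown, but a Ninject registration along these lines would do it (the module and provider class names are my assumptions):

```csharp
using Ninject.Modules;

// Hypothetical bootstrapper module: MemoryCacheProvider and
// NullCacheProvider are assumed implementation class names.
public class ServiceModule : NinjectModule
{
    public override void Load()
    {
        // Normal configuration: one shared in-memory cache for the app.
        Bind<ICacheProvider>().To<MemoryCacheProvider>().InSingletonScope();

        // For the baseline test run, swap in the null provider instead:
        // Bind<ICacheProvider>().To<NullCacheProvider>().InSingletonScope();
    }
}
```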