Microsoft patterns & practices Library, January 2006

The block keeps the state of the backing store and the in-memory cache synchronized. Each cache item is stored in the in-memory hash table and carries the following information:
- the data that needs to be cached
- a key that represents the cached data
- a scavenging priority
- expiration policies
- a RefreshAction object that can be used to refresh an expired item in the cache
The in-memory hash table provides a locking strategy when adding new items, so that an item is added only if it is not already found in the hash table.
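The item layout and the locked add-if-absent strategy described above can be sketched in standard C++. This is a simplified analogue, not the block's actual .NET types; all names here are illustrative.

```cpp
#include <chrono>
#include <mutex>
#include <string>
#include <unordered_map>

// Illustrative rendering of the information a cache item carries.
enum class ScavengingPriority { Low, Normal, High, NotRemovable };

struct CacheItem {
    std::string key;                 // key that represents the cached data
    std::string data;                // data that needs to be cached
    ScavengingPriority priority = ScavengingPriority::Normal;
    std::chrono::steady_clock::time_point lastAccessed =
        std::chrono::steady_clock::now();
    // Expiration policies and a refresh action would also live here.
};

// In-memory hash table with a locking strategy: the item is added only
// if the key is not already present.
class InMemoryCache {
public:
    bool AddIfAbsent(const CacheItem& item) {
        std::lock_guard<std::mutex> guard(mutex_);
        return items_.emplace(item.key, item).second;  // false if key exists
    }
    bool Contains(const std::string& key) {
        std::lock_guard<std::mutex> guard(mutex_);
        return items_.count(key) != 0;
    }
private:
    std::mutex mutex_;
    std::unordered_map<std::string, CacheItem> items_;
};
```

Because the lock is taken per operation, a concurrent add of a duplicate key can never corrupt the table; the second add simply reports failure.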

Steps in Configuring the Caching Application Block

Configuring the Caching Application Block involves three steps:
1. Add the Caching Application Block to your application configuration.
2. Create a cache manager for each set of data to be cached.
3. Designate one cache manager as the default.

The following figure represents a configured Caching Application Block.

This will activate the Enterprise Library Configuration Manager. The application configuration file, typically named App.config or Web.config, is where the caching-related settings are stored; a new application configuration file can be created or an existing one opened. Right-click the application root node, point to New, and click Caching Application Block Configuration.

This generates a Caching subtree. This subnode can be configured to add multiple instances of the Cache Manager, with each instance referencing an in-memory cache, a database, or isolated storage. Each Cache Manager instance has the following attributes:
- ExpirationPollFrequencyInSeconds: the frequency, in seconds, of the timer that regulates how often the BackgroundScheduler checks for expired items. The minimum is 1 second and the default is 60 seconds.
- MaximumElementsInCacheBeforeScavenging: the maximum number of elements that can be in the cache before scavenging begins. The default is 10 elements.

Each cache manager can be configured either to store data only in memory, or to store data both in memory and in persistent storage.

Note: Because isolated storage is always segregated by user, server applications must impersonate the user making the request to the application.

Isolated storage is appropriate in the following situations:
- Persistent storage is required and the number of users is small.
- The overhead of using a database is significant, or no database facility exists.

Scenarios where isolated storage should not be used:
- Do not use isolated storage to store high-value secrets, such as unencrypted keys or passwords.
- Do not use isolated storage to store configuration and deployment settings, which administrators control.

Note the following defaults and behaviors:
- The default scavenging priority is Normal, and items have no expiration by default.
- Adding a second item with the same key as an existing item replaces the existing item.
- When the block is configured to use a persistent backing store, objects added to the cache must be serializable.

Flushing the Cache

The Flush method removes all items from the cache, and the Remove method removes a single item by its key.

Loading the Cache

There are two methods you can use for loading data: proactive loading and reactive loading.

Proactive loading retrieves all the required data and caches it for the lifetime of the application.
Advantages:
- Application performance improves because cache operations are optimized.
- Application response times improve because all the data is already cached.
Disadvantages:
- It does not make the most optimized use of memory, because much of the cached state may never be required.
- The implementation can turn out to be more complex than conventional techniques.

Reactive loading.

This method retrieves data only when requested by the application and then caches it for future requests.
Advantages:
- System resources are not misused.
- It results in an optimized caching system, because only requested items are stored.
Disadvantages:
- A check must be made on every request to ensure that the item is still in the cache.

Expiration Policies

The Caching Application Block's expiration process is performed by the BackgroundScheduler. The expiration policies are as follows.

Time-based expirations. You should use time-based expiration when volatile cache items, such as those that have regular data refreshes or those that are valid for only a set amount of time, are stored in a cache.

Time-based expirations are of three types: absolute, sliding, and extended format.

Absolute. Allows you to define the lifetime of an item by specifying the absolute time for the item to expire:
- Simple: you define the lifetime of an item by setting a specific date and time for the item to expire.
- Extended: you define the lifetime of an item by specifying expressions such as every minute, every Sunday, or expire at a given time on the 15th of every month.

Sliding. Allows you to define the lifetime of an item by specifying the interval between the item being accessed and the policy defining it as expired. This means the item expires after the specified time has elapsed since the item was last accessed.

Extended format. This allows you to be very detailed about when an item expires.

For example, you can specify that an item expire at a specific time every Saturday night, or on the third Tuesday of the month.
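The absolute and sliding policies described above reduce to two simple predicates. The following is a minimal standard-C++ sketch of those two rules, with illustrative names rather than the block's actual API:

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// Absolute: the item expires at a fixed point in time.
struct AbsoluteTime {
    Clock::time_point expiresAt;
    bool HasExpired(Clock::time_point now, Clock::time_point /*lastAccess*/) const {
        return now >= expiresAt;
    }
};

// Sliding: the item expires once the configured interval has elapsed
// since the item was last accessed.
struct SlidingTime {
    Clock::duration window;
    bool HasExpired(Clock::time_point now, Clock::time_point lastAccess) const {
        return now - lastAccess >= window;
    }
};
```

Note that only the sliding policy consults the last-access time; touching an item resets its remaining lifetime, which is exactly the difference between the two rules.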

The available extended formats are listed in the ExtendedFormat class.

Dependency-based expiration. If a dependency changes, the cached item is invalidated and removed from the cache. With a file dependency, the item expires after a specific file has been modified.

The following aspects of expiration can be configured:
- Removal of expired items occurs on a background thread.
- You can set the frequency with which this thread runs to look for expired items.
- You can set the count of cached items to remove during the scavenging process.

The BackgroundScheduler performs a major sort on scavenging priority and a minor sort on the last time each item was accessed, and scavenging is done in a single pass.

Expiration is a two-part process:
- Marking. A copy of the hash table is made, and every CacheItem is checked for expiry. If an item is to be expired, it is flagged.
- Sweeping. Every flagged item is checked to see whether it has been accessed in the meantime. If it has been accessed, it is kept in the cache; if not, it is removed.

The concrete implementation must ensure that the backing store remains intact and functional when any operation that accesses the backing store causes an exception.
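The two-part mark-and-sweep process above can be sketched as follows. This is a simplified standard-C++ analogue, assuming expiry is a plain timestamp; the function and field names are hypothetical.

```cpp
#include <chrono>
#include <string>
#include <unordered_map>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Item {
    Clock::time_point expiresAt;
    Clock::time_point lastAccessed;
};

// Marking: work over a snapshot of the table and flag expired keys.
std::vector<std::string> Mark(const std::unordered_map<std::string, Item>& table,
                              Clock::time_point now) {
    auto snapshot = table;  // a copy of the hash table is made
    std::vector<std::string> flagged;
    for (const auto& [key, item] : snapshot)
        if (now >= item.expiresAt) flagged.push_back(key);
    return flagged;
}

// Sweeping: remove a flagged item only if it has not been accessed
// since it was marked.
void Sweep(std::unordered_map<std::string, Item>& table,
           const std::vector<std::string>& flagged,
           Clock::time_point markTime) {
    for (const auto& key : flagged) {
        auto it = table.find(key);
        if (it != table.end() && it->second.lastAccessed < markTime)
            table.erase(it);  // not touched in the meantime: remove
    }
}
```

Working from a snapshot keeps marking cheap under concurrency, and the re-check during sweeping is what keeps an item alive if it was accessed between the two phases.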

Adding a New Expiration Policy

The following interfaces need to be implemented if you need to provide custom expiration policies:
- ICacheItemExpiration: represents an application-defined rule governing how and when a CacheItem object can expire.
- ICacheItemRefreshAction: refreshes an expired cache item.
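The extension point is easiest to see with a tiny example. The real block defines these as .NET interfaces; the following is a hypothetical C++ rendering with illustrative member names, plus one custom policy that expires an item after it has been read a fixed number of times:

```cpp
// Hypothetical C++ rendering of an application-defined expiration rule.
class ICacheItemExpiration {
public:
    virtual ~ICacheItemExpiration() = default;
    // The rule: has this item expired?
    virtual bool HasExpired() const = 0;
    // Called when the item is touched, so stateful policies (such as
    // sliding time) can update themselves.
    virtual void Notify() = 0;
};

// A custom policy: expire after the item has been read N times.
class ReadCountExpiration : public ICacheItemExpiration {
public:
    explicit ReadCountExpiration(int maxReads) : maxReads_(maxReads) {}
    bool HasExpired() const override { return reads_ >= maxReads_; }
    void Notify() override { ++reads_; }
private:
    int maxReads_;
    int reads_ = 0;
};
```

The cache only ever talks to the abstract interface, so any rule the application defines, time-based or otherwise, plugs in the same way.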

NOTE: If you want to add new features to the application block, you can do so by modifying the source code; the installer includes both the source code and the binaries.

Instrumenting the Caching Application Block

The Caching Application Block also incorporates the following instrumentation: Caching Application Block performance counters.

The Caching Application Block records key metrics by writing to Microsoft Windows operating system performance counters:
- One counter shows the number of entries in the cache.
- One counter shows the number of cache hits per second.
- One counter shows the number of cache misses per second.
- Cache Hit Ratio shows the ratio of hits to all cache calls.

Cache Total Turnover Rate. This performance counter shows the number of additions to and removals from the cache per second. A separate notification signifies that an internal failure has occurred; it includes the string property ConfigurationFilePath, which contains the path of the main configuration file.

When parallel work performs a blocking operation, the runtime can schedule unnecessary work during that operation, which leads to decreased performance. Accordingly, when you perform a blocking operation before you cancel parallel work, the blocking operation can delay the call to cancel.

This causes other tasks to perform unnecessary work. When the predicate function returns true, the parallel work function creates an Answer object and cancels the overall task; the new operator performs a heap allocation, which might block. The following example shows how to prevent unnecessary work and thereby improve performance: it cancels the task group before it allocates the storage for the Answer object.
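The original example uses the Windows-specific PPL task_group; the ordering point can be shown with a portable sketch that stands a std::atomic flag in for task-group cancellation. The names here are illustrative:

```cpp
#include <atomic>
#include <memory>
#include <string>

struct Answer { std::string value; };

std::atomic<bool> cancelled{false};  // stands in for task_group cancellation

// Sibling workers poll this flag and stop early once it is set.
bool ShouldStop() { return cancelled.load(); }

// Order matters: signal cancellation first so sibling tasks stop doing
// unnecessary work, then perform the potentially blocking heap allocation.
std::unique_ptr<Answer> FoundAnswer(const std::string& value) {
    cancelled.store(true);                           // 1. cancel the overall work
    return std::make_unique<Answer>(Answer{value});  // 2. allocate afterwards
}
```

If the allocation came first and blocked, every sibling task would keep running through the entire blocking window before learning it should stop.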

These data structures are useful in many cases, for example, when multiple tasks infrequently require shared access to a resource. This example can also lead to poor performance because the frequent locking operation effectively serializes the loop. In addition, when a Concurrency Runtime object performs a blocking operation, the scheduler might create an additional thread to perform other work while the first thread waits for data.

If the runtime creates many threads because many tasks are waiting for shared data, the application can perform poorly or enter a low-resource state. The PPL defines the concurrency::combinable class, which helps you eliminate shared state by providing access to shared resources in a lock-free manner. The combinable class provides thread-local storage that lets you perform fine-grained computations and then merge those computations into a final result.

You can think of a combinable object as a reduction variable. This example scales because each thread holds its own local copy of the sum. This example uses the concurrency::combinable::combine method to merge the local computations into the final result.
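Since concurrency::combinable is Windows-specific, the pattern it implements — lock-free per-thread accumulation followed by a combine step — can be sketched with standard threads. This is a simplified analogue, not the combinable API itself:

```cpp
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

// Each thread accumulates into its own slot (no locks, no shared writes),
// and the partial results are merged once all threads have finished:
// the "combine" step of the reduction.
long long ParallelSum(const std::vector<int>& data, int threadCount) {
    std::vector<long long> partial(threadCount, 0);  // one "local" per thread
    std::vector<std::thread> threads;
    const size_t chunk = (data.size() + threadCount - 1) / threadCount;
    for (int t = 0; t < threadCount; ++t) {
        threads.emplace_back([&, t] {
            const size_t begin = t * chunk;
            const size_t end = std::min(data.size(), begin + chunk);
            for (size_t i = begin; i < end; ++i)
                partial[t] += data[i];               // lock-free local update
        });
    }
    for (auto& th : threads) th.join();
    // Merge the thread-local results into the final value.
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```

A real combinable also keeps each thread's local value on its own cache line; the adjacent vector slots here could still exhibit the false sharing discussed next.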

For the complete version of this example, see How to: Use combinable to Improve Performance. For more information about the combinable class, see Parallel Containers and Objects.

False sharing occurs when multiple concurrent tasks that are running on separate processors write to variables that are located on the same cache line. When one task writes to one of the variables, the cache line for both variables is invalidated. Each processor must reload the cache line every time that the cache line is invalidated.

Therefore, false sharing can cause decreased performance in your application. The following basic example shows two concurrent tasks that each increment a shared counter variable. To eliminate the sharing of data between the two tasks, you can modify the example to use two counter variables.

This example computes the final counter value after the tasks finish. However, this example illustrates false sharing because the variables count1 and count2 are likely to be located on the same cache line. One way to eliminate false sharing is to make sure that the counter variables are on separate cache lines. The following example aligns the variables count1 and count2 on cache-line-size boundaries.
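The alignment fix can be sketched as follows, assuming a 64-byte cache line (typical for x86); the struct and function names are illustrative:

```cpp
#include <thread>

// Place each counter on its own cache line so concurrent writers do
// not repeatedly invalidate each other's line. 64 bytes is an assumed
// cache-line size.
struct Counters {
    alignas(64) long count1 = 0;
    alignas(64) long count2 = 0;
};

// Two concurrent tasks, each incrementing its own counter.
void IncrementBoth(Counters& c, int iterations) {
    std::thread t1([&] { for (int i = 0; i < iterations; ++i) ++c.count1; });
    std::thread t2([&] { for (int i = 0; i < iterations; ++i) ++c.count2; });
    t1.join();
    t2.join();
}
```

The behavior is unchanged; only the memory layout differs, so each processor keeps its cache line valid throughout the loop.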

We recommend that you use the concurrency::combinable class when you must share data among tasks. The combinable class creates thread-local variables in such a way that false sharing is less likely. When you provide a lambda expression to a task group or parallel algorithm, the capture clause specifies whether the body of the lambda expression accesses variables in the enclosing scope by value or by reference. When you pass variables to a lambda expression by reference, you must guarantee that the lifetime of that variable persists until the task finishes.

Depending on the requirements of your application, you can use one of the following techniques to guarantee that variables remain valid throughout the lifetime of every task. The following example passes the object variable by value to the task; therefore, the task operates on its own copy of the variable.

In a similar way, you may have services that your application uses repeatedly, such as an e-mail message sending service or a data transformation service.

Dependency injection can act as a service-location facility to help your application retrieve an instance (either a new instance or an existing instance) of the service at run time. Each of these examples effectively describes a dependency of one part of the application on another, and resolving these dependencies in a way that does not tightly couple the objects is the aim of the Dependency Inversion principle.

Although the principles of Dependency Inversion have been around for a long time, features to help developers implement it in applications running on the Microsoft platform are relatively recent. In fact, there is a story that a renowned developer in the Java world, when visiting the Microsoft campus, remarked that the general belief was that nobody at Microsoft could spell "Dependency Injection."

A quick glance at our home page on MSDN will illustrate the broad range of assets we provide. Among these assets are several products that make use of the Dependency Injection pattern, including Enterprise Library, composite application frameworks, and software factories. During development of these assets, in particular the original Composite Application Block (CAB), it became clear that a reusable and highly configurable dependency injection mechanism was required, and so the team built the original version of Object Builder.

However, it is quite difficult to use. It requires a great many parameters that take complex objects, and it exposes a range of events that you must handle to apply the configuration you require.

Initial attempts to document Object Builder as part of the CAB project soon revealed that this was going to be an uphill task. In addition, Object Builder was rather more than a dependency injection container, and it seemed overkill in terms of the common requirements for implementing the DI and IoC patterns. During the development of Enterprise Library 4.0, it was fine-tuned for use in the first major dependency injection mechanism from Microsoft aimed squarely at developers who want to implement the DI and IoC patterns.

Object Builder is the foundation for Unity, a lightweight, extensible dependency injection container that supports constructor injection, property injection, and method call injection. Unity provides capabilities for simplified object creation, especially for hierarchical object structures and dependencies; abstraction of requirements at run time or through configuration; simplified management of crosscutting concerns; and increased flexibility by deferring component configuration to the container.

It has a service location capability and allows clients to store or cache the container, even in ASP.NET Web applications. Unity has also continued to evolve while remaining backward compatible; you can use it to enable features within Enterprise Library, as well as use it as a stand-alone DI container. In the most recent release, it offers facilities to implement instance and type interception through a plug-in extension that allows implementations of Aspect Oriented Programming techniques such as policy injection.

Unity has also spawned other DI container implementations aimed at specific tasks and requirements, such as an extremely lightweight implementation designed for use in mobile devices and smart phones.

Meanwhile, planned future developments in the Unity and Enterprise Library arena include features to open up Enterprise Library to other third party container mechanisms, while providing additional extensions that enable new capabilities for Unity.

Leaving this historical distraction and returning to the hypothetical application, how can you apply the Dependency Inversion principle to achieve the aims, discussed earlier, of separation of concerns, abstraction, and loose coupling?

The answer is to configure a dependency injection container, such as Unity, with the appropriate types and type mappings and allow the application to retrieve and inject instances of the appropriate objects at run time. Figure 2 illustrates how you can use the Unity application block to implement this container. In this case, you populate the container with type mappings between interface definitions for the data components and logging components, and the specific concrete implementations of these interfaces that you want the application to use.

Figure 2 Dependency injection can select the appropriate components at run time based on configuration of the container.

At run time, the business layer will query the container to retrieve an instance of the correct data layer component, depending on its current mapping. The data layer will then query the container to obtain an instance of the appropriate logging component, depending on the mapping stored for that interface type. As an alternative, the data and logging components may inherit from respective base classes, and registrations in the container can map between these base types and the inheriting concrete types.

This container-driven approach to resolving types and instances means that the developer is free to change the implementations for the data and logging components, as long as these implementations provide the required functionality and expose the appropriate interface (for example, by implementing the mapped interface or inheriting from the mapped base class). The container configuration may be set in code at run time using methods of the container that register types, type mappings, or existing instances of objects.

Alternatively, you can populate the container by loading the registrations from a configuration source or a file, such as the web.config file. When you want to register more than one instance of a type, you can use a name to define each one and then resolve the different registrations by specifying the name.

The registration can also specify the lifetime of the object, making it easy to achieve service-location-style capabilities by registering the service object as a singleton or with a specific lifetime, such as per-thread. The following code example shows some examples of registering type mappings with the container.

Note: The code examples reference classes and types using just the class name. You can use type alias definitions within the configuration file to alias the fully qualified type names of classes, which simplifies container registration when you're using a configuration file.

To retrieve an instance of an object, you simply query the container by specifying the type (the interface type or the base class type) and the name (if you registered the type using a name), as shown in the next example.
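The original Unity code samples were not preserved in this extraction. As a language-neutral illustration of the register/resolve idea (emphatically not Unity's actual API), the following hand-rolled C++ container maps an interface type, plus an optional name, to a factory for a concrete implementation; all type and member names are hypothetical:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <typeindex>
#include <utility>

// Minimal container sketch: a type mapping associates an interface
// (plus an optional registration name) with a concrete-type factory.
class Container {
public:
    template <typename Interface>
    void RegisterType(std::function<std::shared_ptr<Interface>()> factory,
                      const std::string& name = "") {
        factories_[{std::type_index(typeid(Interface)), name}] =
            [factory] { return std::static_pointer_cast<void>(factory()); };
    }

    template <typename Interface>
    std::shared_ptr<Interface> Resolve(const std::string& name = "") {
        auto fn = factories_.at({std::type_index(typeid(Interface)), name});
        return std::static_pointer_cast<Interface>(fn());
    }

private:
    std::map<std::pair<std::type_index, std::string>,
             std::function<std::shared_ptr<void>()>> factories_;
};

// Illustrative types standing in for the logging components.
struct ILogger { virtual ~ILogger() = default; virtual std::string Name() = 0; };
struct FileLogger : ILogger { std::string Name() override { return "file"; } };
struct DbLogger : ILogger { std::string Name() override { return "db"; } };
```

The calling code depends only on ILogger; swapping the registered factory changes the concrete type resolved at run time, which is the loose coupling the article describes.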


