General Microsoft Solutions
Microsoft has released to manufacturing its Windows Thin PC client, and plans to make it available to Software Assurance customers starting July 1, company officials said on June 7.
The Wall Street Journal reported tonight that Microsoft, in what would be its most aggressive acquisition in the digital space, was zeroing in on buying Skype for $8.5 billion.
Sources told BoomTown tonight that the deal for the online telephony giant is actually done and will be announced early tomorrow morning.
Introduction to Virtual Processors
Windows Server 2008 Hyper-V supports up to four virtual processors per virtual machine and lets you configure how those virtual processors share the physical processors across virtual machines. This article describes the options for processor resource control and how to use them.
Hyper-V uses physical processors and cores to provide virtual processors to virtual machines. Each virtual machine starts with a single virtual processor, but you can increase that to 2 or 4 virtual processors per virtual machine. Virtual processors are actually threads in the parent partition running on a physical processor. As each virtual machine is powered on, a separate thread is created for each virtual processor in the virtual machine. Each thread can be scheduled by the Virtual Machine Manager (VMM) on a separate physical core or processor.
Configuring the number of processors in a virtual machine is done from the virtual machine settings dialog. While you can view the number of virtual processors when the virtual machine is running, you cannot change the number of available processors until the virtual machine is powered off. Figure 1 shows the settings dialog with the Processor hardware node selected; on the right-hand side you can select the number of processors, in this case 1 or 2. The choices are determined by the number of cores available in the parent partition. To be given the option of 4 processors in a virtual machine, the parent partition must have 4 cores (or 4 processors if they are not multi-core).
Virtualization allows you to oversubscribe the processing limits of the physical hardware. I might have only 4 cores on the physical server but create and run more than 4 virtual machines. Each virtual machine shares the processing resources of the parent partition, but with no limits imposed a single virtual machine could consume an entire core in the machine. For example, if I allocate a virtual machine 1 processor on a machine that has a single quad-core processor, that virtual machine has one thread that can consume the equivalent of an entire core of processing time. If it has 2 processors configured, it can consume two cores. If you configure the virtual machine with 4 virtual processors, it could attempt to consume all the processing power of the server and starve the other virtual machines.
The Virtual Machine Manager (VMM) manages the scheduling of threads for all the running virtual machines. By default it attempts to balance the processing evenly across all the cores in the physical machine to get a load-balanced processor distribution. As discussed above, it is possible for a virtual machine to consume entire processor cores and starve other threads running on a core. While Hyper-V does not have processor affinity (the ability for the admin to specify the processor core a thread runs on), you can tell the VMM to apply resource control on a per-virtual-machine basis and set limits on a virtual machine's ability to starve other virtual machines on a single core.
Hyper-V accomplishes resource control in three ways:
Setting a reserve on processing resources using a percentage
Setting a maximum on processing resources using a percentage
Setting a relative weight of the virtual machine to others in the system
Virtual Machine Reserve
The virtual machine reserve allows you to specify the percentage of the assigned virtual processor that this virtual machine will be guaranteed on the physical host. This value can range from 1-100% and is relative to the number of processors assigned to the virtual machine. For example, if a physical host has 4 cores and you assign a single processor to a virtual machine, that virtual machine can potentially consume up to an entire processor core of processing time, but it is not guaranteed that the processing time is available at all times.
By setting the reserve value to 100%, a virtual machine will be reserved the equivalent of an entire processor core. If that virtual machine sits idle at 10% most of the time, the other 90% of the processing time is still unavailable to any other virtual machine. Using reserve resource control will limit the amount of available virtual processor resources that can be shared on a Hyper-V host and therefore limit the number of concurrent virtual machines you can power on. If you have 20 virtual machines configured with a single processor on a host with 4 cores and you have each of them set to a reserve of 100%, you can only power on 4 virtual machines.
Reservation should only be used if you want to guarantee a virtual machine's processing power. This is typically used on virtual machines that you know will require lots of processing power and have spikes where having a guarantee is extremely important.
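To make the reserve arithmetic concrete, here is a minimal C++ sketch with purely illustrative values. It assumes, as described above, that the reserve percentage is relative to the number of virtual processors assigned to the virtual machine, and computes the host-wide share a reserve translates to plus how many identically configured virtual machines can be powered on.

// Sketch: reserve arithmetic for Hyper-V processor resource control.
// All values are illustrative; the reserve is assumed to be relative to
// the number of virtual processors assigned to the VM, as described above.
#include <iostream>

int main()
{
    const int hostCores      = 4;    // logical processors in the parent partition
    const int vmVirtualProcs = 1;    // virtual processors assigned to the VM
    const int reservePercent = 100;  // virtual machine reserve (percentage)

    // Host-wide share of processing this reserve guarantees to the VM.
    double hostPercent = static_cast<double>(reservePercent) * vmVirtualProcs / hostCores;

    // Identically configured VMs that can be powered on before the reserves
    // exhaust the host (ignoring memory and disk constraints).
    int maxConcurrentVms = static_cast<int>(100.0 / hostPercent);

    std::cout << "Each VM reserves " << hostPercent << "% of the host; "
              << maxConcurrentVms << " such VMs can be powered on.\n";
    // With a 100% reserve per single-vCPU VM on a 4-core host: 25% each, 4 VMs maximum.
    return 0;
}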
Virtual Machine Limit
Virtual machine limits are the opposite of the virtual machine reserve: they allow you to specify the maximum amount of processing power that a virtual machine can consume. This value can range from 1-100% and is relative to the number of processors assigned to the virtual machine. For example, if a physical host has 4 cores and you assign a single processor to a virtual machine, that virtual machine can potentially consume up to an entire processor core of processing time; with no limit set, the virtual machine can consume the entire core. By setting the limit value to 10%, that virtual machine will be limited to a maximum of 10% of an entire processor core.
If that virtual machine has a spike in processing, it will be limited to no more than 10% of the core and therefore will suffer in performance. Using limit resource control caps the amount of virtual processor resources that can be consumed on a Hyper-V host, letting you control the number of concurrent virtual machines you can power on while defining the amount of processing power per virtual machine. If you have 20 virtual machines configured with a single processor on a host with 4 cores and you have each of them set to a limit of 10%, you have only consumed the equivalent of two processor cores and can still power on another 20 virtual machines (assuming you have enough memory and disk resources). This also means that if you power on only a single virtual machine, it can never consume more than 10% of a single core, so you are limiting the performance of the virtual machine even when you have excess processing capacity.
Processor limits should only be used if you want to limit a virtual machine's processing power. This is typically used by web service providers who want to maximize the number of virtual machines on a host while providing a specific level of service.
Virtual Machine Relative Weight
Relative weight allows you to specify that a virtual machine has processing priority without applying a specific limit or reserve. This value can range from 0-10000. Relative weight is used to determine who should get processing resources when multiple requests are being made at the same time. For example, by default if you have 4 virtual machines running, all 4 get an equal share of the available processing power because they all have the same relative weight. If you have a machine that is more important than the others and want to give that virtual machine's requests for processing power priority over the others, you can assign that VM a higher weight than the other virtual machines. This means that if a virtual machine with a higher weight needs resources, it gets them, but if it is not using them, other virtual machines can use them. This is all still limited by the number of processors assigned to the virtual machine: if you assign a virtual machine a single virtual processor and a high relative weight, it can still only consume a maximum of a single core.
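To illustrate how weights play out, here is a small sketch that assumes CPU time under contention is divided roughly in proportion to relative weight; the exact scheduler behavior may differ, so treat the numbers as an approximation rather than a specification.

// Sketch: approximate CPU shares for VMs competing for the same cores,
// assuming contended CPU time is divided in proportion to relative weight
// (a simplification; the real scheduler behavior may differ).
#include <cstdio>

int main()
{
    const int weights[] = { 100, 100, 100, 200 };  // relative weights; 100 is the default
    const int vmCount   = sizeof(weights) / sizeof(weights[0]);

    int total = 0;
    for (int i = 0; i < vmCount; ++i) total += weights[i];

    for (int i = 0; i < vmCount; ++i)
    {
        double share = 100.0 * weights[i] / total;
        printf("VM%d (weight %d) gets roughly %.1f%% of contended CPU time\n",
               i + 1, weights[i], share);
    }
    // With no contention, any VM may still use up to the capacity of its
    // assigned virtual processors, regardless of weight.
    return 0;
}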
Reinstall the Distributed Transaction Coordinator (MSDTC) with the following steps (Win2k/WinXP/Vista):
Net stop msdtc
Delete registry keys:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSDTC]
Net start msdtc
More Info MS KB Q240038
More Info MS KB Q279786
More Info MS KB Q873160
Reset Recovery Log
The Distributed Transaction Coordinator (MSDTC) uses the Msdtc.log file (by default in %SystemRoot%\System32\DTCLog) for storing transaction-related recovery information along with all other MS DTC recovery information (WinNT4 also uses Dtcxatm.log).
If the location of the log file is faulty (it does not exist or permissions are missing) or MSDTC.LOG is corrupted, the service will fail to start and you will see errors such as:
Event ID : 7024
Source : Service Control Manager
Description: The MSDTC service terminated with service specific error 3221229584.
Event ID : 4163
Source : MSDTC
Description: MS DTC log file not found. After ensuring that all Resource Managers coordinated by MS DTC have no indoubt transactions, please run msdtc -resetlog to create the log file.
More Info MS KB Q205069
To reset the log file:
If possible, start the computer in safe mode.
Open the %SystemRoot%\System32\DTCLog folder (unless you have changed the default location).
If a Msdtc.log file exists in the folder, rename it to Msdtc.old.
Use Notepad to save an empty file as Msdtc.log in the folder.
Open a CMD prompt and type: msdtc -resetlog and press Enter.
Fix corrupted COM+ catalog
The Distributed Transaction Coordinator might fail to perform properly if the COM+ Catalog has become corrupted.
A year ago today, I was in New York City at the official launch of Windows 7. After a long public beta, and with the released code widely available months earlier, there wasn’t much left to unveil at that point, except for an impressive collection of PCs from OEM partners designed for the new operating system. Most of the Microsoft employees I talked to that day seemed relaxed and genuinely confident. A year later, that confidence is still there. Windows 7 is still selling like gangbusters and the public seems pleased. Back in August, I said: “Windows 7 has been a quiet success, maybe even a phenomenon.” That’s still true.
In my original review, I called Windows 7 “as close to an essential upgrade as I have ever seen,” and I predicted that it would improve with age. A year later, I can already see many of those improvements.
From the standpoint of stability and reliability, Windows 7 has exceeded expectations. The hardware ecosystem was ready, after having been burned badly by Vista, and the Windows Core team did a good job of responding to issues in Windows Vista and Windows Server 2008. With this release, Microsoft might have finally silenced the “Never buy till the first service pack” skeptics. Windows Vista Service Pack 1 was released almost exactly a year after Vista’s consumer launch, and it was desperately needed. Microsoft says it doesn’t plan to finish Windows 7 SP1 until sometime in the first half of next year. That doesn’t seem to bother customers, who have been buying Windows 7 at a rate of 657,000 copies a day over the past year.
One of the biggest under-the-radar improvements to Windows 7 in the past year is the release of Windows Live Essentials 2011. Some reviewers have grumbled about design decisions Microsoft made with the apps in this collection—especially the changes to Messenger—but there’s no question these are full-featured programs, not wimpy starter editions. Photo Gallery is particularly impressive with its extensive set of features for importing, managing, editing, and sharing photos. I don’t think it’s any accident that Apple spent the lion’s share of its time this week on detailed demos of its competing apps in iLife ‘11. I’m looking forward to comparing the two suites when my iLife upgrade arrives in the mail (amazingly, Apple doesn’t offer any way to buy and download iLife).
Even a year later, I continue to be surprised that Windows 7 is so much more efficient than Windows Vista. It uses less disk space than Vista and outperforms it across the board, even on relatively modest hardware.
In the missed-opportunities category, Microsoft deserves special mention for its inability to capitalize on its long history of developing Windows for tablets. Although Windows 7 fully supports touchscreens, the OS itself isn’t well suited for full-time operation with a fingertip. I have three touch-enabled PCs in this house—two all-in-one desktop PCs and a Dell Tablet PC. The touch features feel like a novelty, and I rarely use them. I’m pretty certain that smart people in Redmond are working to make touch features a more natural part of Windows 8, but we’re unlikely to see any of those efforts for at least another year, giving iOS and Android tablets an awfully big head start.
I continue to be amazed and impressed with Windows Media Center. Last week I upgraded our living room Media Center PC with a Ceton InfiniTV tuner, which uses a single CableCARD to tune up to four HD cable channels. (I’ll have a more detailed look at that system next week.) The Media Center interface is fluid and elegant, easily more usable than any alternative, including TiVo, and the whole system has been a joy to use. My sources in Redmond tell me, however, that the Media Center team was essentially disbanded after Windows 7 shipped. I hope that Microsoft is planning a Windows 8 Media Center that will be capable of going head to head with Apple and Google’s TV offerings. If they let that work go to waste, it will be another tremendous missed opportunity.
In the year after Windows Vista was released, I spent an unfortunate amount of time and energy writing posts about how to tweak, tune, and work around its flaws and usability headaches. What I’ve enjoyed most about the last year has been not having to do the same for Windows 7. No, it’s not perfect, but it’s very, very good. Microsoft seems to have figured out, finally, that the best way to design great software is to focus on the user’s experience, not just check off items on a feature list.
If Microsoft follows the playbook and the three-year development cycle it used so successfully for this release, the first beta of Windows 8 will appear roughly a year from now. In fact, the window for feedback that will actually influence the design of the next Windows version is closing soon. What are the flaws in Windows 7 that you want to see addressed? What features are at the top of your must-add list? Leave your comments in the Talkback section.
IIS caches everything it can to save CPU cycles wherever possible. IIS6 had a user-mode file cache, token cache, URI cache and metadata cache, and the kernel-mode http.sys response cache. These caches are mostly unchanged in IIS 7.0, other than the following changes I can think of:
- The static compression module disables kernel caching of the response if static compression is enabled for the request but the client requested an uncompressed response. This makes sure that only the compressed response is cached in the kernel.
- You might see a few changes in performance counters because the URI cache module maintains additional pointers to cached file and metadata objects, which saves some hashtable lookups.
Native output cache is the new user-mode response cache added in IIS7. This module provides functionality similar to that of the managed output cache module in ASP.NET. The functionality of this module can be controlled by editing the system.webServer/caching section or by using the IHttpCachePolicy intrinsic. The following properties can be set in the system.webServer/caching section.
- enabled – This property tells whether output caching is enabled for this URL. If it is disabled, the output cache module won’t do anything in the ResolveRequestCache and UpdateRequestCache stages. Setting enabled to true doesn’t ensure response caching; some module must still set the user cache policy.
- enableKernelCache – Controls whether kernel caching is enabled for this URL. The output cache module calls IHttpResponse::DisableKernelCache if this property is set to false. The output cache module does its kernel caching work in the SendResponse stage if no one called DisableKernelCache in the pipeline. Setting enableKernelCache to true doesn’t ensure kernel caching of the response; some module must set the kernel cache policy.
- maxCacheSize – Maximum size of the output cache in MB. A value of 0 means the maximum cache size is calculated automatically; we use half of available physical memory or available virtual memory, whichever is less.
- maxResponseSize – Maximum size in bytes of a response that can be stored in the output cache. 0 means no limit.
Although you can set maxCacheSize and maxResponseSize for a URL, the output cache module uses only the values set at the root level. If per-application-pool properties are added in the future, these will become configurable for each application pool. If output caching is enabled, you can control its behavior for different file types by adding profiles for different file extensions. These profiles make the output cache module populate the IHttpCachePolicy intrinsic, which enables user/kernel caching of the response. The properties that can be set in a profile are similar to the ones available for system.web/caching/outputCacheSettings profiles. The following properties are allowed for system.webServer/caching profiles:
- extension – E.g. ".asp", ".htm", etc. * is used as the wildcard entry. If a profile for a particular extension is not found, the profile for extension * will be used if present.
- policy – Can be DontCache | CacheUntilChange | CacheForTimePeriod | DisableCache (server only). The output cache module changes the IHttpCachePolicy intrinsic depending on the value of this property. DontCache means the intrinsic is not set, but that doesn’t prevent other modules from setting it and enabling caching. On the server we have added the DisableCache option, which makes sure the response is not cached even if some other module sets the policy telling the output cache module to cache the response.
- kernelCachePolicy – Can be DontCache | CacheUntilChange | CacheForTimePeriod | DisableCache (server only). As above, DontCache doesn’t prevent other modules from setting the kernel cache policy. For static files, the static file handler sets the kernel cache policy, which enables kernel caching of the response. On the server, the DisableCache option makes sure the response doesn’t get cached in the kernel.
- duration – The duration property is used only when policy or kernelCachePolicy is set to CacheForTimePeriod.
- location – Sets the Cache-Control response header for client caching. The Cache-Control response header is set depending on the value of this property as follows:
Any | Downstream – public
ServerAndClient | Client – private
None | Server – no-cache
- varyByHeaders – Comma-separated list of request headers. Multiple responses to requests having different values of these headers will be stored in the cache. You might be returning different responses based on the Accept-Language, User-Agent or Accept-Encoding header; all of these responses will get cached in memory.
- varyByQueryString – Comma-separated query string variables. Multiple responses get cached if query string variable values differ across requests. On the server you can set varyByQueryString to "*" (star), which makes the output cache module cache a separate response whenever any query string variable value is different.
The varyBy and location properties are used by the user-mode cache only; they have no effect on kernel caching, so if policy is set to DontCache these properties are not used. To make the output cache module cache, for 30 minutes, multiple responses from an .asp page that returns different responses based on the value of the query string variable "action" and on the request header "User-Agent", the caching section will look like:
<caching enabled="true">
  <profiles>
    <add extension=".asp" policy="CacheForTimePeriod" duration="00:30:00" varyByQueryString="action" varyByHeaders="User-Agent" />
  </profiles>
</caching>
The output cache module populates the IHttpCachePolicy intrinsic in the BeginRequest stage if a matching profile is found. Other modules can still change the cache policy for the current request, which might change user-mode or kernel-mode caching behavior. Output cache caches 200 responses to GET requests only. If some module has already flushed the response by the time the request reaches the UpdateRequestCache stage, or if headers are suppressed, the response is not cached by the output cache module. The output cache module caches the response only if some other module hasn’t already cached it, as indicated by IHttpCachePolicy::SetIsCached. Also, caching happens only for frequently hit content. The definition of frequently hit content is controlled by the frequentHitThreshold and frequentHitTimePeriod properties defined in the system.webServer/serverRuntime section. The default values define frequently hit content as content that is requested at least twice in 10 seconds.
More details on IHttpCachePolicy coming in a future post.
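In the meantime, here is a minimal, illustrative sketch of how a native module might set the user cache policy through the IHttpCachePolicy intrinsic, roughly equivalent to a profile with policy CacheForTimePeriod, a 30-minute duration, and varyByHeaders set to User-Agent. It assumes the standard httpserv.h interfaces mentioned above (IHttpResponse::GetCachePolicy, IHttpCachePolicy::GetUserCachePolicy, IHttpCachePolicy::AppendVaryByHeader); module registration and the class factory are omitted.

// Illustrative native module: requests user-mode output caching of the
// current response for 30 minutes, varied by the User-Agent header.
// Module registration (RegisterModule) and the module factory are omitted.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <httpserv.h>

class CCachePolicySketchModule : public CHttpModule
{
public:
    REQUEST_NOTIFICATION_STATUS
    OnBeginRequest(IHttpContext* pHttpContext, IHttpEventProvider* /*pProvider*/)
    {
        IHttpCachePolicy* pPolicy = pHttpContext->GetResponse()->GetCachePolicy();
        if (pPolicy != NULL)
        {
            // Roughly what a profile with policy="CacheForTimePeriod" and
            // duration="00:30:00" asks the output cache module to do.
            HTTP_CACHE_POLICY* pUserPolicy = pPolicy->GetUserCachePolicy();
            pUserPolicy->Policy        = HttpCachePolicyTimeToLive;
            pUserPolicy->SecondsToLive = 30 * 60;

            // Roughly equivalent to varyByHeaders="User-Agent".
            pPolicy->AppendVaryByHeader("User-Agent");
        }
        return RQ_NOTIFICATION_CONTINUE;
    }
};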
To some people it comes as a surprise that the AWE mechanism is still present and can actually be useful on 64-bit platforms. As you remember, the mechanism consists of two parts: allocating physical memory and mapping it into the given process's VAS. The advantage of the allocation mechanism is that once physical memory is allocated, the operating system can't reclaim it until either the process is terminated or the process frees the memory back to the OS. This feature allows an application to control and even avoid paging altogether. The advantage of the mapping/unmapping mechanism is that the same physical page can be mapped into different regions of the VAS. As you can imagine, unmapping is not necessary on 64-bit platforms since we have enough VAS to accommodate all existing physical memory.
From operating system theory, the OS implements a page table entry, PTE, to describe the mapping of a page in the VAS to a physical page. Internally, a physical page is described by a page frame number, PFN. Given a PFN, one can derive complete information about the physical page it represents; for example, the PFN shows to which NUMA node the particular page belongs. The OS has a database, a collection of PFNs, that it manages. If a page in the VAS is committed, it has a PTE which might or might not point to a given PFN. Conceptually, the page that a PTE represents can be either in memory or not, for example swapped out to disk. In the former case it is bound to a given PFN and in the latter it is not. In its turn, once a physical page is bound to a page in the VAS, its PFN points back to the PTE.
When the OS commits, frees, or pages out/in a given PTE, or needs to derive some information about it, for example NUMA residency, it has to acquire the process's working set lock to guarantee the stability of the PTE-to-PFN binding. This lock is rather expensive and might hurt the scalability of the process. Later versions of Windows made this lock as light as possible, but avoiding it will still benefit an application's scalability.
When allocating physical pages using the AWE mechanism, we are given a set of PFN entries directly from the PFN database; remember that you should not manipulate or modify the set of entries you get back, nor can you rely on the values you get back. The OS is required to take a PFN database lock when allocating PFN entries. Using the AWE map mechanism you can map allocated PFN entries into the process's VAS. When mapping occurs, PTEs are allocated, bound to PFNs and marked as locked. In this case the OS needs to acquire the process's working set lock only once. When mapping regular pages, the OS does it on demand and hence has to acquire both the working set lock and the PFN database lock for every page. Since the pages are locked in memory, the OS will ignore these PTEs during the paging process.
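For reference, here is a stripped-down sketch of the allocate/map sequence described above, using the Win32 AWE APIs. Enabling the "Lock pages in memory" privilege (SeLockMemoryPrivilege), which AllocateUserPhysicalPages requires, is omitted, as is most error handling.

// Sketch: allocating physical pages with AWE and mapping them into the
// process's VAS. Privilege setup and most error handling are omitted.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    const SIZE_T pageSize = si.dwPageSize;

    // 1. Allocate physical pages; the PFN entries handed back must be
    //    treated as opaque values.
    ULONG_PTR pageCount = 256;  // 1 MB worth of 4 KB pages
    ULONG_PTR* pfns = new ULONG_PTR[pageCount];
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns))
    {
        printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
        return 1;
    }

    // 2. Reserve a VAS region that can hold AWE mappings.
    void* region = VirtualAlloc(NULL, pageCount * pageSize,
                                MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

    // 3. Map the physical pages into the region. The PTEs are created,
    //    bound to the PFNs and locked, so the pages are ignored by paging.
    MapUserPhysicalPages(region, pageCount, pfns);

    // ... use the memory at 'region' ...

    // Unmap (pass NULL for the PFN array), release the region and free the pages.
    MapUserPhysicalPages(region, pageCount, NULL);
    VirtualFree(region, 0, MEM_RELEASE);
    FreeUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns);
    delete[] pfns;
    return 0;
}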
On 64-bit platforms it is better to refer to such pages as locked pages; please don't confuse them with pages locked through the VirtualLock API. As described above, locked pages have two major properties: they are not considered for paging by the OS, and during allocation they acquire both the working set and PFN database locks only once.
The first property has an important implication on high-end hardware such as NUMA systems: it provides explicit memory residency. Remember that the OS commits a page on demand. To allocate the physical memory, it will use the node on which the thread touching the memory is running. Later on, the page can be swapped out by the OS. The next time it is brought back into memory, the OS will again allocate a physical page from the node the thread touching the memory is running on; in this case the node could be completely different from the original one. Such behavior makes it hard for applications to rely on a page's NUMA residency. Locked pages solve this problem by removing themselves from paging altogether. Moreover, Windows 2003 SP1 introduced a new API, QueryWorkingSetEx, which allows querying extended information about a PTE's PFN. In order to find out a page's real residency you should use this API. When pages are locked you need to do it only once; otherwise you have to do it periodically, since the residency of the page can actually change.
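Here is a small sketch of how an application could use QueryWorkingSetEx to check whether a page is resident, whether it is locked, and which NUMA node it lives on; the surrounding program structure is illustrative.

// Sketch: querying a page's residency, locked state and NUMA node with
// QueryWorkingSetEx. Link against psapi.lib.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <psapi.h>
#include <cstdio>

void ReportPage(void* address)
{
    PSAPI_WORKING_SET_EX_INFORMATION info = {};
    info.VirtualAddress = address;

    if (QueryWorkingSetEx(GetCurrentProcess(), &info, sizeof(info)))
    {
        if (info.VirtualAttributes.Valid)
        {
            printf("page %p: resident, node %u, %s\n",
                   address,
                   static_cast<unsigned>(info.VirtualAttributes.Node),
                   info.VirtualAttributes.Locked ? "locked" : "not locked");
        }
        else
        {
            // Not currently resident; for pages that are not locked the
            // residency can change, so the query has to be repeated.
            printf("page %p: not resident\n", address);
        }
    }
}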
The second property, taking both the working set and PFN database locks only once, enables applications to perform a faster and more scalable ramp-up.
On NUMA systems, SQL Server's buffer pool marks each allocated page with its node residency. It leverages QueryWorkingSetEx to accomplish this: once a page is allocated, it calls the API to find out the page's residency, and it does so only once. Therefore, enabling locked pages for SQL Server on 64-bit platforms will improve SQL Server's ramp-up time and will improve performance and scalability over a longer period of time. When running SQL Server with locked pages enabled, you shouldn't be worried about overall system performance suffering due to memory starvation: SQL Server participates in the OS's paging mechanism by listening on the OS's memory notification APIs and shrinks its working set accordingly.
Let us summarize: on 64-bit platforms, locked pages (the AWE mechanism) enable better application scalability and performance, both during ramp-up and over a long period of time. However, keep in mind that an application is still required to implement a way of responding to memory pressure to avoid starving the whole system of memory.