The case of the failing signed driver install

February 11, 2010

I was recently asked to look at a couple of support cases that had been logged where installations of our Application Manager and Performance Manager products were failing. The logs from the failed installations, obtained by invoking msiexec with the /l*vx switch, gave the following error:

(Error code 0x800B0109: A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.)

A web search for the error gave many matches, none of which really helped, so I then tried to reproduce the error in a Windows Server 2003 x86 virtual machine; the installation worked fine, as it usually does. Analysis of the msiexec log from the failing system indicated that the error occurred while installing our signed device drivers. So next I ran the great Process Monitor tool from SysInternals (now Microsoft) to try to understand what was happening, file system and registry wise, during the installation, particularly around the point where the msiexec process installs the device drivers.

What this showed me was that, immediately before our driver catalog (.cat) file was read, the “State” registry value in the following key was being read:

HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing

Given the error text from the failed installation, this looked relevant. A quick web search threw up a number of interesting articles, namely:

http://msdn.microsoft.com/en-us/library/aa388201(VS.85).aspx

and

http://blogs.msdn.com/spatdsg/archive/2006/06/05/618082.aspx

which led me to try changing the “State” value in the registry in my test VM from the default of 0x23c00 to 0x40000 (WTPF_ALLOWONLYPERTRUST, as per the MSDN link above and the wintrust.h header file, so effectively much more restrictive than the default).
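Since “State” is a bit mask built from the WTPF_* flags, a quick way to see what a given value allows is to test individual bits. The sketch below does exactly that; the flag constants are copied from my reading of wintrust.h and are worth double-checking against your own SDK headers:

```python
# A selection of WTPF_* flag values as defined in wintrust.h
# (verify against your SDK copy before relying on them)
WTPF_OFFLINEOK_IND        = 0x00000400
WTPF_OFFLINEOK_COM        = 0x00000800
WTPF_OFFLINEOKNBU_IND     = 0x00001000
WTPF_OFFLINEOKNBU_COM     = 0x00002000
WTPF_IGNOREREVOCATIONONTS = 0x00020000
WTPF_ALLOWONLYPERTRUST    = 0x00040000

def has_flag(state: int, flag: int) -> bool:
    """Return True if the given WTPF_* flag is set in the State value."""
    return state & flag == flag

DEFAULT_STATE = 0x23C00   # the default value on the systems I checked
BROKEN_STATE  = 0x40000   # the restrictive value that reproduced the failure

# The default State does not include the restrictive flag...
print(has_flag(DEFAULT_STATE, WTPF_ALLOWONLYPERTRUST))   # False
# ...whereas the failing value does, so only explicitly per-trusted
# publishers are accepted and our driver catalog is rejected.
print(has_flag(BROKEN_STATE, WTPF_ALLOWONLYPERTRUST))    # True
```

Decomposing 0x23c00 against the constants above suggests the default is the combination of the offline-OK flags plus WTPF_IGNOREREVOCATIONONTS, which is consistent with the much more permissive default behaviour.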

Retrying the previously successful installation in my test VM then gave exactly the same error that our customers had been experiencing. When I passed this information on, both customers confirmed that their “State” registry values were either not at the default or were missing entirely (because the parent key was absent), and that setting the “State” value back to the default allowed the drivers to install successfully.

Case(s) solved! But it leaves me wanting to know what caused this to happen, particularly as two cases from different customers were logged so close together, and I stopped believing in coincidences many years ago. That is the main reason for blogging about this issue: I hope that, by the power of search engine indexing, others who suffer this issue will be brought here and have their problem solved.

Guy Leech

10th Feb 2010




What’s the risk in your Desktop Strategy?

September 24, 2009

As we progress with our desktop strategies it is becoming clear that there are common themes competing with each other for priority. What is most important to your strategy will ultimately decide which type of desktop delivery stack you choose. There is no wrong or right answer, regardless of what the one-hit wonder desktop virtualization vendors may say. Their opinion will always lead you to their vision of desktop virtualization, regardless of what is truly important for your organization. The answers that will lead you on your right path will be a trade-off and prioritization of:

  • Lower TCO: Lower hardware & management costs and a standardized desktop
  • Higher Productivity: High availability, high performance, low maintenance
  • Security: Greater control of data, configuration and malware
  • User Acceptance: Use case driven for flexibility, productivity and unique desktop experience
  • Risk: Trade off the risk of an unproven delivery mechanism against perceived benefits

When organizations consider user acceptance they will ultimately consider it as the trade-off against lower TCO and security. Because the new desktop virtualization vendors are also marketing their products with the user in mind, they are offering some degree of user personalization with the promise of lower TCO and a more secure computing environment. However, following this seemingly alluring path is also fraught with risk. The term “one-hit wonder” refers to the fact that these vendors provide the whole desktop virtualization delivery mechanism. They may only do one thing well, but you have to take the whole stack with little option to swap components out. This makes these vendors a single point of failure in your whole desktop virtualization stack. Also, new and unproven technology usually requires significantly more support. Enterprises with thousands of users require a lot of support regardless of the maturity of the technology, and start-up vendors are unlikely to have a support organization that won’t strain under this type of pressure. This risk will surely restrict the roll-out to only where the benefit is seen as absolutely essential. That will not lower TCO, which is only really realized with a homogeneous desktop delivery mechanism, not a heterogeneous one where each use case has a completely different desktop delivery stack.

The key is to create a desktop delivery mechanism that suits all of your use cases and achieves the Lower TCO, gains the higher productivity and ensures security. Risk can be handled by creating a desktop delivery mechanism based on mature technology, proven enterprise level vendors and best of breed solutions that have been designed to work with multiple desktop delivery technologies. But to achieve this and satisfy all of the use cases will require a way to ensure user acceptance by task worker, knowledge worker and mobile worker alike.

The answer is strikingly simple, thankfully.

Choose the desktop delivery mechanism that suits your priorities and avoid trading off user acceptance by using a best of breed User Environment Management solution that can work with both your existing desktop delivery mechanism and your planned desktop strategy, regardless of whether it is homogeneous or heterogeneous, physical or virtual.

AppSense User Environment Management products, which provide the ability for users to create their own unique and productive desktop experience with personalizations, user data and user-installed applications, are a perfect example of how user acceptance can be achieved over any desktop delivery mechanism. More to the point, the key to avoiding the trade-off is providing a solution that helps IT manage the user personality. This is why granular policy management is so important. With AppSense, IT makes the decisions as to the user’s entitlement to personalize and roam without fear of loss of data, applications and personalizations. AppSense also seamlessly automates the usually painstaking aspects of migrating a user’s unique desktop experience through a Windows upgrade. A single best of breed User Environment Management solution, a single user personality, any desktop delivery mechanism. IT in control of it all. It couldn’t be simpler.



How To Guide: Streaming Microsoft Office with Citrix XenApp 5 – Best Practice Guide & Licensing Overview

August 27, 2009

Citrix Technology Professional (CTP) Alexander Ervik Johnsen has written a very useful piece on how to profile and stream Microsoft Office 2007 using Citrix XenApp 5.0.

This is a great guide and covers how to stream Office to a desktop, or into a Citrix XenDesktop session. His article and guide can be found on his website here.

Further to the actual process of profiling and streaming the Office application, I also want to ensure everyone is aware of the Microsoft Per Device Licensing Model for Server Hosted Applications.

Many Microsoft applications, including Microsoft Office™, Project™ and Visio™, are licensed on a per-device basis. This means a desktop application license is required for each and every device that could potentially access the application, or the server where the application is installed, regardless of whether a user actually runs the application or not. This makes licensing Microsoft applications in virtual environments a tricky, potentially very costly, and misunderstood subject.

One misconception is that by ‘publishing’ or ‘streaming’ applications to a limited “user” group, an organization is compliant with the Microsoft license agreement – in other words, that Microsoft licenses these applications per user. This is in fact a breach of the Microsoft licensing model, and can lead to legal action.

I have written a blog post, which also includes official Microsoft-approved whitepapers, on how to control and enforce application access and license compliance on a per-device basis in such virtual environments; that post can be found here.

In addition to helping ensure compliance, effective license control and management can also reduce Microsoft License requirements and associated costs – more information on this can be found here.

If anyone has any questions or comments, as always, please do let me know.

Thanks
Gareth


NEW FEATURE No. 3 – AppSense Environment Manager 8.0 Service Pack 2 – Improved compression and data handling protocol

August 26, 2009

This is the third installment in a series of posts about the new features and options in AppSense Version 8 Service Pack 2. (If you have not yet downloaded this latest release, you can read more info and download it from here.)

AppSense Environment Manager 8.0 Service Pack 2 introduces a new protocol for transferring data between the endpoint device and the server database which holds all the user personalization settings.

The change means the Personalization Server has to do far less processing to insert or extract the required data from the database, and can therefore support many more users with even faster response times.

Part of this change is to store the user’s personalization data in a compressed format in the database, which means the required database footprint is a lot smaller (in some cases by a factor of 10).
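The actual on-disk format used by the Personalization Server isn’t documented publicly, but the size reduction is easy to believe: profile and registry-style data is full of long, repeated key paths, which is exactly what general-purpose compression exploits. A minimal, purely illustrative sketch (the key paths below are made up):

```python
import zlib

# Profile/registry-style data is highly repetitive (long key paths,
# repeated value names), which DEFLATE-style compression exploits.
settings = "\n".join(
    f"HKCU\\Software\\SomeVendor\\SomeApp\\Settings\\Item{i}=Value{i % 7}"
    for i in range(1000)
)

raw = settings.encode("utf-8")
packed = zlib.compress(raw, level=9)

ratio = len(raw) / len(packed)
print(f"{len(raw)} bytes -> {len(packed)} bytes (ratio {ratio:.1f}x)")
```

The exact ratio depends entirely on the data, so treat the “factor of 10” figure above as something observed in real deployments rather than a guarantee.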

Internal performance tests yielded the following results:

  • 87.5% increase in performance scalability between version 8.0 and 8.0 SP2.
  • 45.0% increase in performance scalability between version 8.0 SP1 and 8.0 SP2.

Note: On upgrade to Service Pack 2, User Personalization data is still in the old protocol format. This data is upgraded to the new format, in the database, on demand as applications are used, and as such will incur a small performance hit on first launch. However, once all endpoints are upgraded to Service Pack 2 and all data in the database has been upgraded, the performance of User Personalization will be much higher than in previous releases and scalability will be dramatically improved.

As always, if you have any questions or require any further information, please do get in touch.

P.S.
As this is an ever growing blog topic, the previous posts on the other new features we have detailed can be found below:

NEW FEATURE No. 1 – AppSense Environment Manager 8.0 Service Pack 2 – Run As

NEW FEATURE No. 2 – AppSense Environment Manager 8.0 Service Pack 2 – Connect As

NEW FEATURE No. 3 – AppSense Environment Manager 8.0 Service Pack 2 – Improved compression and data handling protocol

NEW FEATURE No. 4 – AppSense Environment Manager 8.0 Service Pack 2 – Manipulation of files in Personalization Analysis

NEW FEATURE No. 5 – AppSense Environment Manager 8.0 Service Pack 2 – Run Once

NEW FEATURE No. 6 – AppSense Environment Manager 8.0 Service Pack 2 – Group SID Refresh

NEW FEATURE No. 7 – AppSense Environment Manager 8.0 Service Pack 2 – Trigger Action Time Audit Event

NEW FEATURE No. 8 – AppSense Environment Manager 8.0 Service Pack 2 – Stop If Fails

NEW FEATURE No. 9 – AppSense Environment Manager 8.0 Service Pack 2 – New Application Categories in the User Interface

NEW FEATURE No. 10 – AppSense Environment Manager 8.0 Service Pack 2 – Refresh

NEW FEATURE No. 11 – AppSense Environment Manager 8.0 Service Pack 2 – Registry Hive Exclusions


My login’s too cold – it’s not all about TS and VDI

August 17, 2009

Goldilocks was hard at work managing “Three Bears Industries”. She added a default printer here, mapped a network drive there, and sorted out a few group policy settings. All in a day’s work for the overworked, underpaid IT administrator.

She heard a noise at the front door – “The Bears are back!!!” she exclaimed, and slipped quietly out through the back door and on to her next client.

“My Login makes me tooo hot – hot and bothered from waiting!!!!!” yelled Papa Bear. “My Login makes me toooo cold – I feel like hibernating!” grumbled Mama Bear.

Baby Bear looked at his parents with big blue eyes and said “My Login just sucks!!” You gotta love kids, they always say what they feel. But then, that’s the harsh reality in thousands of organizations – logins suck!!

“Three Bears Industries” needs AppSense.

“But isn’t AppSense only useful in those environments? Why do I need AppSense if I’m a fat client site?” Think about it: if AppSense provides value in VDI and TS, then why would it NOT provide value on a real physical desktop?

In two weeks’ time, I start a rollout at a site that saw value in AppSense at the desktop level – around 3,000 desktops, to be precise. AppSense has hundreds of desktop sites around the world – managing profiles, security and performance with our software.

These guys went through our ROI process a couple of months back. We found they were losing around 80 man-hours per DAY while users sat around waiting to log in – that’s 10 people every day they were paying for nothing. We also found the Helpdesk staff were spending around 300 hours per month fixing profile issues.

I introduced them to ENVIROMAN – looking very Borat-like in his bright green budgie smugglers. He showed them a couple of quick demos – rollback of personalization settings, streamed application settings from desktop to desktop – and the rest is history. Thank you ENVIROMAN – your subscription to “Geek Monthly” is in the mail :-)

But seriously guys, check out our value on the desktop – your wallet will thank you.


Fair-Sharing CPU usage on Citrix XenApp to ensure Quality of Service and Faster Response Times (With AppSense Performance Manager)

July 23, 2009

The Server Based Computing (SBC) model, exemplified by Microsoft Terminal Server and Citrix XenApp, offers many unique challenges for both architects and administrators. There are concerns of security, availability of resources, performance, and the costs of hardware, licensing and ongoing management. Due primarily to the opportunities for cost reduction in hardware purchases, however, server consolidation through ‘optimizing performance’ has been the main area addressed. Fewer servers also result in lower licensing costs, lower maintenance overhead, and reduced electricity and cooling costs. However, it has now become apparent that the real issue with performance is not just financial, but one of user experience. There is an ongoing trade-off between ensuring users receive a consistent ‘end-user experience’ and maintaining the minimum amount of hardware.

 

Before considering CPU usage management and optimization, it is first necessary to clarify what CPU usage means. When a figure such as 60% CPU usage is quoted, what is actually meant is that the CPU is being utilized at 100% for 60% of the time. This shows that a high CPU value is actually an efficient use of the resource, rather than a problem.

 

A CPU utilization of 100%, while considered a problem by many, actually means maximum use is being made of this resource, and therefore maximum return on investment is being achieved. Problems occur when requests exceed 100% CPU utilization, at which point resource contention occurs and bottlenecks form – although this can be solved by efficiently allocating the resource between the users and the running applications.

AppSense Performance Manager is able to control CPU usage in many ways, and may be used not only to resolve CPU usage issues, but also to guarantee CPU resource to mission-critical applications and users. In this post I want to cover one specific feature within AppSense Performance Manager – Smart Scheduling, sometimes known as CPU Fair Sharing.

Smart Scheduling
With Microsoft operating systems, during process initialization, a priority is assigned for the process to run under. Microsoft Windows 2003 has the following priorities:-
Realtime, High, Above Normal, Normal, Below Normal, Low.

A process with a higher priority will be allocated CPU before a process with a lower priority. Most applications are given a ‘normal’ priority when launched. This means these processes form a ‘queue’ for the CPU and receive CPU time only when it is their turn. A few standard system processes are assigned a higher priority, such as ‘Windows Task Manager’, which is automatically assigned a ‘high’ priority. This ensures that, should the Task Manager process require CPU time, it is given it immediately.
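The dispatch rule described above can be sketched in a few lines: of all the ready processes, the one with the highest priority class runs next. This is an illustration of the principle only, not the actual Windows scheduler (the process names below are just examples):

```python
# Windows 2003 priority classes, ranked from highest to lowest
# (illustrative ranking only, not the kernel's internal priority numbers)
PRIORITY = {"Realtime": 5, "High": 4, "Above Normal": 3,
            "Normal": 2, "Below Normal": 1, "Low": 0}

def next_to_run(ready_processes):
    """Pick the process a priority scheduler would dispatch next:
    the ready process with the highest priority class."""
    return max(ready_processes, key=lambda p: PRIORITY[p["priority"]])

ready = [
    {"name": "winword.exe", "priority": "Normal"},
    {"name": "taskmgr.exe", "priority": "High"},   # Task Manager runs at High
    {"name": "backup.exe",  "priority": "Below Normal"},
]

print(next_to_run(ready)["name"])  # taskmgr.exe
```

This is why Task Manager stays responsive even on a busy server: whenever it becomes ready, its ‘high’ class puts it at the front of the queue.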

A process may consume a maximum amount of CPU time, known as the quantum time. If the process does not require the full quantum, for example because it is waiting for further data, it is able to release the CPU before the quantum completes, thereby allowing another process access to the CPU resource.

When an application launches, there are many calls to the disk to access new data, and the process therefore consumes only small parts of the quantum time. Processes which involve a large number of calculations, such as highly mathematical applications, tend to use all of the CPU quantum time and are therefore CPU intensive. If the timeline of processes within the CPU is mapped out, the problems become immediately apparent.

For example – below, Process A is Microsoft Word launching; Process B is a resource-intensive Microsoft Excel macro.

 

[Figure: CPU timeline – Process A running alone]

Scenario 1:- Only Process A is running. In this scenario the application performs at full speed, as it gets CPU resource when required and disk resource when required.

[Figure: CPU timeline – Process A and Process B at equal priority]

Scenario 2:- Process A and Process B running. In this scenario it is evident that Process A has been dramatically slowed down, as there is a length of time where it is waiting for Process B to relinquish the CPU. This can have dramatic effects on the responsiveness of the process, and obviously increases the time to wait. As Processes A and B have equal priorities, one process has to wait for the other to finish.

AppSense Performance Manager includes ‘smart scheduling’ technology that is able to share the CPU resource more efficiently between running applications. The priorities of the processes are altered dynamically. The result is that processes requiring a small amount of CPU time tend to be given a higher priority than those which tend to monopolize the CPU.

[Figure: CPU timeline – Process A and Process B under smart scheduling]

In this way the CPU time is divided equally amongst both processes, ensuring that each has a share and no one process hogs the CPU, causing others to wait. Process B receives much smaller chunks of CPU time, but similarly its waiting time is also short, ensuring that the application appears responsive to the user and therefore still falls within its time to wait. The effect is a greatly reduced CPU queue length.

By interpolating between standard priorities, and dynamically adjusting the priority of each process, AppSense Performance Manager is able to ensure more efficient processor usage. More importantly, from a user perspective, no application is seen to “freeze”, ensuring application responsiveness falls within the acceptable ‘time to wait’ period.

Share Factors
In the above examples we assumed both process A and B are equally important and therefore require an equal use of resources. In most cases this is not a true representation of applications and users. Servers contain both mission critical applications and also users with varying degrees of ‘importance’. By implementing share factors within AppSense Performance Manager, these applications and/or users may be given a higher or lower share of CPU time.

If Process A is a mission-critical process then it may be assigned a higher share factor. The effect of this is to raise the priority of the process so that it receives a longer period of CPU time before another process is given a higher priority.

[Figure: CPU timeline – Process B with a higher share factor]

Here Process B has been given a higher share factor, resulting in Process A waiting on Process B. The difference now is that the time Process A has to wait is a value which is configurable by the administrator. A further function of AppSense Performance Manager is the ability to apply application or system state control: an application may be defined as having a high share factor when in the foreground, a medium share factor when in the background, and a low share factor when minimized.

If we now look at things from a user’s point of view, the application they are currently working on will always be guaranteed CPU time, and can therefore always be made to function within its required time to wait, hence “performing well”. Other processes continue to receive a share of CPU time, ensuring that they also function. One key system state for SBC systems is the ‘disconnected’ state.

By assigning a relatively low share factor to the disconnected state, we can ensure that users who disconnect from the server without logging out properly do not continue to consume valuable CPU resource, which is then made available to all other users.
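The arithmetic of share factors is proportional: each process or session receives CPU time in proportion to its share of the total. A minimal sketch of that model (the share values and names below are illustrative assumptions, not product defaults):

```python
def allocate(share_factors, total_ms=1000):
    """Split a window of CPU time across processes in proportion
    to their share factors (illustrative proportional-share model)."""
    total_shares = sum(share_factors.values())
    return {name: total_ms * share / total_shares
            for name, share in share_factors.items()}

# Foreground app gets a high share, background a medium one, and a
# disconnected session a deliberately low one so it cannot hog the CPU.
shares = {"foreground_app": 6, "background_app": 3, "disconnected_session": 1}
print(allocate(shares))
# {'foreground_app': 600.0, 'background_app': 300.0, 'disconnected_session': 100.0}
```

With a 6:3:1 split, the disconnected session can never claim more than a tenth of the contended CPU, however busy its processes are – which is exactly the protection described above.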

In conclusion, Microsoft Windows operating systems go some way to addressing how applications make use of system resources, but it is clear that in situations of high resource usage, such as SBC environments, they are challenged. AppSense Performance Manager addresses these shortcomings. It provides many methods of controlling critical system resources such as CPU, memory (both physical and virtual) and disk. Its CPU scheduling algorithm ensures that even at times of maximum CPU usage the server remains responsive for each user and application. If necessary, weighting may be applied to ensure minimum response times for mission-critical applications and/or users.


AppSense Technical University Training For Partners

July 22, 2009

I am excited about writing this one – the much-awaited 2009 AppSense Technical University is soon upon us! It will take place in October and November!! Following on from our previous events, there are some exciting new developments at AppSense that we would like to share with you, amongst other topics:

  • User Introduced Applications (UIA) Technology – do we need, and how do we enable, users to install applications into non-persistent VDI sessions, and have the applications (and their settings and preferences) remain available in the next non-persistent VDI session?
  • AppSense Management Suite Version 8.1 Product RoadMap
  • ‘Policy & Personalization’ best practices across virtual and multi OS platform environments


 

Why attend the AppSense Technical University?

The AppSense University is a ‘free of charge’ event to our AppSense Certified Solution Partners, and is a great chance to meet up with the AppSense Technical teams, as well as your peers from within the community. As a valued member of our Certified Solutions Partner program, you are invited to this comprehensive technical update and networking event.

The 2-day event will include in-depth, hands-on training designed to enable you to provide consultancy services and implement the AppSense Management Suite for prospects and customers.

Register for further information

As always, AppSense is hosting several Technical University events in locations around the globe. If you are interested in attending an AppSense Technical University, click on the country or region most relevant to you and we will keep you informed of the event details:

United States, November 2009 

United Kingdom, October 2009

Norway, November 2009

DACH Region, November 2009

BeNeLux, November 2009

Australia, October/November 2009

We look forward to seeing you there!

Best Regards,

The AppSense Technical University Team.

Website: http://www.appsense.com
Email: university@appsense.com
Telephone: +44 (0)1928 793 444