Vulnerabilities exploited by Stuxnet
Read Alun Jones’ post here.
Unpatched vulnerability in IE7
If you’re running Windows 7, you’re okay. If you’re running IE8 on an earlier version of Windows, you’re also okay.
Otherwise, I recommend using Microsoft’s Fix it link for enabling Data Execution Prevention (DEP) in Internet Explorer (choose the one on the left) until a patch is released.
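As an aside, if you want a quick sanity check on where the machine stands with DEP overall (this is not a replacement for the Fix it, just a check; the value mapping below is from the Win32_OperatingSystem documentation as I recall it):
REM Show the system-wide DEP policy (0=AlwaysOff, 1=AlwaysOn, 2=OptIn, 3=OptOut)
wmic OS Get DataExecutionPrevention_SupportPolicy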
More info can be found here.
Good resources for understanding Vista app compat
For starters, see Chris Jackson’s blog. A recent post is here. That post is specifically about 64-bit issues, but it also gives a hint as to the general terminology to be aware of when it comes to application compatibility on Windows Vista.
Other resources:
- Application Compatibility Toolkit 5.0 documentation and download
- Application Compatibility and UAC on TechNet
- Windows Vista Application Compatibility on MSDN
MS08-067 and the ripple effect of Windows security bugs
I was surprised to find an email from one of our outsourced service providers in my inbox two days ago, saying that they had to do emergency maintenance on their servers. Specifically, to take them offline and install the patch for MS08-067, a wormable RPC vulnerability in the Windows Server service.
The patch was deemed by Microsoft to be worthy of out-of-band release. Based on what I’ve read about it, I applaud that decision. It’s a severe bug. Waiting until November to publicly release the patch would have been a bad idea.
A certain amount of chaos ensues when such a patch is released. For example, the service I mentioned above was down with relatively short notice – and I’m paying for it regardless. But that outage was handled professionally.
As another example of chaos, this eWeek article includes a suggestion by a security professional that organizations bypass their internal testing process and just deploy the patch immediately to all affected servers. That’s bad advice. After all, the notes accompanying the patch explain how the threat can also be mitigated via a firewall. And if the patch were to cause a compatibility problem, what good is a broken server?
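For organizations that do need breathing room, the firewall mitigation is straightforward. Here’s a rough sketch using the Vista/Server 2008 firewall syntax (older systems would use the legacy netsh firewall context, and you’d normally do this at the network edge rather than on the file servers themselves):
REM Block inbound SMB until the MS08-067 update has been tested and deployed
netsh advfirewall firewall add rule name="Block SMB pending MS08-067" dir=in action=block protocol=TCP localport=139,445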
Another example: do a web search on MS08-067 and take a look at some of the copies of the original bulletin that appear. Not all of them are complete, and most of them lack links to additional authoritative information. Incomplete, or even inaccurate, information spreads like wildfire on the internet.
The chaos, as well as the replication of incomplete information, is happening for a reason: lots of companies, and millions of users, are dependent upon Windows in some way. Service providers and news organizations are trying to keep up.
Millions of dollars in commerce, and probably much more than that, is dependent upon Windows. Whether it’s direct access to critical line-of-business applications, something indirect like hoping that your bank’s network doesn’t crash before you cash your paycheck, or even something mundane like checking internet email from home (or blogging; that probably falls into the mundane category as well), most people in industrialized countries are affected by Windows, good or bad.
This is a tremendous amount of responsibility. I used to work at Microsoft and I know what that feels like.
Thus, I think it’s fair to ask what’s being done to prevent problems like MS08-067 from happening in the first place. Frankly, the question didn’t even occur to me until I read this blog post from Michael Howard. It’s an informative post, and I especially recommend reading it if you have a development background.
However, in light of the responsibility mentioned above, which must be borne by Microsoft, as well as the cost paid by the industry in testing and deploying each new patch, the response laid out in Michael’s blog post is inadequate. Microsoft is not doing enough to prevent this problem from recurring.
I’ll summarize a few points made in that post: first, that it’s difficult to design automated tools that can catch the kind of buffer overflow bug that led to this bulletin. It’s not stated whether such tools exist elsewhere, but it is stated that Microsoft’s tools can’t do it. I accept this claim at face value, but there’s more to be said. I’ll come back to this.
Second, the observation is made that security features in Windows Vista and Server 2008 mitigate, although don’t eliminate, the threat. My observation: the patch still needs to be installed on those systems. Plus, the deployed base is still predominantly Windows XP SP2 and earlier on the client, and Windows Server 2003 and earlier on the server. So I don’t find those comments to be relevant. While the new security features point to a positive trend from a technology perspective, the blog post doesn’t explain what’s being done to reduce the impact of these bugs, as well as of the patches themselves, on Microsoft’s customers. How is TCO being reduced in this area?
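As an aside on the deployment side, a quick way to check whether a given box already has the fix (which shipped as KB958644, if memory serves) is to query the installed updates:
REM List installed updates and look for the MS08-067 fix
wmic qfe get HotFixID,InstalledOn | findstr 958644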
Third, the claim is made that Windows Vista, as well as Microsoft’s Security Development Lifecycle process, came out as winners (I’m paraphrasing). That’s true from a certain perspective. After all, the catastrophe scenario of another widespread internet worm was probably averted. But in light of the observations above, this claim strikes me as insensitive to customer perception.
Finally, the one action item, so to speak, accepted by the blog post on Microsoft’s behalf is to do a better job of fuzz testing (aka fuzzing). Here’s my concern, though: fuzzing is a non-deterministic technique. Is that really the best Microsoft can do?
This brings me back to the first point regarding automation tools. The timing of this patch, coinciding with Microsoft’s earnings announcement, is … awkward. The company netted well over $4 billion this quarter. Think about that, then consider, again, the impact of each security bug and each out-of-band patch on the bottom line of each of Microsoft’s millions of customers, due to downtime, servicing, and testing.
Microsoft must do a better job of reducing TCO. Making a significant, new investment in proactively and deterministically finding and eliminating security bugs should be a key pillar in their strategy for doing so. I can’t and don’t accept that a company with that kind of profits can’t do better than updating their fuzz testing heuristics.
Port-specific connection security rules that require a health certificate
Are you one of those people who, like me, thought that couldn’t be done? Well, read on, because it can!
What am I talking about? I wanted to create an IPsec policy that requires a health certificate. That is, one that requires the IPsec peer to present a valid certificate which includes the System Health Authentication OID (used by NAP). Since that capability isn’t supported by the old IP Security Policies snap-in, I needed one of the new Connection Security Rules (that is, the new rule type included in the Vista and Server 2008 firewall).
But I also wanted that rule to be port-specific. While that capability is supported by the legacy IP Security snap-in, it’s not exposed by the Connection Security Rules GUI. Lame.
However, the underlying rules engine supports that combination, and the capabilities are exposed by the netsh.exe command-line. Cutting to the chase, here’s an example:
netsh.exe advfirewall consec add rule name=HRweb-Secure endpoint1=10.0.0.3 endpoint2=10.0.0.2 action=requireinrequireout port1=any port2=8000 protocol=tcp auth1=computercert auth1ca="DC=LOCAL, DC=NORTHWIND, CN=NORTHWIND-NORTHWINDDC-CA" auth1healthcert=yes
In summary, that command creates a new connection security rule with the following characteristics:
- The rule applies to traffic exchanged between two IPs, 10.0.0.3 and 10.0.0.2.
- Authentication is required on inbound and outbound traffic.
- The rule applies to traffic originating from any port, but only when destined for port 8000.
- Finally, both parties must present valid certificates issued by the specified CA, and the certs must contain the health OID.
As an aside, regarding the operating environment, the x.2 machine is a demo web server and x.3 is the client. But keep in mind that IPsec views them as peers.
Important caveat: that rule only gives you integrity, not privacy. That is, the resulting traffic is authenticated and carries a cryptographic checksum, but it’s not encrypted. As I said, this is for a web server, and TLS is being used for encryption. Why bother with IPsec? The health OID! By requiring it, I’m assured that any machine hitting the demo web site has been deemed compliant, based on the current network health policies.
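One more practical note: if you want to double-check what the rule actually contains, or remove it after the demo, the same netsh context handles both:
REM Display the rule to verify endpoints, ports, and authentication settings
netsh advfirewall consec show rule name=HRweb-Secure
REM Remove the rule when the demo is over
netsh advfirewall consec delete rule name=HRweb-Secure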
Windows 7 to ship three years after Vista RTM?
I just ran across this CNET interview with Steven Sinofsky, the new head of the Windows division, while researching my previous post.
The three-year delta is interesting for a couple of reasons. First, Sinofsky came from the Office team, which prides itself on a rigid regimen of feature planning and two-year ship cycles. They’ve done a good job.
Thus, I thought the whole point of his new job was to do the exact same thing in Windows. Not that I thought he was going to be able to pull it off, but he was at least going to try, by way of draconian measures, to ship Win 7 within two years. At some point in the past 18 months, he apparently gave up.
Second, the fact that they’ve already announced a three-year ship cycle implies that it’s likely to end up 30% longer than that. That’s a long time to let their flagship product languish, in the form of Vista, on the market. Again, a smarter strategy would have been to really push for a two-year cycle. And then do it again. That way, we’d have had Win 8 by the time we’re actually going to have Win 7.
My guest FCS blog post
Check out my guest post for the Forefront Client Security team on their TechNet blog. I talk about the implementation of the Forefront plug-in for Network Access Protection, how the wire traffic looks, and some security tradeoffs.
New Windows Filtering Platform landing page
See this new site – http://www.microsoft.com/whdc/device/network/default.mspx, and specifically the WFP section about half-way down the page. It includes a link to this WFP sample (http://blogs.msdn.com/onoj/archive/2007/05/09/windows-filtering-platform-sample.aspx), produced last year by JW Secure!
When is a credential provider not enough?
Wanted to blog about a recent question relating to custom multi-factor authentication solutions on Vista (and subsequent versions, including Server 2008). The question basically boils down to what you can do by implementing a credential provider (credprov), versus implementing both a credprov and a custom SSPI package.
The interface between credprov and winlogon is such that the latter is expecting to receive SSPI authentication information from the former. In other words, whatever you return has to be digestible by LsaLogonUser (or whatever the latest variant of that function is called).
So in order to add new authentication mechanisms, I’m aware of the following two approaches:
The first is to create a separate credential store consumed by the plug-in credprov. The purpose of the store is to map the proprietary credential into a username/password (and maybe domain name). Thus the user enters a custom cred into the credprov, the credprov verifies it, maps it to a standard Windows password (which, on the plus side, can be long/complex), and returns the password info to winlogon.
The second is to extend SSPI to handle a new credential type. This is the most work, but also the most powerful, since it allows you to actually manage accounts directly from a custom repository (i.e., somewhere other than Active Directory or the local SAM).
For example, the biometric solutions currently on the market integrate with AD. They also typically include some sort of provisioning console. So when you enroll a new user via that console, it not only creates the account in the fingerprint database, but also the account in AD with a big random password. Thus, once enrolled in the system, a user can logon to any machine joined to that domain. But they don’t require a custom SSP package (there are exceptions).
If you choose the more complex route, note that the association between the user credential, the credprov, and the SSP package starts with whichever credential tile the user chooses. If they choose one labeled “Dan’s Custom Provider,” the data gets routed to my credprov. My credprov knows the credential format recognized by my SSP. The rest is handled by the Negotiate provider in SSPI; it queries the other packages until it finds one that recognizes that cred blob.
Smart card logon in Vista is a “partial” example of the above, since there is a smart card credential provider plug-in that ships with Vista. However, smart card logon requests are handled by the Kerberos SSP.
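One last practical detail, regardless of which route you take: the credprov itself is just an in-process COM object that the logon UI discovers through the registry. Here’s a rough sketch of the registration (the CLSID, DLL name, and friendly name below are placeholders, not a real provider):
REM Register the COM server for the credential provider (placeholder CLSID and DLL path)
reg add "HKCR\CLSID\{11111111-2222-3333-4444-555555555555}\InprocServer32" /ve /d "C:\Windows\System32\MyCredProv.dll" /f
reg add "HKCR\CLSID\{11111111-2222-3333-4444-555555555555}\InprocServer32" /v ThreadingModel /d Apartment /f
REM Tell the logon UI that the provider exists
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers\{11111111-2222-3333-4444-555555555555}" /ve /d "MyCredProv" /f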