The video interview can be found here.
Check out the download page here (it’s hosted on CodePlex).
The Blueprints Manager is an extension to Visual Studio that allows you to embed guidance into a kind of project template (called a Blueprint).
For example, suppose your organization builds a lot of web services and you have standard best practices and requirements for how that’s done. Some of the requirements can be delivered as boilerplate code, some are comments and documentation, and some of it comes in the form of references to other templates (an installer, for example). The Blueprint payload can include all of those things, and Blueprints are composable.
One complaint – there’s nothing listed under Source Code. I predict that Microsoft will see greater usage of tools like this when they provide the code!
Here’s an interesting service I just learned about at the PDC 2008 expo: Microsoft offers a separate tier of services and support to ISVs. By separate, I mean in addition to what you get as a Gold Partner, but not as expensive or long-term as a Microsoft Consulting Services engagement.
I was surprised to find an email from one of our outsourced service providers in my inbox two days ago, saying that they had to do emergency maintenance on their servers. Specifically, to take them offline and install the patch for MS08-067, a wormable RPC vulnerability in the Windows Server service.
The patch was deemed by Microsoft to be worthy of out-of-band release. Based on what I’ve read about it, I applaud that decision. It’s a severe bug. Waiting until November to publicly release the patch would have been a bad idea.
A certain amount of chaos ensues when such a patch is released. For example, the service I mentioned above was down with relatively short notice – and I’m paying for it regardless. But that outage was handled professionally.
As another example of chaos, this eWeek article includes a suggestion by a security professional that organizations bypass their internal testing process and just deploy the patch immediately to all affected servers. That’s bad advice. After all, the notes accompanying the patch explain how the threat can also be mitigated at the firewall – by blocking TCP ports 139 and 445, for instance. And if the patch were to cause a compatibility problem, what good is a broken server?
Another example: do a web search on MS08-067 and take a look at some of the copies of the original bulletin that appear. Not all of them are complete, and most of them lack links to additional authoritative information. Incomplete, or even inaccurate, information spreads like wildfire on the internet.
The chaos, as well as the replication of incomplete information, is happening for a reason: lots of companies, and millions of users, are dependent upon Windows in some way. Service providers and news organizations are trying to keep up.
Millions of dollars in commerce, and probably much more than that, depend upon Windows. Whether it’s direct access to critical line-of-business applications, something indirect like hoping that your bank’s network doesn’t crash before you cash your paycheck, or even something mundane like checking internet email from home (or blogging; that probably falls into the mundane category as well), most people in industrialized countries are affected by Windows, for good or bad.
This is a tremendous amount of responsibility. I used to work at Microsoft and I know what that feels like.
Thus, I think it’s fair to ask what’s being done to prevent problems like MS08-067 from happening in the first place. Frankly, the question didn’t even occur to me until I read this blog post from Michael Howard. It’s an informative post, and I especially recommend reading it if you have a development background.
However, in light of the responsibility mentioned above, which must be borne by Microsoft, as well as the cost paid by the industry in testing and deploying each new patch, the response laid out in Michael’s blog post is inadequate. Microsoft is not doing enough to prevent this problem from recurring.
I’ll summarize a few points made in that post: first, that it’s difficult to design automated tools that can catch the kind of buffer overflow bug that led to this bulletin. It’s not stated whether such tools exist elsewhere, but it is stated that Microsoft’s tools can’t do it. I accept this claim at face value, but there’s more to be said. I’ll come back to this.
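To make that first point concrete, here’s a rough sketch of the kind of bug in question. To be clear, this is a hypothetical path-canonicalization routine I made up for illustration, not the actual Windows code: the unsafe pointer walk is driven by the logical structure of the input (how many "..\" components it contains, and where), rather than by a single missing length check, and that’s part of why pattern-matching scanners have a hard time flagging it.

/*
 * Hypothetical sketch only -- NOT the MS08-067 code, just an analogous
 * path-canonicalization routine invented for illustration. It collapses
 * "..\" components in place in a fixed-size working buffer.
 */
#include <stdio.h>
#include <string.h>

static void canonicalize(const char *path)
{
    char out[64];
    char *op = out;

    for (const char *p = path; *p != '\0' && op < out + sizeof(out) - 1; ) {
        if (strncmp(p, "..\\", 3) == 0) {
            op--;                   /* step back onto the trailing '\'      */
            do {
                op--;               /* walk back over the prior component   */
            } while (*op != '\\');  /* BUG: never checked against 'out', so */
                                    /* paths with more ".." components than */
                                    /* real ones walk the pointer past the  */
                                    /* start of the buffer                  */
            op++;                   /* keep the separator                   */
            p += 3;
        } else {
            *op++ = *p++;           /* forward copy is bounded by the loop  */
        }
    }
    *op = '\0';
    printf("canonical: %s\n", out);
}

int main(void)
{
    canonicalize("\\server\\share\\docs\\..\\readme.txt");  /* well-formed input */
    /* canonicalize("..\\..\\x");  <-- walks 'op' below 'out' and corrupts     */
    /*                                 the stack; whether that happens depends */
    /*                                 entirely on the structure of the input  */
    return 0;
}

A bounds check against the start of the buffer fixes this particular sketch, but the general point stands: the corruption only shows up for particular input shapes, which is hard for an automated tool to reason about.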
Second, the observation is made that security features in Windows Vista and Server 2008 mitigate, although don’t eliminate, the threat. My observation: the patch still needs to be installed on those systems. Plus, the deployed base is still predominantly Windows XP SP2 and earlier on the client, and Windows Server 2003 and earlier on the server. So I don’t find those comments to be especially relevant. While the new security features point to a positive trend from a technology perspective, the blog post doesn’t explain what’s being done to reduce the impact of these bugs, as well as of the patches themselves, on Microsoft’s customers. How is TCO being reduced in this area?
Third, the claim is made that Windows Vista, as well as Microsoft’s Security Development Lifecycle process, came out as winners (I’m paraphrasing). That’s true from a certain perspective. After all, the catastrophe scenario of another widespread internet worm was probably averted. But in light of the observations above, this claim strikes me as insensitive to customer perception.
Finally, the one action item, so to speak, accepted by the blog post on Microsoft’s behalf is to do a better job of fuzz testing (aka fuzzing). Here’s my concern, though: fuzzing is a non-deterministic technique. Is that really the best Microsoft can do?
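To illustrate what I mean by non-deterministic, here’s a toy fuzzing harness – my own sketch, not Microsoft’s tooling. It randomly mutates a seed input and feeds each variant to a stand-in parser that only "crashes" on a narrow class of inputs. Whether that class ever gets generated depends entirely on the random choices (and the seed), so a clean run says very little about whether the bug is actually gone.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Stand-in for a vulnerable routine: misbehaves only when the path
 * begins with two "..\" components. */
static void target(const char *path)
{
    assert(strncmp(path, "..\\..\\", 6) != 0 && "simulated crash");
}

int main(void)
{
    char seed[] = "\\server\\share\\docs\\readme.txt";
    char buf[sizeof(seed)];

    srand((unsigned)time(NULL));     /* different run, different inputs */

    for (int i = 0; i < 100000; i++) {
        memcpy(buf, seed, sizeof(seed));

        /* Flip a handful of random bytes to random printable characters. */
        for (int j = 0; j < 4; j++)
            buf[rand() % (sizeof(buf) - 1)] = (char)(32 + rand() % 95);

        target(buf);                 /* may or may not ever hit the bug */
    }
    puts("no crash found -- which does not mean the bug is absent");
    return 0;
}

Run it twice and you get two different sets of inputs; that’s the nature of the technique, and it’s why a passing fuzz run is evidence of effort, not of absence of bugs.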
This brings me back to the first point regarding automation tools. The timing of this patch, coinciding with Microsoft’s earnings announcement, is … awkward. The company netted well over $4 billion this quarter. Think about that, then consider, again, the impact of each security bug and each out-of-band patch on the bottom line of each of Microsoft’s millions of customers, due to downtime, servicing, and testing.
Microsoft must do a better job of reducing TCO. Making a significant, new investment in proactively and deterministically finding and eliminating security bugs should be a key pillar in their strategy for doing so. I can’t and don’t accept that a company with that kind of profit can’t do better than updating their fuzz testing heuristics.
I’m looking forward to attending Microsoft’s Professional Developers Conference 2008, starting Oct 27. It’ll be my first PDC, actually.
One interesting observation: the agenda shows that Cloud Services has the largest number of sessions (39) of any topic – even more than Windows 7, the runner-up with 22!
What does that say about what Microsoft thinks is the future of computing, its areas of future revenue growth, and its future investments in technology innovation?
Obviously, cloud computing has got major buzz. And it is a compelling computing model. But don’t forget that the interesting problems are always in integration. Developers will be asking about cloud computing at PDC, and how to take advantage of that new technology, but what they really need to know is how to integrate with what they’ve already got. Namely, a bunch of Windows clients and some servers!
Here’s a subject near and dear to my heart: a whitepaper from Microsoft (a JW Secure customer) discussing how Blue Ridge Networks (a JW Secure customer) used several of the Solution Accelerators kits, including the Forefront Integration Kit for Network Access Protection (a JW Secure project).
It’s fairly old news now that Kevin Johnson, the former President of the Microsoft platforms division – basically, Jim Allchin’s replacement – left Microsoft to be CEO of Juniper Networks. This post was one of my favorites on the subject. I also like the comment I read somewhere that, compared to Johnson’s old job, Juniper might as well be Jupiter. I happen to disagree with that, but it’s clever.
Early in his tenure as President at Microsoft, Johnson made a comment that really struck me. A concern shared among a lot of conscientious folks on the Windows engineering team, myself included, was that the team was too big for its own good. This was before codename Longhorn became Vista and that whole mess, a devolution that proved our concern valid like nothing else could have.
Anyway, at the time, Johnson said something to the effect that he agreed, and that he felt it would be wise to reassign engineers to other projects. It was a candid comment for someone at his level. An over-simplification of a complex problem, of course. Nevertheless, I remember thinking to myself, “Wow, this guy really knows what’s going on in the trenches.”
In retrospect, the Windows team, like most of those behind products that are successful in the market, seems to only grow. And the other problem – the Windows release cadence – doesn’t seem to be improving either.
The impression is therefore that KJ made an initial splash, but not much after that. He did well enough to earn a mid-sized CEO gig: probably less than he was dreaming about, but still not bad.
One more thing: some commenters have suggested that he left because the failed Yahoo bid meant he wasn’t getting Ballmer’s job. The situation is more complex than that.
On one hand, much to the chagrin of Microsoft’s shareholders, Ballmer wasn’t fired by the board after just a year or two. It was thus obvious that his successor, whoever it was going to be, was going to have a long wait. Performance apparently isn’t a factor in keeping the top job.
On the other hand, at this point, Ballmer’s open-ended stay is also secured by the fact that there aren’t stronger candidates waiting in the wings. If it was going to be KJ, too much of his reputation was staked on Yahoo. And the other potential future CEO, Eric Rudder, isn’t prepared yet.
There’s a seven-part series from the BBC on YouTube, filmed during Gates’ last full-time week at Microsoft (which ended two days ago). They captured some classic Bill-isms while filming two executive reviews, one from this year and one from 10 years ago.
“You guys never understood. You never understood the first thing about this.”
“It’s a Turing machine and this is the syntax, right?” [As in, give me the bottom line …]
Gotta love it!
Just read this Steven Levy article about Bill Gates’ last week as a Microsoft ‘employee’. Pretty good – Levy is an undeniable Microsoft expert, having studied and written about Microsoft and the software industry for many years.
Interestingly, Levy referenced this blog post from last year: Microsoft is Dead, by Paul Graham. I suppose that Levy’s actual intent was to reference the sentiment expressed by Graham’s essay rather than its factual accuracy, which is lacking.
In any case, I do agree that Microsoft is facing big challenges. Their two main problems are, one, that they aren’t getting the best people (Google is – see below), and two (Graham hit the nail on the head here), that they’re pursuing the wrong innovation strategy.
Regarding the latter, innovative software doesn’t come from companies that are already big. But the solution isn’t to buy Yahoo; it’s to buy all of the possible next Googles, leave them alone until there are clear winners and losers, and then harvest.
Some comments follow on more of the specifics raised in the article.
Regarding the granddaddy cash cow: Windows. It’s important to remember that Microsoft still has a monopoly, and that its software runs almost every client computer. The real issue is that each Windows release competes with the previous one, and that from that perspective, Vista is getting killed by XP.
It surprises me that so many people – including those in the product group at Microsoft – see Apple/OS X as the biggest competitor to Windows. As I already stated, Windows is the biggest competitor to Windows. Given that so much of the demand for Windows, as well as the growth of the partner ecosystem that gives the platform its value, has been driven by enterprise customers, why would you model future versions on a niche consumer product?
Finally, back to Google. I agree that Google is perceived as the overall technology leader in the industry right now, and for many of the right reasons, the most important of which is that, again, they’re hiring the best people.
But while Google owns search, and Microsoft’s attacks on that space have been unsuccessful, it’s also true that Google’s attacks on other areas (e.g. productivity applications) have been unsuccessful. What will be interesting is who wins the next frontier – cloud computing – and how. Yes, Google has a head start, and yes again, they’re hiring the best people, but the battle hasn’t been fought yet. Microsoft has consolidated many of its best people around productizing its cloud computing strategy, and its greatest asset in the battle may be the one that got the company started in the first place: developer tools.
I had only spent about 20 seconds catching up on news this morning when I stumbled onto this one. I have to admit – the proposition made me stop and think for a second. Was the creation of Internet Explorer Microsoft’s biggest mistake ever?
Another way to look at the question is this – would Microsoft be worse off if they’d never developed an in-house browser technology? Probably. And the historic turn-on-a-dime company-wide shift in direction that occurred when Bill Gates finally realized he’d missed the Internet train? I don’t see how that could have proceeded without a browser development effort, at least symbolically. And engineering teams across the company will continue to benefit from the experience (good and bad) of that effort for years to come.
Plus, the highest profile Windows platform security bugs have come from LSA/RPC, SQL, and IIS. Compared to those whoppers, IE’s security flaws look minor.
That doesn’t disprove the proposition, of course; it’s just another way to look at it. Still, the question strikes me as a fallacy. Sort of like asking, “Did General Motors make its biggest mistake in offering its workforce health insurance and a pension?”