Wednesday, September 10, 2008

Finding People


So, not wanting to be the last to notice this, but it seems more and more that KM 101 is now being applied to people: not just to people in the enterprise (that was KM 102), but to everyday users of the Internet, to you and me.

This started a while back with online dating sites that matched seekers by basic attributes in the context of finding companionship. More recently, finding people through the things they share with you has become the norm on Amazon and the like, where you can see people who bought what you bought and then see what else they like. Pandora.com, the music site, makes use of this along with its very inventive (and uncannily accurate) music genome algorithm, which finds music similar to the music you like.
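For the curious, the co-occurrence idea behind "people who bought what you bought" can be sketched in a few lines of Python. The purchase data below is made up, and this is only the crudest form of the technique, not Amazon's or Pandora's actual algorithms.

```python
from collections import Counter

# Toy purchase history: user -> set of items bought (illustrative data only).
purchases = {
    "alice": {"book_a", "book_b", "album_x"},
    "bob":   {"book_a", "album_x", "album_y"},
    "carol": {"book_b", "album_y"},
    "dave":  {"book_a", "album_y"},
}

def also_bought(item, purchases, top_n=3):
    """Rank the items most often bought by people who also bought `item`."""
    counts = Counter()
    for buyer, items in purchases.items():
        if item in items:
            counts.update(items - {item})  # count everything else this buyer bought
    return counts.most_common(top_n)

print(also_bought("book_a", purchases))
# e.g. [('album_x', 2), ('album_y', 2), ('book_b', 1)]
```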

Facebook and LinkedIn, Spock and Ecademy followed, and these are much more wide-ranging (at least Facebook is) in the contexts in which they operate. While the last three purport to focus on managing your professional network, for most of us there is a good deal of overlap between our professional and personal networks.

Getting a little more specialized here, I recently had some fascinating discussions with a few of the managers at Gerson Lehrman Group, whose flourishing business is built around matching your expertise needs to experts with those skills across thousands of different professions. To do this they use a very complex and wide-ranging taxonomy to describe people.

This struck me as a very interesting development, as it opens the door to building virtual teams for one-off projects. That is something any small consultancy, or indeed any larger business unwilling to keep experts on staff full time, needs constantly and can make good use of.

The most recent development is a service like Peoplejar, which is simply about helping you find people: no predefined context at all, but all sorts of ways for you to create custom context as you go. Sort of like a phone book with lots of other attributes about the people listed.

This looks like a very novel and interesting way to push the envelope a little further and I will be spending a little time learning how it works and what lessons we can learn from it. 

It almost seems like KM 101 is spilling out of the office and onto the street and who knows where it will lead us.



Friday, September 5, 2008

It's the browser, stupid.

Scanning the news this week covering the just-announced beta of 'Chrome', the Google empire's open source browser, I was waiting for the other shoe to drop... it did not. Where was the big-bang news story about what this will mean for the future of computing?

There are two competing models for computing out there right now. One, the model we have grown used to, is all about expensive PCs for the home, loaded with expensive and largely unused products and prone to regular data loss due to crashes, viruses, security holes and the like. This is the Microsoft world, and it is predominant... currently.

The other model, still to be fully defined, is based on the premise that the Internet supplies the functionality we want as we need it, in the form of 'services' delivered on demand from the 'cloud', which is the Internet itself. Additionally, it stipulates that storage, search and backup are plumbing you need not worry about, as your data is also resident out in the cloud somewhere along with all these services, always secure and available.

So far Google is the vendor that has single-handedly done most to push the cloud model and is the biggest threat to the Microsoft computing hegemony. From the introduction of Google desktop search, Gears, and the Google Docs portable working environment, Google has been steadily assembling all the components for a complete cloud-based alternative to the current PC desktop.

Nowhere that I could find was there much more than a fleeting mention of the deeper implications of how Chrome will fit into all of this. There was of course (and rightly so) a lot of talk about what Google might do with the much more detailed click-stream, covering what users are actually looking at, now coming out of the new Chrome browser, but what about all the other stuff? Nada!

So I have installed the beta, and it is pretty darn sweet, with a minimalist look and all sorts of smart features built right in; nothing less than we would expect from Google, of course.

If you want to know more about Chrome, the first thing you should do is read the excellent comic book (yes, a comic book) that Google has created to showcase the project and the product. Most of what follows is based on the revelations therein.

Given how key the browser is, let's remember that the browser was the thing that landed Microsoft in such hot water here in the US, and in serious trouble in Europe. Redmond thought the browser held the keys to the future, and so sought to integrate it very tightly with the Microsoft desktop while stuffing it full of compelling new features and making it very hard for anyone else to get in on the action.

Nice try guys but you have the picture the wrong way around.

The way forward (in my humble opinion) is not to tie the browser to the desktop but to use it as a gateway to everything beyond the desktop that the user is reaching out for. The browser must work seamlessly with all those services out there on the web, not be a barrier to using them.

The steps Google has taken with Chrome seem to me to be the first ones set down with the heft of the Google empire behind them, and the ones likely to lead forward rather than into some small and limited backwater, as we have seen with others in this space.

Let's discuss a few of the more substantial elements of the new browser's design that articulate fairly clearly what Google is thinking and that did not seem to be well appreciated in the press.

Chrome, in a browser first, is not only multi-threaded but actually has its own process manager, just like a real operating system (OS). This implies a very substantial plumbing investment on Google's account, much of it an investment whose value will only start to be realized down the road.

There are some immediate benefits here, however, in the increased stability and partitioning ability of the browser: bad things can be quickly identified in a single tab and killed before they bring down every browser session. Down the road, this underlying architecture will be crucial to running large, complex applications, many at a time.
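To illustrate just the containment idea (Chrome's real renderer processes, IPC and sandboxing are far more involved), here is a minimal Python sketch in which each "tab" does its work in its own process, so one misbehaving page cannot take the others down:

```python
import multiprocessing as mp

def render_tab(url):
    """Stand-in for a tab's rendering work; one URL deliberately 'crashes'."""
    if "bad" in url:
        raise RuntimeError(f"renderer crashed on {url}")
    return f"rendered {url}"

if __name__ == "__main__":
    urls = ["http://example.com/good",
            "http://example.com/bad",
            "http://example.com/also-good"]
    # One worker process per 'tab': a failure is contained to its own process.
    with mp.Pool(processes=len(urls)) as pool:
        results = [pool.apply_async(render_tab, (u,)) for u in urls]
        for url, res in zip(urls, results):
            try:
                print(res.get(timeout=5))
            except Exception as exc:
                print(f"tab for {url} died: {exc}; the other tabs keep running")
```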

It should come as no surprise that the new browser shares an underlying component for page rendering with the Google phone OS ("Android") and this means that pages should look roughly the same in both places and certainly should function the same. In this way, pages optimized for Chrome will also run very well on Android, a nice benefit and one likely to increase acceptance of both quickly.

Google’s developers make no bones about the fact that while JavaScript is now everywhere on the web, it is a really poorly designed scripting language and needs a lot of help to do what we need it to do today. Guess who is giving it a hand? With a new just-in-time compiler and new memory management features in Chrome, Google has actually created a virtual machine for JavaScript, an industry first and a great way to get a handle on this ungainly language. The benefits to you? Faster page loads and fewer crashes.
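As a loose analogy only, and certainly not how Chrome's JavaScript virtual machine works internally, the payoff of compiling source once and reusing the compiled form, rather than re-parsing it on every run, shows up even in a toy Python sketch:

```python
import timeit

expr = "sum(i * i for i in range(200))"

def run_reparsed():
    # Interpreter-style: parse and compile the source string on every call.
    return eval(expr)

code_obj = compile(expr, "<expr>", "eval")  # compile once up front

def run_precompiled():
    # 'Compile once, reuse' style: execute the cached code object directly.
    return eval(code_obj)

print("re-parse every call :", timeit.timeit(run_reparsed, number=5000))
print("compile once, reuse :", timeit.timeit(run_precompiled, number=5000))
```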

On the security side, the developers and engineers have redesigned the standard security model for the Internet and used it as an opportunity to throw a few brickbats at Redmond along the way. The areas considered and carefully tightened up cover malware and phishing exploits, the security holes that today enable some of the worst scams on the web. While acknowledging that they cannot also cover problem plug-ins, they are at least able to handle most anything that comes through the usual channels.

The other major implication is that Google Gears (the nascent set of web APIs) is built right into Chrome, allowing others to integrate new features and new applications right into and through this browser. That the whole thing is open source is of course to be expected; what better way to start a stampede than by giving away free money!

Google would like to move things off the desktop and into the cloud. We see this in everything they do and say, yet up until now the gateway to the cloud was controlled by a vendor with an entirely different agenda; hence Google had a serious problem in moving forward.

With Chrome, the gateway is now in their hands and, just as importantly, as an open source product, Chrome is also in the hands of a huge community easily capable of overwhelming the Redmond developers with innovations and smart new things to deliver cloud computing right into our daily lives.

Welcome to the new beginning.


Monday, August 4, 2008

More on Agile

We are winding up this project, which has been run as an Agile/Scrum endeavor, and I thought it might make sense to comment, given my previous post on this methodology.

So far the results seem to be better than I anticipated though the final run of testing will validate that one way or the other. The team moved in a fairly regular way through six two-week sprints, each starting with a sprint planning session and ending with a sprint review.

Each morning begins with a 10am update call and all progress and 'impediments' to progress are covered. These calls are usually twenty minutes, sometimes less. They are not about solving problems but all about keeping everyone connected and up to date. This is a very good thing.

The sprints have stories and each story has points. We have the team assign points during the planning session and this gives us a sense of the time it will take to complete the stories.

However, an incomplete story from the prior sprint will have all its points rolled forward into the total for the next sprint, so tracking a good points-per-sprint number, or 'velocity', is a little bit of an art.
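One simple convention, assuming you record which completed stories were carried over from the prior sprint, is to track a 'fresh' velocity alongside the gross number so the carried points are visible rather than hidden in the total. A minimal sketch with invented numbers:

```python
# Toy data: each sprint's completed stories as (points, carried_over) pairs,
# where carried_over means the story slipped from the prior sprint.
sprints = [
    [(3, False), (2, False)],              # sprint 1: a 5-pointer slipped
    [(5, True), (8, False), (3, False)],   # sprint 2 absorbs the 5
    [(2, False), (5, False), (8, False)],
]

def gross_velocity(sprint):
    """Everything completed in the sprint, carried-over work included."""
    return sum(points for points, _ in sprint)

def fresh_velocity(sprint):
    """Only the stories that originated in this sprint's own plan."""
    return sum(points for points, carried in sprint if not carried)

for number, sprint in enumerate(sprints, start=1):
    print(f"sprint {number}: gross={gross_velocity(sprint)} "
          f"fresh={fresh_velocity(sprint)}")

average = sum(gross_velocity(s) for s in sprints) / len(sprints)
print(f"average gross velocity: {average:.1f} points/sprint")
```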

What has worked well are the calls, the constant reprioritization and the planning/review sessions. Formalizing these, as Agile does, is key to good management of a project.

However, it should be noted that Agile is really best suited to a project and team that want to get started immediately and have not done much analysis. There is probably a cost saving, as the analysis falls more easily out of each sprint, and as the team gets up the curve the estimates get better.

I recently found this set of principles, which seemed well thought out and, while somewhat obvious, worth writing down.

In summary, the methodology seemed to work pretty well, and the continuous reviewing and reprioritization proved a good way to keep the project lean and on target.

Saturday, July 26, 2008

KM on the WWW

Sometimes we look up and see there are patterns we recognize in the clouds. The web, once a truly disorganized and obscure cloud of content of varying usefulness, has slowly begun to change shape and become structured. Web 2.0 has given rise to a dimension long missing and sorely needed: the usefulness of web content.
However, there are limitations inherent in the tagging of content. The users of the web have many worldviews, so content tagging is really relative, or contextual, to the user; how useful is it without that context?

Do you care to see how someone in a faraway place with a different culture and language has tagged a piece of content, versus someone on your block?
For consumption purposes we tend to cleave together as birds of a feather; hence the social tagging that has become so popular now allows us to identify our own small islands of conformity and comfort.

Or does it? There is really no way yet to identify the context of anyone tagging content unless you have first required them to become a member of some kind of group.

Are they in my zip code or not? Do they speak my language natively? Do they have the same goals, politics and religion, and how much does any of this matter?
We are at an interesting point here.

To make web 2.0 (to me, simply the web that allows everyone to have some say) even more useful, we will need to start knowing a bit more about all those individuals who make up the wise crowd.

Effectively this will give us the means to start to classify the crowd into ever more specific groups. This happens today on some scale when Google and Yahoo make assumptions about users in order to send the most appropriate ad content their way, but of course a standardized system available to all flies in the face of the anonymity of the web.

Is there a single data point that could be shared to help better understand who said what, who tagged what and who wants what? In the USA the zip code might be a good start, but what about elsewhere?
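As a thought experiment, if every tag carried even one shared context field, such as a postal code, slicing the folksonomy by that context becomes trivial. A small sketch with invented data (the real question, of course, is which field people would agree to share):

```python
from collections import Counter, defaultdict

# Toy tag events for one piece of content: (tagger_postal_code, tag).
tag_events = [
    ("10001", "finance"), ("10001", "wall-street"),
    ("94103", "startups"), ("94103", "finance"),
    ("SW1A",  "football"), ("10001", "finance"),
]

def tags_by_context(events):
    """Group tag counts by the tagger's declared context (here, postal code)."""
    grouped = defaultdict(Counter)
    for context, tag in events:
        grouped[context][tag] += 1
    return grouped

for context, counts in tags_by_context(tag_events).items():
    print(context, counts.most_common())
```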
Inside the enterprise, similar challenges have become one of the headaches of deploying global portals, when we try to standardize personal data across groups of people whose localities have differing confidentiality guidelines. Try publishing the home address of a German partner on the intranet and see how far you get.

I will be very interested to see how this develops, given how fast web 2.0 has pushed us forward.

Monday, July 7, 2008

Actionable Information and the Law

We were treated to an in-house session from Oz Benamram this week, in the form of a demonstration plus Q&A on the KM work he has been leading over at Morrison & Foerster, AKA MoFo.

Oz, the Director of KM, has been a leading light in the Legal KM world (on this side of the ocean at least) based on the success of the monolithic approach he and his team have taken to the question of KM at a law firm or, more specifically, the search for "Actionable Information".

The end product, "Answerbase", is a one-stop shop for everything you might wish to know about documents, people (internal and external) and matters, plus a great deal more. The integration of various repositories as contextual variables and faceted search parameters using web 2.0 techniques (XML mashups and the like) has been neatly done, and done without the clutter that an overly rich data store usually produces.

You can see some simple elements of Answerbase in the demo linked above, but I should warn you that that version is now substantially behind what we saw this week. Included in the new product is the integration of further time and billing data, email contents and contact info from address books... somewhat astonishing stuff to have available for general access.
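To make the faceted-search idea concrete, here is a minimal sketch over a toy document set; the field names are invented for illustration and bear no relation to Answerbase's actual schema:

```python
from collections import Counter

# Toy document set with a few facet fields (all names and values invented).
docs = [
    {"title": "Merger agreement", "practice": "M&A", "office": "SF", "year": 2007},
    {"title": "Patent filing",    "practice": "IP",  "office": "NY", "year": 2008},
    {"title": "Term sheet",       "practice": "M&A", "office": "NY", "year": 2008},
]

def search(docs, **filters):
    """Narrow the document set by exact-match facet filters."""
    return [d for d in docs if all(d.get(k) == v for k, v in filters.items())]

def facet_counts(results, fields):
    """Count the values of each facet field across the current result set."""
    return {f: Counter(d[f] for d in results) for f in fields}

hits = search(docs, practice="M&A")
print([d["title"] for d in hits])
print(facet_counts(hits, ["office", "year"]))  # counts drive the facet sidebar
```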

Some key elements to the strategy he has followed stood out for me:

1) Ask for forgiveness, not permission: curious about how he negotiated a search of attorneys' emails and contact data to enrich the Answerbase result set, I asked him how it was done.

He explained that the approach was to get top-level support, do very careful due diligence with respect to conflicts and confidentiality, and then simply move ahead and show the finished result before asking for general permission. This way it was left to any objectors to find reasons not to do it when the obvious value was so high... so far this approach seems to have worked very well at MoFo.

In the example we saw, the mining of contact data had led to the development of both internal and external people searching, something I have not seen so nicely integrated anywhere else.

2) Confidentiality vs access: at a law firm, matters are reviewed by the conflicts and ethics committee before they are even accepted. If there is a conflict and it can be managed, the appropriate conflict walls (AKA Chinese walls) are built into the applications that will handle the matter, all the attorneys are notified, and from there on it is business as usual.

It is worth noting that the majority of a big firm's matters are not in conflict, and almost all work product is available TO ALL PRACTICES. This is very different from our own situation, where access is kept highly restricted until seniority levels are sufficient.

Also notable here is the (not entirely explicit) fact that the entire work product of the firm, some millions of documents, is located in the DMS, and this is the work product being searched by Answerbase.

3) Partnership: Oz has partnered with a vendor in a highly synergistic and beneficial relationship that has propelled MoFo and the vendor to the forefront of the legal KM product world in a way no one else in the legal world has been able to emulate. He commented that, early on, an interview with the CIO of Bain had revealed that while they had a truly outstanding home-grown system, they were in two minds as to whether the investment was really worth it, given how much work went into it for just one customer.

Between legal services (many, many firms competing) and business consulting (a much smaller pool) there are probably a handful of parameters that shift this equation, depending on your business perspective, but in the case of MoFo the benefit of not having invested in the actual development seems clear. Oz has a team of three, none of whom are developers, and they have delivered a world-class product to the firm that provides a compelling business advantage.

It will be very interesting to revisit Answerbase in another six months and see what new elements have been integrated.

Sunday, May 11, 2008

Data Centers, VMware and all that

It has been some time since I was directly involved in data center activities, or even indirectly involved, other than working with the teams of engineers who now manage them for deployments. But I did run across this post recently and, in the context of carbon footprints, green IT and all the buzz in that space, it made interesting reading and marks a very proper step in the right direction.

Here are a few of my observations on this subject.

VMware: the virtualization of many servers onto a single piece of hardware has been a revolution in terms of CPU efficiency and TCO, and I strongly suspect the low utilization rates in the article linked above have some relation to the number of non-VM servers still running out there.

Case in point: a friend recently described how he had taken a room of servers and moved them onto a rack of four blades, with servers to spare... amazing!

One thing I have come to understand, however, is that VMware poses a bit of a problem for application development teams when it comes to performance testing. The servers in Test, Alpha, Beta, QA (or whatever you have designated your performance testing environment) are very likely VMs, as they will be in production, so it is hard to gauge whether the other application activity on the shared host at the time of the test mirrors what the production VM environment will yield.

We seem to have forgone the 'control' aspect of the lab test in the great desire to make all servers virtual. I have yet to discern whether this will have a long-term impact, but it does seem to undermine some of the validity of load testing results.
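One modest mitigation, short of a fix, is to repeat the same load test several times and report the run-to-run spread, so the noise contributed by the shared host is at least visible. A small sketch, where the timed "request" is just a placeholder calculation rather than a real call:

```python
import statistics
import time

def sample_request_latency():
    """Stand-in for one timed request against the system under test."""
    start = time.perf_counter()
    sum(i * i for i in range(50_000))  # placeholder for real work
    return time.perf_counter() - start

def run_trial(requests=200):
    latencies = [sample_request_latency() for _ in range(requests)]
    return statistics.mean(latencies), statistics.pstdev(latencies)

# Repeat the same load test several times; on a noisy shared VM host the
# run-to-run spread is often more telling than any single mean.
for trial in range(1, 4):
    mean, spread = run_trial()
    print(f"trial {trial}: mean={mean * 1000:.2f}ms  stdev={spread * 1000:.2f}ms")
```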

One other aspect of VMs: try calling your application vendor for support... 90% of the time they will tell you it is not supported on VMware... oops!

Data centers: I have yet to be in a data center that did not require me to wear a jacket, a sweater, and sometimes a coat. Why is this? When did microprocessors get so finicky that they could not be expected to work in anything over 50F? I seem to recall that the spec for these processors comfortably allows temperatures around 80F, and often much higher. Think of all the home PCs that operate just fine when it is 100F outside and a relatively cool 85F inside. Perhaps there is a reliability curve at work here?

Surely raising the data center temperature a few degrees would yield substantial savings in cooling bills?

The advent of blade servers has undoubtedly raised the ante here as they are so dense in heat producing components.

A better solution of course would be to have the cool air directed where it is needed, right into the server stacks first.

I would like to think that Intel (now powering Cray computers, by the way!) is working on chips that don't throw out all that waste heat, but that will be quite a time coming, and in the meantime our cooling costs are soaring.



Sunday, January 13, 2008

Are you experienced?

This is the question that is so hard to get an answer to any way other than by having a conversation with the person in question.

A colleague recently added another dimension to this by saying 'we know more than we can tell' in the context of expertise and if this is so, and I suspect it is, it makes things even more complicated.

Systems that pool expertise can include self declared expertise, experience and referral based data, all of which seek to create a rounded profile of what the individual knows and knows best.

Part of the problem with any organized structure in which expertise is classified is that it may miss the 'soft' things that turn out to be very important in the overall picture. For example, the individual may have tangential but complementary experience that speaks very well to some expertise, or may have lived in a place where first-hand experience of something local was very informative and catalytic in understanding a certain issue.

I recall hearing a presentation on this subject where the inclusion of free-form and non-taxonomic expertise categories was a key part of finding the right people for the job. In this case the speaker gave the example of someone who had 'flying fox relocation' experience... something that would never have been picked up in any classification system but appeared to be very material to the needs of the searcher.

How do we build better systems to accommodate this sort of thing? For now, the inclusion of free-form text is a good catch-all, perhaps along with an anonymous expertise rating system based on others' experience with an individual. This is of course a little tricky, and may be best done at a personal level within the social networking software we use to manage our interactions.
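A sketch of what such a blended profile might look like, mixing declared taxonomy terms, a free-text catch-all and anonymous ratings; the structure and names here are invented for illustration and not drawn from any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class ExpertiseProfile:
    """Toy profile mixing taxonomy terms, free-form notes and peer ratings."""
    name: str
    declared: set = field(default_factory=set)    # self-declared taxonomy terms
    free_text: str = ""                           # catch-all for 'soft' experience
    ratings: list = field(default_factory=list)   # anonymous peer scores, 1-5

    def matches(self, query: str) -> bool:
        q = query.lower()
        return q in {t.lower() for t in self.declared} or q in self.free_text.lower()

    def average_rating(self):
        return sum(self.ratings) / len(self.ratings) if self.ratings else None

people = [
    ExpertiseProfile("A. Smith", {"tax law"},
                     "spent two years relocating flying foxes", [4, 5]),
    ExpertiseProfile("B. Jones", {"wildlife management"}, "", [3]),
]
print([p.name for p in people if p.matches("flying fox")])  # the free text catches it
```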

LinkedIn is probably the best expertise location system out there today but even then, you really need to read between the lines when looking at someone's profile to get a good sense of what they know and how they know it.

How LinkedIn evolves in this respect will be very interesting and, a bit like Google, this evolution will probably set many of the standards in the space going forwards.

Sunday, January 6, 2008

The Value Proposition

As often happens when the ideas people meet the accountants, there comes the question: "What is the return we get for this investment?"

In KM this has been a very difficult question to answer because so many of the value levers are quite a way removed from the source of the value and they tend to be 'soft' rather than hard in nature.

Can we really measure how much more efficient it was to complete the task at hand with access to an up to date KM repository, People-finder or other system versus doing it the old fashioned way?

We know there is value of course, but it is not easy to measure.

When we look into the value of adding Social Networking features to our KM mix, the question arises once again.

In this case we are seeking to make an investment that facilitates how people build and use social networks, so that we can reap the benefit of those connections.

To begin with, we need some sort of value proposition.

Thinking about this recently, I came across a paper from 2003 on the subject by Ronald S. Burt called "Social Origins of Good Ideas" which you can access here.

In the body of this paper I read the following which seems to speak rather well to the value proposition we seek:

"People whose networks span structural holes have early access to diverse, often contradictory, information and interpretations which gives them a competitive advantage in delivering good ideas"

This is a very nice way of defining the value that people's personal networks offer outside the organizational hierarchy in which they work.

Further to this, the writer goes on to delve a little deeper into the actual process that makes this so:

"People connected to groups beyond their own can expect to find themselves delivering valuable ideas, seeming to be gifted with creativity. This is not creativity born of deep intellectual ability. It is creativity as an import-export business. An idea mundane in one group can be a valuable insight in another"

There you have it: creativity as a trading business. Informally, a network becomes an exchange or a marketplace of ideas, and the members are the brokers.

There is a certain elemental truth to this that begins to seem blindingly obvious but, like most things, has to be defined in words for it to really come into view.

The logical conclusion you can draw from this would be as follows:

Proactively facilitating Social Networks inside an organization is valuable because they provide an alternate way to link individuals across disciplines that the organizational hierarchy may not recognize.

Further to this, the best way to achieve a strong value proposition for the network is to facilitate the mechanisms by which participants can state, with clarity, context and accuracy, the things they know and the ideas they have, in a form that can be easily transmitted to others across the network.

One self-limiting factor in the value of the network that appears in this consideration is the tendency for people to network most with like-people.

To the extent that this happens it would appear to reduce the value of the network if we are to believe that diversity is the key to old ideas from one discipline becoming fresh new ideas in another.

It seems likely that there is a balance to be struck between heterogeneity and homogeneity to achieve the most fertile network.

There may be a way to measure this and derive some sort of balance coefficient that would help us manage the network and keep it in its most productive state.
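One candidate, sketched below with invented groups and ties, is simply the share of connections that cross group boundaries; this is only an illustration of the idea, not a claim about the right metric:

```python
# A rough sketch of a 'balance coefficient': the share of network ties that
# cross group (e.g. discipline) boundaries. Groups and ties are invented.
groups = {"ann": "finance", "bob": "finance", "cho": "legal",
          "dee": "design",  "eli": "legal"}
ties = [("ann", "bob"), ("ann", "cho"), ("bob", "dee"),
        ("cho", "eli"), ("dee", "eli")]

def cross_group_share(ties, groups):
    """Fraction of ties linking members of different groups (0 = homogeneous)."""
    crossing = sum(1 for a, b in ties if groups[a] != groups[b])
    return crossing / len(ties)

print(f"cross-group share: {cross_group_share(ties, groups):.2f}")
# Somewhere between 0 (an echo chamber) and 1 (no two connected people alike)
# might mark the fertile middle ground speculated about above.
```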

Of course that's all a bit far from the day-to-day of pushing forward with the best KM systems we can build, but it does provide a fascinating way to view the value of what we are building.