Destination Innovation

Who doesn't have this problem-- an information need, access to thousands of files, and no clue where to start? For me, the intersection of these three things always sparks waves of deja vu coupled with some pointed self-feedback on the need to better organize my stuff once and for all.  Dammit.

Magnify this problem by thousands of people, decades, and dozens of administration and policy shifts, and you get a sense of the challenge many agencies face with information access and knowledge management.  So, the Post's announcement of LMI's Destination Innovation award* for OpenPolicy-- an in-house-developed search tool-- caught my eye.

Through program manager Gus Creedon, they've refined and advanced text-based search capabilities. But, perhaps more importantly, they've developed an approach to help clients think through a framework and taxonomy that enables better returns on keywords. They call it ontology.  Whatever.  The goal is to help federal programs that need something (like tomorrow) better access their own organization's working papers, key decisions, and track record on a topic.

Seems cool and handy-- though I can't think of a client problem off the top of my head that would benefit.  Maybe that's not exactly true.  Maybe you have to get more creative.  I am working for a client facing tremendous turnover because of retirements.  This might be part of the solution for ensuring continuity.  Another use might be within science-y or academic organizations, to help them sort through past research papers and make sense of someone else's interpretation of old data.

In any event, I love seeing organizations put themselves out there with new tools.  Congratulations to Gus and LMI. 

*LMI won Destination Innovation in the government category.  The contest is cosponsored by the Northern Virginia Technology Council and the Washington Post. Click here and follow the link in the upper right to hear Gus talk about his vision.

Another perspective on performance-based pay

A couple of months ago I wrote this post on performance-based pay.  Like many (at least on the outside of government looking in), this proposed change seemed like a no-brainer. 

Most of us on the private-sector side are highly accustomed to working within performance-based systems and can't really imagine life any other way. And after working with so many stellar federal employees, it's hard not to feel the unfairness of the traditional General Schedule (GS) approach and want something better for them.  Well, it turns out that many see needed improvements in the current system but a bold move towards performance-based pay ain't it.

In fact, there are some real concerns about performance-based pay systems (within the federal environment)-- specifically around how they're implemented and executed. Jeff Neal outlines a number of issues in his interview with Federal News Radio and this related blog post.

I've had the pleasure of chatting with Jeff on this topic and can honestly say I learned something.  Jeff is a true guru on federal HR stuff.  Not only does he make a lot of sense, he's pretty funny too-- a killer (but rare?) combo.  Check out more of Jeff's work on his blog.

New Innovation Lab planned at USAID

Innovative techniques and breakthrough technologies are often (maybe too often?) aimed at increasing convenience, efficiency, or productivity on a very personal, micro scale.

So, this is exciting news coming out of the US Agency for International Development (USAID). They’re organizing previously ad hoc or disparate research and development investments into an innovation group. Way cool.  I’d personally love to see a focus on clean water and infrastructure development but they haven’t yet called to ask me for my opinion.

From an organizational perspective, in the best of worlds, this new group would bring leverage, visibility, and accountability to the process of solving these tough, intractable global problems.  We're all too familiar with the momentum-crushing pitfalls to avoid.  Rigorous approval gates, subjecting proposed projects to changing political moods, and initiating some “one size fits all” performance measurement approach will hamper more than help. 

Perhaps it’s counter-intuitive but intentionally underfunding (by traditional yardsticks) the management and oversight functions might be a path to early—if not long-term—success.  In addition to being cutting edge in their approach, I hope USAID looks for light, agile organization and management structures to match.

Getting Proactive (and Qualitative) with Return on Investment (ROI)

Just reading that will give some people hives.  I know.  Sorry.  There are hardcore analysts out there who will pursue a quantitative approach to ROI—at the exclusion of all others. The unfortunate impact of this mindset is that we forego any efforts to evaluate the return because we don’t have data and, in doing so, inadvertently create a bigger problem. 

With most projects, the sponsor, benefitting community, or passive observers (Congress, OMB, etc.) will let an investment go for some period of time until…. Wham!  Someone, somewhere raises a seemingly innocent question in a meeting that goes something like, “So, what are we getting out of all this effort and money spent?” 

At first blush, the question seems straightforward and reasonable, but it sends shockwaves out to the program/project team.  They know that, by this point, coming back with a satisfying answer is nearly impossible.

I’m working with a federal client now who wants to/needs to evaluate the return on an internally developed training program. Based on informal participant and manager feedback, the course is actually pretty good.  However, with budgets being what they are, all special projects within this program are being evaluated.  Should they continue to invest, or cut it and refocus the funds? 

So in reactive mode, the team is scrambling a bit to come up with some numbers that tell the story in the most compelling, irrefutable way possible.  It’s not easy and not because the investment itself is bad.  The numbers just don’t really exist—at least, not in the way we’d ideally like them to right about now.

An alternative approach is to start gathering snippets of feedback, industry news/trends, and informal observations right after project launch.  Let them accumulate for a couple weeks then convert them to a living, evolving document that demonstrates the value (or not) of the project.  Having something like this handy when the inevitable questions come up is incredibly compelling in and of itself. 

While doing this, we need to be completely upfront (and proud even?) of the fact that, “yeah, I don’t have the hard numbers (after all, everyone knows that you don’t anyway) but look at the kind of feedback we’re getting.”  Being proactive and telling the story early and consistently is critical.

It seems so fitting that ROI also translates to “king” in French, non?

Call for investigation into parks

I didn't but probably should have guessed this was coming. Three senators sent a letter yesterday to the Government Accountability Office (GAO) requesting a closer look at National Park Service operations. 

The inquiry stems from widespread concern over the growing deferred maintenance backlog and questions regarding how efficiently the park system is organized. Not surprisingly, Senator Coburn is one of the three-- his suspicions on the root causes of the backlog and other NPS issues shine through in his October 2013 report, Parked.

If the GAO takes this on, I hope they take a holistic look at the numerous challenges facing parks. Decades of underfunding, lack of certainty in federal priorities, and environmental and economic shifts have all taken a toll on the assets and the experience.  Greater still is the challenge of ensuring that the more than 400 units across the country, from Yosemite to Monocacy, are relevant to and valued by future generations.

NPS is all over this in a good way-- something families, communities, schools, and Congress all could and should jump on board to help.

Anonymous Consumption

Today, I was counting on my seat warmers to serve double duty and smooth some (not all, I’d already scaled back my expectations) of the pronounced wrinkles in my skirt.  It was that kind of morning.  Who am I kidding?  It’s that kind of life.

I'm glad to report that I did make it to the gym, though.  There, I overheard a conversation that made me laugh and think a little. Here’s the background…one of the gym’s dryers (they have several, I understand) is broken. One machine down is hampering their ability to keep up with the laundry. To address this issue, gym staff have been doling out shower towels directly from the front desk, as opposed to stacking them on the open shelves in the locker room. This work-around has been completely fine, I’d guess, with just about everyone.  I was wrong.

After a slow slog on the treadmill, a fellow worker-outer launched into a monologue on the (exaggerated) magnitude and duration of the problem, personal inconvenience, and lack of attention to critical services on the part of gym management.  She might have been having a bad day or just a flair for complaining, but I had to really rack my brain to figure out why she was mad. 

The thing I landed on was this:  Passing out towels from the front desk removed her ability to use as many as she wanted.  The staff wouldn’t flinch if you asked for two but I suppose you might have gotten a funny look if you asked for five.

I figure that most people try to align their public consumption with what they think others expect or accept.  It’s part of how we fit into our culture and society. We binge on everything from TV to Girl Scout cookies to gym towels in private—and fiercely guard these tendencies as deeply personal.

What any of this has to do with management consulting, I have no idea.  An attempt to tie them together would be forced so I’ll spare you. 

I do, however, hope you have a great weekend and get out and soak up as much 60-degree weather as you can stand.

yo data data

Exciting news for my fellow earth-loving, data devotees! 

The White House has put out gobs of climate change data with the promise of more.  Collected primarily from NASA, NOAA, and DOD, the agencies and administration are encouraging people to use it.  Cool (and warming, as the case may be)!

Tide change in Newport, RI, Robin Camarote, 2011


They're also hosting an innovation challenge focused on flooding in coastal communities.  (Yes, the site is lame, but maybe that's a test? Could be that they intend the "prize" to go to whoever can figure out what they want and when it's due.)  Anyway, I'm looking forward to seeing what our local DC tech/big data companies such as Earth Networks, PlanetiQ, and maybe even Opower come up with. This could also be a great opportunity for some of the more management consulting-y firms with some science power to differentiate themselves-- places like Cadmus could go bananas (assuming any of them could spare the downtime).

When it comes to big data, the challenge I see is on the front end: coming up with an intriguing, potentially useful question.  Too often, analysts get excited about building the model (and, believe me, I've been seduced by some sexy graphic visualizations too) without putting as much consideration into what they actually want to know.

What not to measure

3D printed shoe from Continuum Fashion (not yet available at Zappos but soon?)

In the time spent diving into performance measures recently, I’ve learned that what you don’t count is as impactful as what you do. The cool kids are often overheard saying, “you get what you measure.”  And, it’s so true!

We all live with—often subconsciously—this powerful behavioral driver that is rooted in key performance measures. We adjust our priorities, actions, and attitudes to align with what we think management wants (as interpreted by preparing their monthly reports). This is the whole point of performance measures, right?  Organizations clearly articulate what’s important and, in turn, staff gradually align their behaviors over time to create those outcomes. Perfect symmetry.

Unfortunately, many organizations have learned too late that there are unintended consequences associated with certain measures.  For example, a focus on reducing processing time can drive down customer service, an emphasis on entering volumes of data can lead to entry errors, or INSERT YOUR OWN ORGANIZATION'S INTENDED VS. ACTUAL OUTCOME HERE.

In fact, I read an article earlier today highlighting some best practices at Zappos.  As many of you know, I'm a fan and all too frequent customer. (On a related note, I'll share more next week on my much-needed efforts to streamline my closet.)  Anyhoo, Zappos is well regarded for their commitment and success in customer service.  As someone who's logged some time chatting with their reps at 11pm, I can attest personally to their fantastic-ness.  So, it surprised me a little—mostly because I’d never thought about it before-- that Zappos doesn't measure call volume or duration for their customer service reps.  In fact, they do the opposite.  Their stated goal is to increase positive customer contact. They actually reward staff for having meaningful exchanges with customers-- regardless of how long the call took.  Huh. More expensive but worth it.

All of this is a long way of saying: when you look at your measures, think through the likely behavioral impacts, then weed out the ones that you think could result in shortchanging customers (in the name of greater productivity) or data errors (in the name of faster data entry).