Archive for gov2.0

Podcast and Screencast Results / Justification

Posted in Web2.0 Productivity on September 29, 2011 by Lance Strzok

So I looked over some stats with respect to the podcast and screencast work that I have been doing.

Why podcast and screencast?

Podcast – The driving factor for the podcast was the understanding that a lot of command information was coming in across various channels. Newsletters, email, more email, announcements, the internal portal, and did I mention email? Staying informed meant checking a lot of places. The bulk of the information was unclassified in nature and could be aggregated in one location (the podcast). So why a podcast specifically? The second part of the answer was a matter of time. Even if I knew where to look, how much time did I have to read all of that information? Once at work, time is usually limited, so in my quest for free time I realized that my 1.4-hour commute was time I might consider sharing. As it is, I listen to a couple of stations, but the weather, news, and market updates are quick, so I ended up listening to podcasts on technology and science. The point is that I gladly listened to more information because driving time was something I could and would easily share. Thinking this could be true for others (the average commute in DC, per an NPR news article, is about 45 minutes each way), I wanted to see if people would get past the small technology barrier of moving the information from the network onto a device they could listen to in the car on the way to work or heading home. If we could aggregate the information for employees and make it easy to access and listen to on time they already have, they might choose to do so.

Screencast – The primary driver for the screencast was reusability. If a question or procedure could be demonstrated once and shared, then people could use it to learn new skills, or be reminded of how to do something if they forget. I started to think of it as self-help that people could get to before heading to the actual help desk. Part of the reasoning was to reduce the number of classroom demonstrations I was doing, but also to free my time to make new content on plenty of other worthwhile topics and demonstrations. In addition, people could get it on demand, during their lunch break, when they want to sit back in their chair and watch a “how to” video on “searching SharePoint” or one of several other topics. I like to do this at home: watch a YouTube video on how to derive equations of motion while eating fried chicken. You get the point.

As for the results – just the numbers.

Over the period of March to October:

I created roughly 21 podcasts with approximately 3445 downloads, and

I created roughly 45 screencasts, with a total of 3972 views.

On the surface, it is not apparent that I am getting the results I was looking for, so I began to speculate about what some of the factors might be. This speculation was driven by a recent question about whether or not to continue creating them.

I have done a weekly podcast since about May of this year, and to date, across all the locations where I made it available, I think roughly 3200 downloads have been recorded. A few folks thank me from time to time for an article or two, but for the most part, those downloads are the only numbers I can get.

I have been asking for more ideas and desired stories in the emails I send out with the weekly contents, and to date only one person has responded with a suggestion.

So what do I think were some of the challenges?

Marketing – When I asked people if they knew about the podcast, those who were not on the weekly email list did not. So I am not sure the emails were being forwarded beyond the immediate list of recipients. I did not do any other marketing of my own, but in retrospect I could have made fliers and discussed the merits of the podcast and how to use it effectively.

Accessibility – I think having to host it on a network that required a user login and password was a hurdle, because many people just don’t want to create an account for what they see as a single benefit. Too many passwords already, and I can relate. A recommendation on this would be to grow our NIPRNET presence to allow for one login that grants access to email and a few key services, one of which could be the aggregated weekly podcast.

Re-posted – I was asked to post it on a different network, and as soon as I did that, more people viewed it on the new network, but it totally defeated the premise for putting it on the original network in the first place.

Consistency – I think I lost some followers when I did not post for a week here or there because I was on leave or otherwise unable to. This may also have been a factor.

Content – As much as I asked for ideas, I received only one in the six months I was making the podcasts. So the content was all original in terms of what I shared, discussed, or posted. Most of it was material that employees would otherwise get in email and across disparate mechanisms, but aggregating it in the podcast seemed like a good idea.

Timing – I am not sure that our workforce today is as active in the media environment as we could be, or in my humble opinion, should be. There is also no organizational drive to move in that direction, so exploring alternative sharing mechanisms is left to personal initiative or interest. Put another way, it is my belief that not many in our workforce use their smartphones to download and sync podcasts they can listen to while at home. If we made this easier, it would help demonstrate the value. I believe that over time, as more people get used to using the technology for information on demand, this will change – but we’re just not there yet.

Now, all that being said, the question posed to me was – what kind of followership did I build up, and should this production effort be sustained?

I am afraid that I cannot answer that at this time; there is too little information to make a decision. I think the next question should be: do we market this from a leadership position and present it as one way to aggregate information? The other mechanisms could stay in place, but each item they carry would get a unique identifier so that someone who chooses to listen to the podcast could sort those items into a folder automatically, rather than having to read or listen to them more than once.
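To make the unique-identifier idea concrete, here is a minimal sketch of how tagged items could be auto-foldered on the email side. This is only my illustration: the imaplib approach, the server name, the subject tag, and the folder name are all assumptions for the example, not part of any existing setup.

```python
import imaplib

# Placeholder values for illustration only.
IMAP_SERVER = "mail.example.org"
TAG = "[IN-WEEKLY-PODCAST]"        # unique identifier added by the originating channel
DEST_FOLDER = "Already-Podcasted"  # must already exist in the mailbox

def file_podcasted_items(user: str, password: str) -> None:
    """Move inbox messages whose subject carries the podcast tag out of the workflow."""
    conn = imaplib.IMAP4_SSL(IMAP_SERVER)
    conn.login(user, password)
    conn.select("INBOX")

    # Find messages that duplicate what the weekly podcast already covered.
    status, data = conn.search(None, "SUBJECT", f'"{TAG}"')
    if status == "OK" and data[0]:
        for num in data[0].split():
            conn.copy(num, DEST_FOLDER)              # file a copy in the side folder
            conn.store(num, "+FLAGS", "\\Deleted")   # mark the inbox copy for removal
        conn.expunge()

    conn.logout()
```

A listener would run something like this (or an equivalent server-side rule) so that anything already covered by the podcast lands in a folder they can skim later instead of cluttering their main input stream.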

As for content, the challenge that remains is getting people in those various production channels to modify what they do only slightly so that they share what they are already producing, while minimizing redundancy in the information.

My recommendations:

With workforce input, develop a clear plan for what kind of content you want to aggregate (an added benefit is being able to advertise this cooperatively developed product).

Host the content on the NIPRNET behind the same login as email to remove the need for a separate login, and find ways to make syncing the content as easy as possible for both phones and desktop or laptop computers.

After aggregating it, tag the original source locations and products in a way that lets people who choose to listen to the podcast avoid having the same information come again through the original channel – or, if it does, have it auto-foldered into a location outside the workflow (this is an effort to reduce duplication, along the lines of the filtering sketch above).

Revisit the content discussion on a quarterly basis, and make sure there is a feedback and input mechanism co-located with the download (for example, linking to the podcast from a blog).

Try to move toward more of an interview-style podcast, not just a news podcast of someone reading the headlines. Different voices, debated views, and so on will develop more interest and followership. In addition, a section that reviews pertinent comments from the commenting mechanism will let users see how their input can affect the process and that their voice is heard.

I enjoyed the opportunity to run this experiment. I thank Jack Gumtow (CIO) for the opportunity to do this and to learn from it. I hope sharing some of this information helps others, and I am open to questions or comments.

I would happily help anyone interested in starting or maintaining an effort similar to this one.

Cheers,

Lance Strzok

The Future of Writing at Work

Posted in Telecommute, Web2.0 Productivity, work on March 30, 2011 by Lance Strzok

As more and more people write and profess their opinions across more and more information-sharing platforms, one thing remains true across all of them – content is king. Yep, what you say, its validity, conciseness, and tone are all part of good content that will keep people coming back. In a world where people value every second of their time, if you cannot provide that content consistently, then you can make it look pretty all you want and tweak formats all day, but that won’t bring them back to read you again.

I suspect the future of writing in the workplace will shift from Word and OpenOffice to open platforms where the words you write are what matter most, and computers and editors will apply style, images, and links to related content to enrich the piece as a workflow step following its initial creation.

This makes transporting and transforming the words from one product into another much easier, and style can be changed quickly and easily for past and future content. It is also easier to reuse the content in other products.

Think about it: how many times does the Word file you spend half an hour tweaking just so it looks right end up in several places and on different platforms looking completely different? My own experience has led me to writing in blogs, because it is just so easy to do. The files are small, transportable, accessible, and open in a simple browser (no special or expensive software), and some platforms have built-in spell checking as I write, not as a separate function. I can write from my desktop, laptop, phone, or TV, and the content can be styled in any way I or someone else pleases. Not to mention that people can index it and discover it, as well as comment on it and share it with others quickly and easily. It also fits with my hope of where things will go in the future with regard to IT and work. Simple really: all I should need is an internet connection and a browser. Which is also why my recent work has focused on the browser wars and how the browsers are doing against one another.

So to wrap things up, spend that extra half hour working on the content, collaborating with colleagues, checking your sources, and making your inner author voice shine through, and give a blog a chance – you might just come to like it for the same reasons I do.

-Lance.

Why Podcast?

Posted in Web2.0 Productivity on March 10, 2011 by Lance Strzok

So the question of “why podcast?” has come up, and I thought I would share some of the reasoning behind the decision to give the podcast medium a run.

A little background
I recall hearing that the average commute time in DC was over half an hour. I commute about an hour and fifteen minutes each way. So I usually check the NPR news headlines and listen to about a half hour of 103.5 to catch the main stories, weather, traffic, etc.

So how do I use/engage my brain for the rest of the commute? I turned to podcasts. And in doing so, I found many good sources of relevant information and news I could use to maintain situational awareness with regard to issues I am involved with at work. I am aware of the latest developments in the areas I am most concerned with, and I hear varying viewpoints on those issues from several sources over the course of a few days. I have subscribed to individual podcasts, and I use a podcast streaming service called Stitcher for some of the broader interest areas and what others in my field are sharing and talking about.

Thoughts
As I started to think more about it, I realized that if I compared the costs of my minutes, the minutes in the commute were pretty cheap. Cost here means what I give up by listening to a podcast on my way to work versus the cost of the time I would spend reading all of that information while at work. Or, put another way, what can I not do while I am locating and reading these articles or bits of information?

It dawned on me that most people are interested in the information that our communications committees put out across several formats and publications, including a newsletter, emails, banners, signs, the internal web page, etc. But when I thought about it, what I wanted was one source, and I wanted to move that source to less expensive minutes; otherwise I was not likely to digest all of those different resources and would miss out on useful information.

Motivation
So there it was. I wanted to know those things, but they were spread out and using expensive work minutes instead of cheap commuting minutes. (Commuter minutes, gym minutes, elevator minutes, lunch minutes, etc.)

That is the motivation for consolidating those bits of information into a podcast and allowing the workforce to access the information from home, download the mp3 files to a smartphone or mp3 player, and listen to issues that might otherwise go unnoticed.

If you are interested in the mechanics of how I am creating the podcast, the previous blog entry covers that pretty well, and I may add another when I get to the point where I am interviewing instead of just reading the news.
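For the distribution side specifically, the heart of it is just an RSS feed whose items carry mp3 enclosures that podcast clients know how to download. Here is a minimal sketch of generating such a feed; the titles, URLs, and file sizes are made-up placeholders rather than the actual feed I publish.

```python
import xml.etree.ElementTree as ET

# All titles, URLs, and file details below are placeholders for illustration.
episodes = [
    {"title": "Weekly Update - Episode 1",
     "url": "https://intranet.example.mil/podcasts/episode1.mp3",
     "length": "12345678"},   # file size in bytes
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Weekly Command Update (example)"
ET.SubElement(channel, "link").text = "https://intranet.example.mil/podcasts/"
ET.SubElement(channel, "description").text = "Aggregated weekly news for the workforce."

for ep in episodes:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = ep["title"]
    # The <enclosure> element is what podcast clients use to find and sync the mp3.
    ET.SubElement(item, "enclosure", url=ep["url"], length=ep["length"], type="audio/mpeg")

ET.ElementTree(rss).write("podcast.xml", encoding="utf-8", xml_declaration=True)
```

Point a podcast client at the resulting podcast.xml, and each new episode added to the list shows up for download automatically.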

Question for you – where are your cheapest minutes? I don’t think my list is big enough, and I would like to know when you listen or might listen to a podcast.

If you have any comments or questions, please leave them below; I will respond to them. Thanks for reading.

Twitter Use for Business

Posted in Web2.0 Productivity on February 10, 2011 by Lance Strzok

I looked at a few government agencies that are using Twitter, searching for use cases that I might recommend for our organization, each with a specific purpose.

Here is the list of activities I observed on their Twitter pages:

– Links to photos related to employees and their activities at work and in the community
– Links to articles that involve their employees or business activities
– Announcements
– Links to charity events
– Links to Podcasts and Videos
– Weather/Emergency alerts (Open/closed/late arrival/early departure/Telework etc…)
– Visitors
– Safety notices
– Travel Advisories
– Uniform Changes
– Contests
– Updates on projects or activities of the business
– Links to reference or other resources
– Job announcements
– Seeking skills or equipment announcements
– Events that influence Employees or Business Partners or Customers
– Links to business mentions in the news or other media

Of the items on the list above, I am recommending the following for my organization:

– Job announcements
– Weather/Emergency announcements
– Travel Alerts
– Events that influence Employees or Business Partners or Customers
– Links to our activities in the local community

Getting Started

– No associated costs
– Can start immediately
– Link to the Twitter feed from the Organizational Homepage on the Internet
– Start with an initial post that links to an article that describes the intended use of the feed and how employees, customers, and partners might want to follow it.

As for who should have the ability to post to the business feed, I would recommend that three people fill that role, and that requests for posts be sent to those individuals through the existing chain of command for the person submitting the request, with specific words in the email subject line that identify the message as a post request (e.g., “Twitter Feed update request”) so it can be seen and acted upon immediately by one of the posters. Likely candidates for those three people are those who already handle business announcements.
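For what the posting step itself might look like once a request is approved, here is a minimal sketch. It assumes the tweepy Python library and v1.1 API access, and the credentials and announcement text are placeholders; this is an illustration, not a prescribed implementation.

```python
import tweepy

# Placeholder credentials; real values would come from the organization's
# registered Twitter application.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

def post_announcement(text: str) -> None:
    """Post an approved announcement to the organization's feed."""
    # Twitter's limit is 140 characters at the time of writing.
    api.update_status(text[:140])

post_announcement("Early departure authorized today due to weather. Details on the intranet.")
```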

If you have other ideas, please add them to the comments below, and I’ll try to add them to the content in this article in the near future.

We can do better.

Posted in Web2.0 Productivity on June 8, 2010 by Lance Strzok

No matter who you put in the DNI office, they have to be willing to address the changes that have to take place within agencies and among analysts. The DNI has to be empowered to directly impact the budgets of the agencies he is trying to get to work together. Otherwise you can fire all the people you put in that seat and it won’t make a bit of difference. The public has to demand more from our leadership and from our intelligence agencies.

To say I am disappointed would be an understatement. I am frankly disturbed by the currently demonstrated lack of desire (not ability) of government agencies to truly collaborate on articles and issues regarding our national interests.

Family, friends, and fellow taxpayers deserve better from the Intelligence Community (IC) and the government agencies that are sworn to guide and protect our great nation. Taxpayers pay taxes every year with the idea that the money they give to the government will be spent on programs that have well-defined requirements, little waste, and a realistic scope and timeline.

Watching the news, we see glimpses of failures to recognize key information that was available across the various government agencies or agency databases and that might have allowed a given atrocity to be avoided. This is followed by finger-pointing and general denial of responsibility when something happens. I see databases at individual agencies that are created using government funds and then treated as if they somehow belong to that agency. Rather than storing that information centrally, where it can be searched, mashed, and relationships can be formed, the data sit on servers within disparate agencies in the hope that access to those data can be logged and metrics made on how useful the database or information is, so that a business case for its continued use can be justified. This, of course, inherently reduces its usefulness and timeliness, and the ability of computer systems (which don’t sleep) to find relationships in mountains of data. Do agencies own their databases? Or did taxpayer dollars pay for them with the idea that they would be shared and used by all in an effort to protect our nation?

So put those mountains of data (the databases) in a central location where computers can apply artificial intelligence and pattern recognition to all of the data simultaneously and alert analysts to relationships that are found or that may exist, with flags that denote when a given analyst needs to be granted access in order to see the details of a relationship.

By moving the data into a shared environment, we can allow computers to find relationships and share those relationships and relevancy with the analysts that are interested in that information. We won’t have to rely on humans to detect it, and share it. You see, the sharing part of this is where I believe we are coming up short.

So why is it that sharing is so difficult within these communities? Well there are several reasons.

Policies – that state which organizations can share what with others, and also define the protection of databases and information.

History – of keeping secrets, in the case of the intelligence community. A long history of doing our best to keep secrets and protect databases of information under terms like “national security” or “need to know”. These ideas served us well, but are they actually working? I would argue that they are not as effective as we may imagine, and that we may want to start to outpace our adversaries rather than spend so much time and effort trying to protect every bit of information so zealously. That is an entire debate that deserves another post altogether.

Culture – where the people who hold information seem to have more value and bring more value to an organization. Knowledge is power, and your pay is based on what you know and what you bring to the table, rather than on what you know and how you share it in ways that others can benefit from. This continues to be a problem, fueled by a pay-for-performance system that (if done incorrectly) could lead to ever tighter lips when it comes to sharing.

In short, we will have to address the policies, the historical versus current sharing ideology, and the culture of perceived value in knowledge sharing versus knowledge hoarding, and the value that either idea brings to an organization.

Once we have a culture of appropriate sharing, shared situational awareness on items of interest within a community of interest, and technology supporting that shared awareness across unified data stores, then we may see a more realistic environment for stopping future attempts at causing the US harm.

Another area ripe for improvement: where do we write about the things we know and understand?

Currently, each agency has its own process for vetting and releasing reports or products that get some sort of seal of approval (which just means a product completed a vetting process that can be as shallow as one person deep). Each agency also has a production group, or division of folks, that moves these products through a process and then publishes them to some server (which, again, may or may not be searchable or indexed). By the time the information has gone through the process, it may be a little old or have been overcome by events. This group and process are intended to bring a sense of authority to the documents, and once the document or information has the command seal added, it is available to the rest of the consumers to apply to their problem set. These reports are now something that can be referenced, and in some cases only these documents can be used or referenced for making acquisition decisions. This is another area where we need to take a good look at policy and see if there is room for a joint product, not just agency products that can get a seal of approval.

The idea that the smartest people on any given topic exist in one building is just not realistic. Acquisition communities should be able to find joint products that reflect what communities of interest have to say about the topic at hand. They should not have to be bound to one agency’s opinion, but able to use the opinion of the members across the community who work that issue. Simply put, if I offered you a report by one agency where four people looked it over and contributed to it, and one that an entire community worked on collaboratively to create, which one would you choose?

So the question always comes up about the vetting process for these collaborative documents. What rigor is there? What process? How can the consumer know that a given product has any more or fewer errors than a product created by a single agency and put through its process? Put another way, how can we know that a product that had 15 contributors from across the community, and was read by many more as it was being created, is any more accurate for making decisions than one created by four people at a single agency and put through that agency’s process?

Bottom line: we need to demand that our Intelligence Community act more like a community than a group of competing agencies, and empower those who are trying to change the culture of collaboration and analysis from agency-specific to that of one IC supporting decision makers, not 16 agencies trying to tell their own version of the story. Huge change has to take place, and it won’t happen unless the public demands it. Otherwise, no matter who you put in the DNI’s chair, it won’t matter, because the agencies can just wait him or her out and go on with business as usual. So empower the DNI to directly impact budgets, and force documentation of actual collaboration and proven steps of change with embedded liaisons. Make intelligence production occur in a collaborative space that is open to all of the people who work an issue and have the appropriate credentials to work with that information at the lowest level possible. Take production down to the analyst level, and have it created and published in an open, accessible, collaborative forum. Build communities of interest, and foster and reward superior contributions and products that have the touch of many hands and minds.

These are real, achievable steps we can take toward a more focused and efficient intelligence apparatus.

Constructive comments always appreciated.

Why a joint publishing environment?

Posted in Web2.0 Productivity on May 9, 2010 by Lance Strzok

The urgency on this issue comes from the fact that every day that passes, another “collaboration site” gets created within our enterprise (the government), which serves to divide the collaborators who work specific topics.

This is bad because, for fast, accurate, and rich content, we want the greatest number of collaborators applying their considerable depth of knowledge to fewer products and knowledge bases, which enables decision makers (political or tactical) to make the most informed decisions as quickly as possible.

Example situation:

Twenty people across the enterprise (DOD, IC, and other governmental bodies with access to the network) have expertise on a subject, but are not necessarily geographically located near one another.

Twenty people (collaborators) across five companies (or agencies) typically write on a given subject or topic: four people at each of the five companies.

Each of the five companies creates its own collaborative environment for its local employees, with some limited ability to share with external collaborators. This could be a MediaWiki site, a SharePoint site, Lotus Notes, or any similar collaboration environment (see any list of collaborative software).

Each of the four members at each of the five companies uses their company’s collaborative environment to collaborate on their individual product on the same topic.

Five “collaborative products” are created, with four primary contributors to each product.

A decision maker (political or tactical) may receive all five products on which to make a decision, and the burden of analysis is put on the decision maker (with less expertise on a topic) rather than on the community of practice where that expertise exists.

What we want is to put one product in front of a decision maker that represents the collaborative efforts of the community of practice on that topic (all twenty people), and allow them to make decisions based on that information: the richness and depth of the community’s knowledge applied to one document, where the facts are agreed upon (or the differences highlighted when they are not), available both as a product and as a living knowledge resource.

Although there are several publishing and knowledge management products in use across the services and agencies, many of these systems are not shared, nor do they allow for effective collaboration outside of their component. The data and products, as well as the items still in production, are not discoverable by the other components, and the costs to maintain each of these systems are considerable. Even if each component wanted to share its databases and information, it would be technically challenging given the varied systems in use.

Intellipublia is authoring and knowledge management software that enables joint production of products and knowledge management on topics across the entire enterprise (where the enterprise contains all of the agencies, commands, and DOD components), all of which can use Intellipublia to create component-specific products or collaborate on joint products. Additionally, members at any component can discover, contribute to, or comment on any product that is in draft or completed.

Intellipublia takes the worldwide scalability of Wikipedia (the MediaWiki software) and has been modified to work as a production system with many of the features expected of a modern production environment.

Intellipublia is operational and is still accepting requirements for improvement.

The most notable features are:
* Web based and accessible from any computer on the network
* Scalable to millions of users
* Changes are tracked, attributable, and commented
* Notification mechanisms for various aspects of user activities
* Produce validated XML for registration with the Library of National Intelligence IAW ICD 501
* Static html output for local server usage
* Searchable, linkable, taggable, extensible, and has RSS output
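As a rough illustration of the validated-XML output listed above (my own sketch rather than Intellipublia code, with made-up file and schema names), a pre-registration check could look something like this:

```python
from lxml import etree

def validate_export(xml_path: str, schema_path: str) -> bool:
    """Return True if an exported product validates against the registration schema."""
    schema = etree.XMLSchema(etree.parse(schema_path))
    doc = etree.parse(xml_path)
    if schema.validate(doc):
        return True
    # Surface the validation errors so the author can fix the draft before registering it.
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
    return False

# Hypothetical file names for illustration only.
validate_export("product_1234.xml", "icd501_product.xsd")
```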

In conclusion, I wish to convey that within an enterprise as large as ours, where knowledge on any topic exists in more than one component, it is imperative that we drive collaborators to fewer collaborative spaces in order to maximize collaborative effects and achieve decision superiority while reducing duplication in both products and knowledge databases. This means making a joint decision on which environments we are going to use, followed by how we will integrate them, regardless of the environment or software tools we settle on.

As always, thank you for reading, and I would appreciate your candid and constructive feedback.

How do we move away from email?

Posted in Web2.0 Productivity on March 31, 2010 by Lance Strzok

I started this thread as a response to Andrew McAfee’s blog: http://andrewmcafee.org/2009/10/how-i-learned-to-stop-worrying-and-love-email/

There I shared the following thoughts on moving away from email:

I am living the truce with email, but I do think that email will act like a ball and chain on moving toward what could be, and what I think we agree will eventually be.

I think the mindset for email should be that of a private communications path, replaced when possible with private chat, and with private messaging within chat for asynchronous discussion.

I think one thing we could do to move willing organizations toward limiting email, and toward other tools, would be to disable attachments within email, replacing them with links to documents in a document management system that is optimized for the media being linked to (be it images, documents, video, etc.). There are added side benefits to this decision: a reduction in the number of copies of the same document and in the associated confusion over updates, versions, and changes. There are other benefits, but I won’t go on about that.

A follow-on move may be to declare that email will begin to be indexed and made searchable and discoverable unless it is flagged as personal and private, encouraging employees to use private chat and chat messages for most of the personal exchanges that take place. This would enable us to start to use the email text strings (now without actual documents embedded). Maybe then email might not be “where knowledge goes to die,” as you so appropriately put it. These emails (now text files) can be indexed along with chat room logs (non-private) and other text-based tools as well. One additional effect is that it may force a lot of people to review what they have and delete what is no longer worthwhile, thereby reducing the total storage allocated to email from 20 years ago. (Can you believe some people are proud of that fact?)
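To make the indexing idea concrete, here is a minimal sketch (my own illustration, not any system in use) of building a simple inverted index over emails and chat logs that have already been exported as plain-text files:

```python
import os
import re
from collections import defaultdict

def build_index(root_dir: str) -> dict:
    """Map each word to the set of text files (emails, chat logs) containing it."""
    index = defaultdict(set)
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            if not name.endswith(".txt"):
                continue  # only index exported plain-text items
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for word in re.findall(r"[a-z0-9']+", f.read().lower()):
                    index[word].add(path)
    return index

# Hypothetical directory of exported, non-private emails and chat logs.
idx = build_index("exported_text")
print(sorted(idx.get("sharepoint", set())))   # files that mention "sharepoint"
```

A real deployment would use a proper search platform, but even this much shows how email text, once freed of embedded attachments, becomes searchable alongside everything else.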

The other uses of email would eventually need to be replaced with arguably better tools as well. Take for example the task list function, or the integrated calendar, meeting makers and the rest of the functionality we have come to love. Until we can point to a better solution in those areas as well, this is going to continue to be an uphill battle.

Then there are the customers and clients: we can change our internal methods and processes, but what about how we interact with our customers?

All good questions, but I just realized I started this long ago, and forgot to publish and finish it (busy).

Lastly, I will say that if you or your customers prefer Firefox as your browser, then linking your documents to SharePoint is not the direction to go. They can only be opened for editing in Internet Explorer.