Archive for process

The Future of Writing at Work

Posted in Telecommute, Web2.0 Productivity, work on March 30, 2011 by Lance Strzok

As more and more people write and profess their opinions across more and more platforms for information sharing, one thing remains true across all of them – content is king. Yep, what you say, its validity, conciseness, and tone are all part of the good content that keeps people coming back. In a world where people value every second of their time, if you cannot provide that content consistently, then you can make it look pretty all you want and tweak formats all day, but that won't bring them back to read you again.

I suspect the future of writing in the workplace will shift from Word and OpenOffice to open platforms where the words you write are what matter most, and computers and editors will apply style, images, and links to related content to enrich it as a workflow step following initial creation.

This makes transporting and transforming the words from one product into another much easier, and style can be changed quickly and easily for past and future content. It also becomes easier to reuse the words in other products.

Think about it: how many times does the Word file you spent half an hour tweaking just so it looks right end up in several places and on different platforms looking completely different? My own experience with this has led me to write in blogs, because it is just so easy to do. The files are small, transportable, accessible, and open with a simple browser (no special or expensive software), and some platforms have built-in spell checking as I write – not as a separate function. I can write from my desktop, laptop, phone, or TV, and the content can be styled any way I or someone else pleases. Not to mention that people can index and discover it, as well as comment on it and share it with others quickly and easily. It also fits with my hope for where IT and work are headed. Simple really: all I should need is an internet connection and a browser. That is also why my recent work has focused on the browser wars and how the browsers are doing against one another.

So to wrap things up, spend that extra half hour working on the content, collaborating with colleagues, checking your sources, and making your inner author voice shine through, and give a blog a chance – you might just come to like it for the same reasons I do.

-Lance.

My Podcast Process and Thoughts

Posted in Web2.0 Productivity on March 6, 2011 by Lance Strzok

So here are some lessons learned from my recent podcast.

Content gathering
– Gather content from various sources, and be sure to capture the source information for each piece
* Official Emails
* Company portal highlights
* Interviews with people
* Newsletters (internal and external)
* Questions
* Ask for content from Social Media sources
* Send email request for input with links to pages
* Make some phone calls to personally invite someone to interview with you
* RSS feeds for items that matter to everyone (see the sketch after this list)
– I put all of the content into a shownotes page on a wiki for the production end of things and invite (encourage) others to edit there; otherwise I add the content myself from email or whatever source people send it to me from
– After writing it all down and smoothing it over so it reads well aloud, I am ready to record
– Create a section at the beginning that mentions the episode's contents and the date, so that listeners can choose to listen to or skip that particular podcast (thank you, readers, for that feedback)
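
For the RSS item in the list above, here is a minimal sketch of pulling feed entries into shownotes text. It assumes the third-party Python feedparser library and a placeholder feed URL, so treat it as an illustration of the idea rather than part of my actual workflow.

import feedparser  # third-party: pip install feedparser

# Placeholder feed URL - swap in whatever feed matters to your audience.
FEED_URL = "https://example.com/feed"

def shownote_lines(url=FEED_URL, limit=5):
    """Return the newest feed items as simple 'title - link' lines for the shownotes."""
    feed = feedparser.parse(url)
    return ["%s - %s" % (entry.title, entry.link) for entry in feed.entries[:limit]]

if __name__ == "__main__":
    for line in shownote_lines():
        print(line)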

Recording
– I started recording episodes as MP3 files with a Zoom H1 handheld recorder, but as a lesson learned I now save them as WAV files, since the Levelator tool provided by the Conversations Network takes WAV as input later in the process and I want to reduce the number of conversions (which only add noise, as evidenced in the first podcast). I use a 48 kHz sample rate at 16 bits because that is the best setting that can be run through the Levelator or converted to MP3. I also turn on the auto-level setting on the back of the recorder, as well as the low cut.
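
If a clip arrives in some other format (a phone interview, say), one way to get it into the same 48 kHz, 16-bit WAV is a small ffmpeg call. This is only a sketch of how that could be scripted; ffmpeg, the Python wrapper, and the filenames are assumptions here, not part of the Zoom H1 workflow above.

import subprocess

def to_48k_16bit_wav(src, dst):
    """Convert any audio file ffmpeg understands to 48 kHz, 16-bit PCM WAV."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-ar", "48000", "-sample_fmt", "s16", dst],
        check=True,
    )

# Example with placeholder filenames:
# to_48k_16bit_wav("phone_interview.mp3", "phone_interview.wav")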

During Recording
– Have a glass of something you like to drink nearby
– I don't mind making a long recording; just make sure that if you make a mistake while recording, you pause, regain composure, pick the spot you want to redo, and, after a noticeably long pause, start the section over. That way, when editing, you will clearly see a long pause that marks the location of the edit. (Thanks for that tip to Robert and Tiffany Rapplean from their podcast, Intellectual Icebergs.)
– Find a quiet place, and give some thought to the room you are in with regard to how sound waves will arrive at the microphone and what materials will absorb sound

Save Raw Recording
– Save the raw recording before doing anything else and store it in a folder

Levelator
– You can use the Levelator tool to even out the different levels in the sound file and bring it to a consistent output level, so that episodes sound generally equal from show to show
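
The Levelator itself is drag-and-drop, so there is nothing to script, but for anyone who wants a rough command-line stand-in, ffmpeg's loudnorm filter does a similar evening-out of loudness. This is a sketch of an alternative, with assumed filenames and target values, not a description of how the Levelator works.

import subprocess

def even_out_levels(src, dst):
    """Rough Levelator stand-in: normalize loudness with ffmpeg's loudnorm filter."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-af", "loudnorm=I=-16:TP=-1.5:LRA=11", dst],
        check=True,
    )

# even_out_levels("episode_raw.wav", "episode_leveled.wav")
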
Editing
– I use Audacity to edit the WAV files (again, I switched to WAV files to reduce the number of conversions that degrade sound quality)
– Edit out the bad sections and shorten up long pauses
– Add intro and outro music or words as desired
– Insert commercials as desired (I don’t do this – yet)
– There are other resources within Audacity to do more editing
– Save this file as an edited WAV file so that if you have to add sections (insert additional entries) later, that will be easy


Convert
– Now, using Audacity, export the edited file as an MP3 for upload to the server
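
I do this export from Audacity's own dialog, but if you ever need to batch several files, the same WAV-to-MP3 step can be scripted with ffmpeg and the LAME encoder. The quality setting and filenames below are assumptions for illustration, not settings from the show.

import subprocess

def wav_to_mp3(src, dst):
    """Encode an edited WAV as MP3 using the LAME encoder at a high VBR quality."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-codec:a", "libmp3lame", "-q:a", "2", dst],
        check=True,
    )

# wav_to_mp3("episode_edited.wav", "episode.mp3")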

Last Listen
– Give the show a last listen while following along with the shownotes
– Make sure all the content that is in the shownotes is represented (I forgot a section in my first podcast)
– Make time hacks in the shownotes so that if people want to skip to a section, they can do so

Upload Link and Market
– Upload the file to the host server
– Copy the shownotes and time hacks from the wiki page where they were created into a blog post and an email for linking and feedback
– Make links wherever possible in the shownotes to sources and important nouns
– Link to the podcast, shownotes, and feedback from various locations
– Include a link to subscribe to the podcast or updates when possible
– Post the blog, and verify that all the links work – if not, fix them
– Let your users know that there is a new podcast available, with a link to automatically download the MP3 file

– Email any podcast distro list you may have, and include the time hacks and topics for the show in the email

Follow up on Feedback
– Make sure you stay engaged and follow up on feedback that comes back to you on the blog

If you have additional thoughts on improving this process, please let me know; I aim to make it better as I go, and thanks in advance for your thoughts.

Sharing my Screencasting Process

Posted in Web2.0 Productivity, work on October 4, 2010 by Lance Strzok

1) Record the screencast with CamStudio version 2.0 and the “CamStudio lossless codec”, both of which can be downloaded at the link provided.

2) Save as an AVI file from within CamStudio.

3) After capturing the screen session, open the file with Any Video Converter and save it as an MP4 file.

4) Open the file with Avidemux for editing and save as MP4.

(The Any Video Converter and Avidemux steps 3-4 can now be replaced with Freemake Video Converter, which can do the conversion and is a nice editor.)

In a little more detail (thanks, Karen):

On a computer, open the software tool called CamStudio. This tool captures the screen at an estimated thirty frames per second and also captures audio.

Open the target software that is going to be demonstrated, such as Microsoft Excel 2007 or Microsoft Word 2007.

In CamStudio, configure the settings for optimal capture of the software activities – in this particular case, the steps for using Microsoft Excel 2007 or Microsoft Word 2007.

After the settings are configured, start the recording and begin the software demonstration. (As a side note, if a mistake is made, do not stop recording. Pause, take a deep breath, gather your thoughts, and start again at a point just before the mistake.)

When the demonstration is complete, press the stop button in CamStudio to stop the recording.

Save the file as is, as an AVI file.

Convert the AVI file to a manageable size by using another software tool, named “Any Video Converter”, to convert it into an MP4 file. This conversion can reduce the file size by a factor of ten to twenty.
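
If you would rather script this step than click through a converter, ffmpeg can do a comparable AVI-to-MP4 re-encode. This is only a sketch of an alternative to the tools named above; the codec choices and filenames are assumptions.

import subprocess

def avi_to_mp4(src, dst):
    """Re-encode a screen-capture AVI as H.264/AAC MP4 to shrink the file."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", "libx264", "-crf", "23", "-preset", "medium",
         "-c:a", "aac", dst],
        check=True,
    )

# avi_to_mp4("capture.avi", "capture.mp4")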

After the conversion, open the MP4 file for editing with another software tool named Avidemux.

Edit out any mistakes made in the recording and save the file as an MP4 file.

Close the MP4 file and open the canned introduction recording.

Append the recent software demonstration recording to the introduction recording.

Append the canned closing to the software demonstration recording.

Save the now-merged three parts of the recording (introduction, demonstration, closing) as one MP4 file.
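
As a command-line alternative to appending the clips in an editor, ffmpeg's concat demuxer can join the three MP4s. This sketch assumes all three clips were encoded with identical codecs and settings (otherwise a re-encode is needed), and the filenames are placeholders.

import os
import subprocess
import tempfile

def concat_mp4(parts, dst):
    """Join MP4 clips without re-encoding; all clips must share codecs and settings."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as listing:
        for part in parts:
            listing.write("file '%s'\n" % os.path.abspath(part))
        list_path = listing.name
    try:
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", list_path, "-c", "copy", dst],
            check=True,
        )
    finally:
        os.remove(list_path)

# concat_mp4(["intro.mp4", "demo.mp4", "closing.mp4"], "lesson.mp4")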

Distribute the learning video to the appropriate site for others to view.

You must know:
How to use and configure CamStudio, “Any Video Converter”, and Avidemux.
Enough about audio and video codecs to properly configure the three software tools mentioned above.
The software activity or activities that are going to be demonstrated.

—-

If you want the exact settings I use, look here: https://gstrzok.wordpress.com/2011/04/09/my-screencasti…s-and-software/

Cheers.

We can do better.

Posted in Web2.0 Productivity on June 8, 2010 by Lance Strzok

No matter who you put in the DNI office, they have to be willing to address the changes that have to take place within agencies and among analysts. The DNI has to be empowered to directly impact the budgets of the agencies he is trying to get to work together. Otherwise you can fire all the people you put in that seat and it won't make a bit of difference. The public has to demand more from our leadership and from our intelligence agencies.

To say I am disappointed would be an understatement. I am frankly disturbed by the currently demonstrated lack of desire (not ability) of government agencies to truly collaborate on articles and issues regarding our national interests.

Family, friends, and fellow taxpayers deserve better from the Intelligence Community (IC) and the government agencies that are sworn to guide and protect our great nation. Taxpayers pay taxes every year with the idea that the money they give to the government will be spent on programs that have well-defined requirements, little waste, and realistic scope and timelines.

Watching the news, we see glimpses of failures to recognize key information that was available across the various government agencies or agency databases and that might have allowed a given atrocity to be avoided. This is followed by finger-pointing and a general denial of responsibility when something happens. I see databases at individual agencies that are created using government funds and then treated as if they somehow belong to that agency. Rather than storing that information centrally, where it can be searched, mashed, and mined for relationships, it sits on servers within disparate agencies in the hope that access to the data can be logged and metrics can be built on how useful that database or information is, so that a business case for its continued use can be justified. This, of course, inherently reduces its usefulness and timeliness, and the ability of computer systems (which don't sleep) to find relationships in mountains of data. Do agencies own their databases? Or did taxpayer dollars pay for them with the idea that they would be shared and used by all in an effort to protect our nation?

So put those mountains of data (the databases) in a central location where computers can apply artificial intelligence and pattern recognition to all of the data simultaneously and alert analysts to relationships that are found or that may exist, with flags that denote when a given analyst needs to be granted access in order to see the details of a relationship.

By moving the data into a shared environment, we can allow computers to find relationships and share those relationships, and their relevance, with the analysts who are interested in that information. We won't have to rely on humans to detect it and share it. You see, the sharing part is where I believe we are coming up short.

So why is it that sharing is so difficult within these communities? Well there are several reasons.

Policies – that state which organizations can share what with others, and also define the protection of databases and information.

History – of keeping secrets, in the case of the intelligence community. A long history of doing our best to keep secrets and protect databases of information under terms like “national security” or “need to know”. These ideas have served us well, but are they actually working? I would argue that they are not as effective as we may imagine, and that we may want to start to outpace our adversaries rather than spend so much time and effort trying to protect every bit of information so zealously. That is an entire debate that deserves another post altogether.

Culture – where the people who hold information seem to have more value and bring more value to an organization. Knowledge is power, and your pay is based on what you know and what you bring to the table, rather than on what you know and how you share it in ways that others can benefit from. This continues to be a problem, fueled by a pay-for-performance system that (if done incorrectly) could lead to ever tighter lips when it comes to sharing.

In short, we will have to address the policies, the historical versus current sharing ideology, and the culture of perceived value in knowledge sharing versus knowledge hoarding and the value that either approach brings to an organization.

Once we have a culture of appropriate sharing, shared situational awareness on items of interest within a community of interest, and technology supporting that shared awareness across unified data stores, then we may see a more realistic environment for stopping future attempts at causing the US harm.

Another area ripe for improvement is where we write about the things we know and understand.

Currently, each agency has its own process for vetting and releasing reports or products that get some sort of seal of approval (which just means the product completed a vetting process that can be as shallow as one person deep). Each agency also has a production group, or division of folks, that moves these products through a process and then publishes them to some server (which, again, may or may not be searchable or indexed). By the time the information has gone through the process, it may be a little old or overcome by events. This group and process are intended to bring a sense of authority to the documents, and once the document or information has the command seal added, it is available to the rest of the consumers to apply to their problem set. These reports are now something that can be referenced, and in some cases only these documents can be used or referenced for making decisions with regard to acquisition. This is another area where we need to take a good look at policy and see if there is room for a joint product that can get a seal of approval, not just agency products.

The idea that the smartest people on any given topic exist in one building is just not realistic. Acquisition communities should be able to find joint products that reflect what communities of interest have to say about the topic at hand. They should not have to be bound to one agency's opinion, but able to use the opinions of members across the community who work that issue. Simply put, if I offered you a report by one agency that had 4 people look it over and contribute to it, and one that an entire community worked on collaboratively to create, which one would you choose?

So the question always comes up about the vetting process for these collaborative documents. What rigor is there? What process? How can the consumer know that a given product has any more or fewer errors than a product created by a single agency and put through its process? Put another way, how can we know that a product that had 15 contributors from across the community, and was read by many more as it was being created, is any more accurate for making decisions than one created by 4 people at a single agency and put through that agency's process?

Bottom line, we need to demand that our Intelligence Community act more like a community than a group of competing agencies, and empower those who are trying to change the culture of collaboration and analysis from agency-specific to that of one IC supporting decision makers, not 16 agencies each trying to tell their own version of the story. Huge change has to take place, and it won't happen unless the public demands it. Otherwise, no matter who you put in the DNI's chair, it won't matter, because the agencies can just wait him or her out and go on with business as usual. So empower the DNI to directly impact budgets, and require documentation of actual collaboration and proven steps of change, with embedded liaisons. Make intelligence production occur in a collaborative space that is open to all of the people who work that issue and have the appropriate credentials to work with that information at the lowest level possible. Take production down to the analyst level, and have it created and published in an open, accessible, collaborative forum. Build communities of interest, and foster and reward superior contributions and products that have the touch of many hands and minds.

These are real and achievable steps that we can take to move toward a more focused and efficient intelligence apparatus.

Constructive comments always appreciated.