Friday, September 28, 2012

OpenELIS Cote d'Ivoire Progress


I wanted to take a moment to update you on our progress on the priorities for this upcoming release of OpenELIS in Cote d’Ivoire.  For the OpenELIS 2.8 release, we are focused on finalizing functionality for the LNSP laboratory based on the validation testing and additional requirements we gathered during our trip in July.  We have also been hard at work on the following high-priority items:

  • Audit Trail (traceability) of all steps through the workflow, status changes, and values for orders, samples, patients, tests, results, and reports

More on the specifics of the audit trail is written up in several blog posts here: http://openelis.blogspot.com/search/label/audit-trail  (Your feedback to improve this feature is more than welcome!)

  • System stability improvements, including the elimination of the “it’s not your fault” issues, and an improved standard server build method that will drastically ease and improve the installation of OpenELIS
  • Dynamic selection of reflex test algorithms and tests within the algorithm
  • Non-conformity reporting and workflow improvements
  • Making calculated values display as text rather than as a data entry field
  • Additional analyzer interfaces
  • Updating test catalogs for LNSP and IPCI
  • Updating meta-data for lookup values for LNSP and IPCI
  • Overall usability improvements
  • Text revisions/corrections and bug fixes

We have decided to delay the OpenELIS 2.8 release in order to give us additional time to test the finalized features, and to align it better with our plan for training in late October/early November.  We plan to release 2.8 at the end of the day on Friday, October 26.

So as not to delay urgent improvements, though, we released OpenELIS 2.7.3 as an interim patch specifically for Retro-CI.  This release addressed an issue where CD3 results did not appear in biological validation after they were accepted by the lab technician.  I wrote a blog post about our troubleshooting of this issue and its resolution with the release:  http://openelis.blogspot.com/2012/09/superstitious-behaviors-in.html

Lastly, for up-to-date write-ups on our work on OpenELIS, you can read the blog: http://openelis.blogspot.com and view our roadmap: http://tinyurl.com/OpenELISreleases.  You can also receive new blog postings automatically by email: just subscribe with your email address using the field titled “Follow By Email” on the right-hand side of the blog.  In addition, we’d love to hear what kind of information you would like us to write about on this blog, both to keep you informed and to gather your comments!

Tuesday, September 25, 2012

Superstitious behaviors in troubleshooting bugs!

Our users at Retro-CI alerted us to a bug that needed an urgent patch release from us.

The bug originally appeared as follows: when a user accepted CD3 and CD4 results automatically imported from an analyzer, only the CD4 results then appeared in the biological validation screen.  Normally, all accepted results should appear in the validation screen after acceptance by a technician.  When the user checked the manual results entry screen, the already accepted CD3 results appeared on that screen ready for results entry, but with the imported value already filled in.  To solve this problem and get the results to show up in validation, it initially appeared that the user needed to select a checkbox to flag the result as an "analyzer result".  Once this checkbox was checked for the rogue result, the result suddenly appeared on the biological validation screen.  Since checking the box led to the result appearing in the proper spot, the assumption was that checking the box was what fixed the problem with the result.

However, when an outcome is the desired result, it is easy to believe that the last action taken caused that result -- like a superstitious belief -- so you repeat that action again and again hoping for the same outcome.  Like throwing spilled salt over your shoulder to make good things happen in life, users will continue to interact with software in a certain manner believing that it makes the software behave in a certain way.  In troubleshooting bugs, it's also easy to fixate on some user action as the fix (or the cause) of an outcome, which can make it difficult to determine the true issue at hand.

In this case, we knew that the result getting stuck in results entry rather than moving to the validation step had something to do with the "state" of the test result.  As samples move through the workflow in OpenELIS, the software keeps track of the status, or state, of the sample, the tests ordered, and the results of those tests.  When a test result first enters the system, it has a state of "not accepted by technician".  After the technician accepts the analyzer import of the results, those results change to a state of "accepted by technician".  Only results with a state of "accepted by technician" will appear in validation.
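To make the idea concrete, here is a minimal sketch of that kind of state tracking, written in TypeScript purely for illustration -- the type, field, and function names below are our own assumptions, not the actual OpenELIS code:

```typescript
// Illustrative only: the status strings mirror the description above,
// but nothing here is taken from the real OpenELIS implementation.
type ResultStatus = "not accepted by technician" | "accepted by technician";

interface TestResult {
  testName: string;      // e.g. "CD3" or "CD4"
  value: string;         // imported from the analyzer or entered manually
  status: ResultStatus;  // determines which screens the result appears on
}

// Only results a technician has already accepted are offered up
// for biological validation.
function resultsForValidation(results: TestResult[]): TestResult[] {
  return results.filter(r => r.status === "accepted by technician");
}
```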

We could see that once a technician accepted the result, the state of the CD3 results did not change and still showed "not accepted by technician" - even though we knew for a fact that the result had been accepted.  We knew from the bug report that users believed the actual clicking of this checkbox was what changed the state to "accepted" so it would move into validation.  But after some troubleshooting, we realized that it was actually any change to the record within the results entry screen that triggered the state change.  Checking the checkbox, changing the result value, or editing anything else on that results entry screen would cause the record to be updated upon saving and thus trigger the state change.  So checking the checkbox was superstitious behavior: it really had nothing to do with the state change.

As an additional note from the troubleshooting, we realized the problem was not specific to CD3 results; users only saw it with CD3 because CD3 was always first on the list of results, and that was the actual bug.  Any result that was first on the list of results would not change state, so we fixed this as well.
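As a purely hypothetical illustration of how a "first item on the list" bug like this can creep in (this is not the actual OpenELIS code or its actual root cause), imagine a loop that marks results as accepted but accidentally starts at the second element:

```typescript
// Hypothetical sketch, reusing the TestResult type from the sketch above.
// An off-by-one loop that skips index 0 would leave the first result
// (CD3 in our case) stuck in "not accepted by technician" while all
// of the other results move on to validation.
function acceptAnalyzerResults(results: TestResult[]): void {
  for (let i = 1; i < results.length; i++) {  // BUG: should start at i = 0
    results[i].status = "accepted by technician";
  }
}
```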

And with that, we released OpenELIS Global v2.7.3.

Tuesday, September 11, 2012

Improving User Experience: The Audit Trail

In August, we finished initial work on a new OpenELIS feature: the Viewable Audit Trail. Jan wrote about the Audit Trail, explaining the impetus and the first phase of development. In short, it came out of discussions with our colleagues in Cote d'Ivoire about ways to better troubleshoot problems, both with samples and the OpenELIS system itself.

As part of an on-going process to improve the OpenELIS user experience, I've been thinking about how to best implement some features that will make the system easier to use and give users faster, more intuitive ways to get their work done. So as part of rolling out the Audit Trail, I took the opportunity to use it as a test case for some new features.

A picture can say a thousand words, so here's a quick comparison of the "old" and "new" versions:

Old Audit Trail

New Audit Trail
We've done a few things here:

  • Added summary information at the top: the order/accession number, the date it was created and the current status.
  • Just below that is a new row with tools for sorting through the audit trail. It shows the number of "entries" in the database and allows for filtering through the entries by searching or by selecting a category. Very useful for an order that may have hundreds of rows in the database.
  • The table itself has a new, more modern look and can be sorted. 
If a picture is worth a thousand words, surely a video is worth even more. Here's a short screencast that shows these improvements in action.


We know OpenELIS users have a lot to do. All of the user experience (UX) improvements are intended to make reaching the user's goal (in this case, understanding where a problem may have occurred) an easier, faster and, dare we say, more pleasurable experience. We hope that being able to parse through tables more quickly and getting better feedback from the system will lead to significant savings in time spent per order and a reduction in error rates.

We're making all this happen by employing some new and powerful software frameworks. The new Audit Trail UI relies on:
  • jQuery - an open-source Javascript library that has all sorts of tools for "client-side scripting". That simply means being able to do things on screen after the page is loaded. For example, changing the sorting order of a table.
  • DataTables - an open-source jQuery plug-in that handles the interactions with the HTML table (sorting, filtering, pagination); see the sketch just after this list.
  • Bootstrap - an open-source CSS framework from the good folks at Twitter. It's a flexible and efficient way to build UI elements in web apps that work across all major browsers.
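For anyone curious about the wiring, here is a minimal sketch of how DataTables can turn a plain HTML table into a sortable, filterable one. It is written in TypeScript and assumes the jquery and datatables.net npm packages (with their typings) are installed; the element id and the options shown are illustrative choices, not taken from the actual OpenELIS code:

```typescript
import $ from "jquery";
import "datatables.net"; // registers the DataTable() plug-in on jQuery objects

$(() => {
  // "#auditTrailTable" is a hypothetical id for the audit trail <table>.
  $("#auditTrailTable").DataTable({
    paging: false,         // keep all entries visible on one page
    order: [[0, "desc"]],  // newest entries first, sorted by the first column
    // The built-in search box gives the type-to-filter behaviour; a category
    // dropdown can be wired to column().search() for the category filter.
  });
});
```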
The updated Audit Trail will be part of the next set of OpenELIS releases. And the UI will soon be applied to other sections of OpenELIS.

As always, please let us know if you have any thoughts or questions.

- Mark