Friday, December 7, 2012

OpenELIS Global Release 2.8.1

The latest OpenELIS release, 2.8.1, is now available. This release includes the following:
  • Virology analyzer bug fix: Some workflows resulted in the analyzer DNA PCR test being canceled 
  • Updated ELISA algorithm (reflex test) for Serology: The Genie II, Western Blot, P24, and Bioline tests were deactivated and replaced by Innolia 
  • Viral Load formatting updated to match specifications from RetroCI 
  • Result Validation bug fix: Search by lab no. was not working in result validation if there was only one result for the lab no.
Please send us any feedback you may have about the release.

Friday, October 26, 2012

OpenELIS Global Release 2.8

The latest OpenELIS release, 2.8, is now available. The release addresses the following priorities and issues:
  • Test catalog updates, including reflex tests for CI LNSP, CI IPCI
  • Viewable audit trail for application events, such as by order, by user, etc.
  • Nonconformity reporting enhancements
  • Add referring site names/codes for drop-down selection on forms
  • Improved support for selecting between multiple reflex test algorithms, or for selecting a different test than the algorithm's default
  • Order view (consulter)
  • Test autoselection based on PNPEC directives 
  • Modify or view order through search by reception date
  • Reflex testing algorithms (ELISA)
  • Capture of clinical patient data and use in raw data export
  • Specific patient reports, non-conformity report, raw data export, and internal reports for CI LNSP
  • Enhance internal report with LNSP guidance
  • Support for Virtual Machine deployments and upgrades
  • Integrated automated backups
  • Additional analyzers mapped for automatic import of results
  • Bug fixes
Please send us any feedback you may have about the release.

Friday, September 28, 2012

OpenELIS Cote d'Ivoire Progress

I wanted to take a moment to update you on our progress on the priorities for this upcoming release of OpenELIS in Cote d’Ivoire.  For the release of OpenELIS 2.8, we are focused on finalizing functionality for the LNSP laboratory based on the validation testing and additional requirements we received from our trip in July.  In addition, we have been hard at work on the following high-priority items:

  • Audit Trail (traceability) of all steps through the workflow, status changes, and values for orders, samples, patients, tests, results, and reports

More on the specifics of the audit trail is written up in several blog posts here: (Your feedback to improve this feature is more than welcome!)

  • System stability improvements, including the elimination of the “it’s not your fault” issues, and an improved standard server build method that will drastically ease and improve the installation of OpenELIS
  • Dynamic selection of reflex test algorithms and tests within the algorithm
  • Non-conformity reporting and workflow improvements
  • Making calculated values display as text rather than as a data entry field
  • Additional analyzer interfaces
  • Updating test catalogs for LNSP and IPCI
  • Updating metadata for lookup values for LNSP and IPCI
  • Overall usability improvements
  • Text revisions/corrections and bug fixes

We have decided to delay the OpenELIS 2.8 release in order to give us additional time for testing the finalized features for this release, and to align it better with our plan for training in late October/early November.  We plan to release 2.8 at end of day on Friday, October 26.

In order to not delay urgent improvements though, we released OpenELIS 2.7.3 as an interim patch for Retro-CI specifically.  This release addressed the issue with CD3 results not appearing in biological validation after they were accepted by the lab technician.  I wrote a blog post about our troubleshooting of this issue and the resolution with the release:

Lastly, for up-to-date write-ups on our work on OpenELIS, you can read the blog and view our roadmap.  You can also receive any new postings to the blog automatically in your email.  Just subscribe with your email address using the field titled “Follow By Email” on the right-hand side of the screen on the blog.  In addition, we’d love to hear your feedback about what kind of information you would like us to write about on this blog to keep you informed and to get your comments!

Tuesday, September 25, 2012

Superstitious behaviors in troubleshooting bugs!

Our users at Retro-CI alerted us to a bug that needed an urgent release from us to patch.  

The bug originally appeared as follows: when a user accepted CD3 and CD4 results automatically imported from an analyzer, only the CD4 results then appeared in the biological validation screen.  Normally, all accepted results should appear in the validation screen after acceptance by a technician.   When the user checked the manual results entry screen, the already accepted CD3 results appeared on that screen ready for results entry, but with the imported value already filled in.  To solve this problem and get the results to show up in validation, it initially appeared that the user needed to select a checkbox to flag the result as "analyzer result".  Once this checkbox was checked for the rogue result, the result suddenly appeared on the biological validation screen.  Since checking this box led to the result appearing in the proper spot, the assumption was that this was the behavior that fixed the problem with the result.

However, when an outcome is the desired result, it is easy to believe the last action taken caused that result -- like superstitious beliefs -- so you repeat that action again and again hoping for the same outcome.  Like throwing spilled salt over your shoulder to make good things happen in life, users will continue to interact with software in a certain manner believing that it makes the software work in a certain way.  In troubleshooting bugs, it's also easy to fixate on some user action as the fix (or the cause) of an outcome, which can make it difficult to determine the true issue at hand.

In this case, we knew that the fact the result got stuck in results entry rather than moving to validation step had something to do with the "state" of the test result.  As samples move through the workflow in OpenELIS, the software keeps track of the status or state of the sample, the tests ordered, and the results of those tests in the workflow.  So when a test result first enters the system, it appears as a state of "not accepted by technician".  After the technician accepts the analyzer import of the results, those results change to a state of "accepted by technician".  Only those results with a state of "accepted by technician" will appear in validation.

We could see that once a technician accepted the result, the state of the CD3 result did not change and still showed "not accepted by technician" - even though we knew for a fact that it had been accepted. We knew from the bug report that users believed that clicking the checkbox was what changed the state to "accepted" so the result would move into validation.  But after some troubleshooting, we realized that it was actually any change to the record within the results entry screen that triggered the state change.  Checking the checkbox, changing the result value, or changing anything else on that results entry screen would cause the record to be updated upon saving and thus trigger the state change.  So it was superstitious behavior: the checkbox really had nothing to do with the state change.
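The state handling described above can be sketched roughly like this. This is a simplified model; the class, enum values, and method names are illustrative assumptions, not the actual OpenELIS code:

```java
// Simplified model of the result states described above; names are
// illustrative, not the actual OpenELIS code.
public class ResultRecord {
    public enum Status { NOT_ACCEPTED_BY_TECHNICIAN, ACCEPTED_BY_TECHNICIAN }

    private Status status = Status.NOT_ACCEPTED_BY_TECHNICIAN;

    public Status getStatus() {
        return status;
    }

    // The state change fired on ANY update to the record at save time --
    // checking the "analyzer result" checkbox, editing the value, anything --
    // which is why the checkbox only appeared to be the fix.
    public void onRecordSaved(boolean recordChanged) {
        if (recordChanged && status == Status.NOT_ACCEPTED_BY_TECHNICIAN) {
            status = Status.ACCEPTED_BY_TECHNICIAN;
        }
    }

    // Only accepted results appear on the biological validation screen.
    public boolean visibleInValidation() {
        return status == Status.ACCEPTED_BY_TECHNICIAN;
    }
}
```

In this model, saving the record without touching anything leaves the result stuck before validation, while editing any field (including the checkbox) moves it along, matching the behavior users observed.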

As an additional note from the troubleshooting, we realized the problem wasn't specific to CD3 results; users only saw it there because CD3 was always first on the list of results, and position on the list was the actual bug.  Any result that was first on the list of results would not change state, so we fixed this as well.

And with that, we released OpenELIS Global v2.7.3.

Tuesday, September 11, 2012

Improving User Experience: The Audit Trail

In August, we finished initial work on a new OpenELIS feature: the Viewable Audit Trail. Jan wrote about the Audit Trail, explaining the impetus and the first phase of development. In short, it came out of discussions with our colleagues in Cote d'Ivoire about ways to better troubleshoot problems, both with samples and the OpenELIS system itself.

As part of an on-going process to improve the OpenELIS user experience, I've been thinking about how to best implement some features that will make the system easier to use and give users faster, more intuitive ways to get their work done. So as part of rolling out the Audit Trail, I took the opportunity to use it as a test case for some new features.

A picture can say a thousand words, so here's a quick comparison of the "old" and "new" versions:

Old Audit Trail

New Audit Trail
We've done a few things here:

  • Added summary information at the top: the order/accession number, the date it was created and the current status.
  • Just below that is a new row with tools for sorting through the audit trail. It shows the number of "entries" in the database and allows for filtering the entries by searching or by selecting a category. Very useful for an order that may have hundreds of rows in the database.
  • The table itself has a new, more modern look and can be sorted. 
If a picture is worth a thousand words, surely a video is worth even more. Here's a short screencast that shows these improvements in action.

We know OpenELIS users have a lot to do. All the user experience (UX) improvements are intended to make reaching the user's goal (in this case, understanding where a problem may have occurred) an easier, faster and, dare we say, more pleasurable experience. We hope that being able to parse through tables more quickly and getting better feedback from the system will lead to significant savings in time spent per order and a reduction in error rates.

We're making all this happen by employing some new and powerful software frameworks. The new Audit Trail UI relies on:
  • jQuery - an open-source JavaScript library that has all sorts of tools for "client-side scripting". That simply means being able to do things on screen after the page is loaded. For example, changing the sorting order of a table.
  • DataTables - an open-source jQuery plug-in that handles interactions with the HTML table.
  • Bootstrap - an open-source CSS framework from the good folks at Twitter. It's a flexible and efficient way to build UI elements in web apps that work across all major browsers.
The updated Audit Trail will be part of the next set of OpenELIS releases. And the UI will soon be applied to other sections of OpenELIS.

As always, please let us know if you have any thoughts or questions.

- Mark

Friday, August 17, 2012

What makes things interesting - Dates

This will be the first in a series of blogs about some of the issues we run into when we are developing OpenELIS.

Before you read any further think of three different ways you can write the date of July 25th, 2012 for a French speaking audience.

Just to show off I'll give you four ways:

25-07-2012
25/07/2012
25 juil 2012
25 juillet 2012

These are all understandable by French speakers, or English speakers who know a little French (C'est moi). The problem is to make them understandable to OpenELIS.

We had a recent bug which was caused by a change in the date format of one of the analyzers we support and the fix was to have OpenELIS understand both the original format (25-07-2012) and the new format (25 juil 2012).

The programming language we use for OpenELIS, Java, has very good support for figuring out what the date is but you have to tell it what the format is first.  For instance, for the original format you would tell it that the format is the day followed by a dash followed by the month followed by a dash followed by a four digit year.  If you gave it 25/7/2012 it would not know what to do.

The format for 25-07-2012 is day-month-year
The format for 25 juil 2012 is day abbreviated month year

The first job is to figure out which date format has been sent by the analyzer.  There are a couple of different ways to do this, but the easiest is just to see if the date has a '-' in it: if it does, then it is the original format; if it does not, then it is the new format.

That was easy.  But it still doesn't work for the new format.

We know which format to use, but Java still can't understand '25 juil 2012'.  The reason is that Java thinks the French abbreviation for July is 'juil.', not 'juil'.  The extra '.' makes all the difference in the world.  That's not hard to fix: we can just add a '.' to the month and now Java is happy.  It will be able to handle the date which was causing the problem and everything should be good.  Until next March, that is: the abbreviation for that month is 'mars', with no '.' at the end.

Fortunately that is not a hard fix and we didn't have to wait until next March to discover that only some months have a '.' at the end of the abbreviation.
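The detection-plus-normalization approach described above can be sketched in Java. This is an illustrative sketch; the class and method names are hypothetical, not the actual OpenELIS code:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

// Hypothetical sketch of the two-format handling described in the post;
// names are illustrative, not the actual OpenELIS code.
public class AnalyzerDateParser {

    public static Date parse(String raw) {
        try {
            if (raw.contains("-")) {
                // Original analyzer format: day-month-year, e.g. 25-07-2012
                return new SimpleDateFormat("dd-MM-yyyy").parse(raw);
            }
            // New format: day, abbreviated French month, year, e.g. 25 juil 2012.
            // Java expects "juil." (with a trailing dot) for July, while some
            // months, such as "mars", take no dot -- so try the string as-is
            // first, then retry with a dot appended to the month token.
            SimpleDateFormat french = new SimpleDateFormat("dd MMM yyyy", Locale.FRENCH);
            try {
                return french.parse(raw);
            } catch (ParseException e) {
                String[] parts = raw.split(" ");
                return french.parse(parts[0] + " " + parts[1] + ". " + parts[2]);
            }
        } catch (ParseException e) {
            throw new IllegalArgumentException("Unrecognized analyzer date: " + raw, e);
        }
    }
}
```

Trying the unmodified string first means months whose abbreviation carries no trailing dot, like 'mars', still parse on the first attempt, and only the dotted abbreviations fall through to the retry.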

Dates are very hard to get right; they are one of those things that are relatively easy for humans and very hard for computers.

After all, we all know what 05/10/2012 means, except that I'm not going to tell you if it was for a French speaker or an English speaker.


Thursday, August 16, 2012

OpenELIS Global release 2.7.1

The latest OpenELIS release is now available for installation.  This release is version 2.7.1 (build 3200) and will most directly affect the Retro-CI implementation.  The release addresses the following priorities and issues:

Features and Updates

  • Updating the name in the menus of the Cobas Taqman DNA PCR and Viral Load analyzers
  • Creating the ability for lab techs to choose not to import test results, thus not creating a test request if it was run unnecessarily
  • Notifying the user when the analyzer import is not saved correctly
  • Functional prototype of the Audit Trail feature to allow administrators to view step by step status changes to each lab order, sample, and test request (See the blog post here)
  • New role to set explicit permissions to the Audit Trail feature

Bug Fixes

  • Correcting the lab no. LART32667 to not display continuously in the analyzer import screen
  • Correcting the transfer of lab results from the Cobas Taqman DNA PCR
  • Fixing the FACSCalibur import bug that required CD4 results to be entered manually
  • Fixing the bug that prevented the non-conformity report by lab section and reason from generating
Please send us any feedback you may have about the release.  We will follow up soon to outline our priorities for the next release.

Wednesday, August 15, 2012

Improving Error Messages to the User

Jan talked about the work we've been doing to better isolate the causes of the "Grey Screen of Death" — the error page that appears when there's a problem with the OpenELIS system. She mentioned the frustration with the vague "it's not your fault" language that appeared on the GSOD page.

Old "GSOD" error page in French:

Old "GSOD" error page in English:

The original intent was to assure users, particularly those without a lot of experience with such systems, that they didn't "cause" this problem, hence the "it's not your fault" language.

While that sounds good in theory, we learned that what's most important is to figure out what went wrong and get it fixed. By providing more context when someone sees a "GSOD" error, we can get better feedback on what happened. This can make the process of auditing what went wrong much easier for the on-site system administrator and, ultimately, the development team.

To that end, we've redesigned the "GSOD" page to grab details about the user's browser, path through the system, etc. We also tried to include clear action steps that are general enough to make sense at any lab that uses OpenELIS. We also made it a little more appealing to the eyes:

First off, you'll see that the error message now keeps the OpenELIS "header"—that section at the top of the page with the logo and the blue background. Compared to the old version that just had the gray error box on a blank page, this version may be a little less jarring to the user.

The top section lists steps that the user should take to help report this problem. At the bottom is system information that can easily be copied and pasted into an email or printed. Not shown in this screenshot is the listing for the previous page the user was on before they saw this error—this can be particularly helpful in understanding what happened.
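Assembling that system-information block might look something like this. This is an illustrative sketch; `ErrorReport` and its fields are hypothetical names, not the actual OpenELIS code:

```java
// Hypothetical sketch of building the copy-and-pasteable diagnostic block
// shown at the bottom of the redesigned error page.
public class ErrorReport {

    public static String diagnostics(String userAgent, String requestedPage,
                                     String previousPage, String serverTime) {
        StringBuilder sb = new StringBuilder();
        sb.append("User agent:    ").append(userAgent).append('\n');
        sb.append("Requested URL: ").append(requestedPage).append('\n');
        // The previous page is often the most useful clue for reconstructing
        // what the user was doing when the error occurred.
        sb.append("Previous page: ").append(previousPage).append('\n');
        sb.append("Server time:   ").append(serverTime).append('\n');
        return sb.toString();
    }
}
```

Keeping the block plain text, rather than a screenshot, is what makes it easy to paste into an email to the on-site system administrator.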

This is a start, but we're doing more work to make the (hopefully infrequent) encounter with system errors more efficient. We'll be adding a separate 404 "Page not found" error message that will have specific reporting instructions. We're also thinking of what, if any, part of the server error log could be included. While that might be helpful, security and privacy remain paramount - we don't want to expose any details that could be used to attack the system if an unauthorized user gained access.

If you have any questions or comments, let us know. And if you think of a better name for this new friendlier, more helpful page than "Grey Screen of Death," we'd be happy to use a term that's a little less morbid.

Friday, August 10, 2012

The Grey Screen of Death

In the previous post I mentioned that our team is focused on fixing a long-standing, ever-elusive bug that has consistently been very disruptive to end users (users only see the message "It's not your fault").  One of our implementations is consistently seeing an increase in this message, which can only be resolved by stopping the web application and restarting it.  Because this bug can completely disrupt the entire lab, and because up to this point we had been unable to figure out what was causing it, we gave it the name "the grey screen of death" - alluding to the "blue screen of death" that computer users see when their system crashes and they must restart it.

Current Grey Screen of Death message (in French):
The grey screen of death has been so elusive that up until now our troubleshooting process hasn't been able to determine what actually causes it - asking users what steps they took, looking at the logs, attempting to replicate the bug - none of it made sense or helped us solve it.  All we knew was that this error was supposed to appear when there was an uncaught exception in the system (for the technical folks - we put this in place so stack traces never printed to the browser).  Theoretically, the user should have never seen it.  Unfortunately, in our development priorities, we never found time to improve our error pages to give the user more information to help us help them.

This week we finally found some clues and have put in fixes for what we think is causing the problem. First, we figured out that this error appears for both uncaught exceptions (system errors) and 404 page errors.  So we've separated those out to be two different error screens.
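In a Java web application, separating the two cases is typically configured in the deployment descriptor. A sketch of what that separation might look like follows; the JSP paths are hypothetical, not the actual OpenELIS configuration:

```xml
<!-- Hypothetical web.xml fragment: route 404s and uncaught exceptions
     to two different error screens. The locations are illustrative. -->
<error-page>
    <error-code>404</error-code>
    <location>/pageNotFoundError.jsp</location>
</error-page>
<error-page>
    <exception-type>java.lang.Throwable</exception-type>
    <location>/unexpectedError.jsp</location>
</error-page>
```

With the two cases split this way, a mistyped URL no longer looks identical to a genuine system failure, which makes user reports far easier to triage.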

For the uncaught exceptions, the problem appears to come from two different workflows:

  1. Lab test results are automatically imported from an analyzer after a sample was marked non-conforming.  In other words, no sample entry was performed in the system, but a non-conformity event was entered for the sample.  When the results are saved, the user is told the save was successful, but it wasn't; the analyzer result reappeared on the screen.
  2. Prior to sample entry, the analyzer imports of two different sample types are accepted. When the 2nd import is saved, it causes the grey screen of death.  This particular workflow puts the application into an unstable state that causes all of the users in the system to experience the grey screen of death until the system is restarted.  For the technical folks: this is due to Hibernate not knowing what to do with a transient object.  When it tries to flush the cache, an exception is thrown and the object remains in the cache.  The system is unable to remove it from the cache, so every Hibernate action continues to cause exceptions.
The reason it has been so difficult to troubleshoot is that the logs showed us events well after the cause, because the whole application had become unstable.
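The poisoned-cache mechanism can be modeled in plain Java, without Hibernate. This is an illustrative sketch of the failure mode only, not actual Hibernate internals; "transient" here stands for an object the persistence layer does not know how to save:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java model of the poisoned-cache failure mode described above;
// not actual Hibernate code.
public class SessionCacheModel {

    public static class TransientObject { }

    private final List<Object> cache = new ArrayList<>();

    public void queue(Object entity) {
        cache.add(entity);
    }

    public void flush() {
        for (Object entity : cache) {
            if (entity instanceof TransientObject) {
                // The flush fails, but the unsaveable object stays in the
                // cache -- so the NEXT flush fails the same way, for every
                // user, until the application (and its cache) is restarted.
                throw new IllegalStateException("unsaved transient object in session cache");
            }
        }
        cache.clear(); // a successful flush empties the cache
    }
}
```

The key property the model captures is that the first failure is not self-healing: the bad object survives the failed flush, so every later operation that triggers a flush inherits the same exception.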

Along with determining the cause of the bug, our users adamantly told us that they needed better feedback from the screen when they receive any type of error.  Simply stating "It's not your fault" only frustrated them because they wanted to know what was happening to the system and how to get help to solve it.  So we spent time improving this.  I will talk about those improvements in the next post.

Thursday, August 9, 2012

Viewable Audit Trail

In our current development iteration towards an unscheduled release of OpenELIS 2.7.1 and scheduled release 2.8, we have focused development over the last couple of weeks on two very important scopes of work:

  • Creating a viewable audit trail of the order and sample/specimen events 
  • Fixing a long-standing ever-elusive bug that has consistently been very disruptive to end users (users only see the message "It's not your fault")

Viewable Audit Trail
We've known for a while that it would be helpful for administrators and users to be able to view all of the events that have occurred for patient samples.  In January we started talking with our colleagues in Cote d'Ivoire about how this feature would assist them in troubleshooting any issues that cropped up during use of the software.  After our most recent trip last month, it was clear that this was the most important feature to add to OpenELIS next.  Many times users will report a problem with the software, but the system support team has no way of determining whether a system error actually caused the problem, or whether the state of the order/sample/test is simply different than the user expected because of the way the user interacted with the system.  We were told by users in the lab that these types of things cause the majority of their extra work, so we hope that this feature will provide a big reduction in that workload.

First phase of development for 2.7.1
The first phase of development includes the following: 

  • Researching all of the events that should occur and making certain that they are being added to the log in the database.  Events for orders, samples, and tests should include the following:
    • Order creation (Sample Entry)
    • Patient creation (Patient Entry)*
    • Any changes to the patient information (demographics, medical history, etc.) *
    • Status changes for an order (created, testing started, testing finished)
    • Adding or removing samples from an order *
    • Adding or canceling test requests from a sample in an order *
    • Status changes for a test request (not started, test canceled, technical acceptance/rejection, biologist acceptance/rejection, result finalized)
    • Nonconformity events for orders or for samples
    • Addition of notes *
    • Referrals to other laboratories and the receipt of those results *
    • Patient reports generation
  • Developing the code to access all of the events currently being captured
  • Developing the code to group these events into a "trail" that would make sense to a user
  • Developing the basic user interface to view the audit trail
[* indicates those events that are not currently being tracked and/or displayed in the audit trail for this release.]
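An entry in such an event log might look roughly like this. The field names are assumptions for illustration, not the actual OpenELIS schema; the point is that capturing both the old and the new value is what lets a readable trail be reconstructed later:

```java
import java.time.Instant;

// Illustrative audit-trail entry; field names are assumptions, not the
// actual OpenELIS database schema.
public class AuditEvent {
    public final Instant timestamp;
    public final String user;       // who performed the action
    public final String entityType; // e.g. "order", "sample", "test result"
    public final String entityId;   // e.g. the accession number
    public final String event;      // e.g. "status change"
    public final String oldValue;   // value before the change
    public final String newValue;   // value after the change

    public AuditEvent(Instant timestamp, String user, String entityType,
                      String entityId, String event,
                      String oldValue, String newValue) {
        this.timestamp = timestamp;
        this.user = user;
        this.entityType = entityType;
        this.entityId = entityId;
        this.event = event;
        this.oldValue = oldValue;
        this.newValue = newValue;
    }

    // A human-readable line for the viewable trail.
    public String describe() {
        return String.format("%s %s: %s from '%s' to '%s' by %s",
                entityType, entityId, event, oldValue, newValue, user);
    }
}
```

Grouping rows like these by entity and sorting by timestamp yields the "trail" a user can follow step by step.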

What we found -

  • We unfortunately found that not all of the events were being captured!  So we are fixing this.
  • For some of the update events, only the old values were being captured - not the new updated values.  It is not very helpful to have only the old values, so we are fixing this for those events already being recorded.  We won't be able to reconstruct the data for previous events, but going forward this will work correctly.
  • We discovered we need a role that allows system administrators to configure which users have permission to view the audit trail.  Each installation may not want all users to be able to view it.  We implemented this and simply called the new permission 'Audit Trail'.
Current view today on our development server:

Next phase of development for 2.8

  • Coding for the events that are not currently being captured or displayed - those events in the list above marked with an asterisk (*)
  • Improving the user interface to make the trail more readable to the user, including making the table sortable and allowing filtering by type of event
  • Possibly some of the feedback we receive from the first users of 2.7.1 (Retro-CI!)

Future development for X.X?
Eventually we want to implement this as part of the QA dashboard and include reports that might indicate potential issues in the events.  Secondly, we want to display tracking of data exchanges between systems, such as aggregate reports between clinical and reference laboratories, patient demographic queries to a master patient index or other system, electronic results reporting, etc.

But that's in the future!

Tuesday, May 15, 2012

Malaria Support

We are pleased to announce that iSanté and OpenELIS have been updated to now support Malaria care & treatment and Malaria surveillance reporting.  The specific support available within the software includes the following:
  • Malaria data elements added to the primary care and OBGYN forms in iSanté
  • Malaria reports available in iSanté, including prevalence, tested, and treated counts in primary care, and counts of pregnant women tested, diagnosed, and treated
  • Updated ART eligibility criteria in iSanté to match the national care directives
  • Malaria tests added to the OpenELIS test catalog
  • Electronic exchange of Malaria test results from OpenELIS to iSanté
  • Malaria testing surveillance report in OpenELIS of both positive and negative test result counts for a specified time period – in XML format to be delivered electronically
  • Malaria case surveillance reporting in OpenELIS of all positive test results – electronic delivery in HL7 CDA format
  • Bug fixes in OpenELIS (details in the change document linked below)
  • Administration update in iSanté to show the Linux console upon boot
  • Updated version numbering scheme for iSanté (details in the change document linked below)
Change document for the iSanté release is at

We’re excited about this new release of iSanté and OpenELIS, and are thankful for all of the energy invested by you and our partners for this release. We look forward to any feedback you may have.


What a boring first post...!

This blog came into being to bring together announcements about three lines of activity: three different implementations of OpenELIS Global in three countries.


Cote d'Ivoire: